Sample records for "commands control robot"

  1. Robot Task Commander with Extensible Programming Environment

    NASA Technical Reports Server (NTRS)

    Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.
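
    As a rough illustration of the block-and-connection model the abstract describes, the sketch below assembles a two-block task and assigns data to input and output connections; the class and function names (LibraryBlock, connect, send_over_ntl) are invented for illustration and are not the patent's API.

      # Hypothetical sketch of a VPL-style task built from library blocks.
      class LibraryBlock:
          def __init__(self, name):
              self.name = name
              self.inputs = {}    # input connection -> (source block, output port)

          def connect(self, in_port, src_block, out_port):
              # Assign data to an input connection from another block's output.
              self.inputs[in_port] = (src_block, out_port)

      move = LibraryBlock("move_to_pose")
      grasp = LibraryBlock("grasp_object")
      grasp.connect("target_pose", move, "achieved_pose")   # wire output to input

      task_sequence = [move, grasp]
      # send_over_ntl(task_sequence)  # ship the task to the control module over the NTL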

  2. Towards Human-Friendly Efficient Control of Multi-Robot Teams

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Theodoridis, Theodoros; Barrero, David F.; Hu, Huosheng; McDonald-Maier, Klaus

    2013-01-01

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse, in the shortest time, an unknown terrain. In the experiments the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends on both the level of the robots' autonomy and the ability of the operator to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchy of autonomy characteristic of groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA: (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.
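
    The three G-LOA strategies (a)-(c) amount to different mappings from one operator gesture to one or more UGVL commands. The toy table below illustrates the idea; the gesture names and UGVL tokens are invented, not the paper's vocabulary.

      # Illustrative gesture -> UGVL mapping under three G-LOA strategies.
      GESTURE_TO_UGVL = {
          ("point_left", "individual"): ["UGV1 TURN LEFT"],
          ("point_left", "leader"):     ["LEADER TURN LEFT", "FOLLOWERS CLONE LEADER"],
          ("point_left", "group"):      ["GROUP TURN LEFT"],
          ("fist",       "group"):      ["GROUP HALT"],
      }

      def translate(gesture, g_loa):
          return GESTURE_TO_UGVL.get((gesture, g_loa), ["NOOP"])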

  3. Effect of motor dynamics on nonlinear feedback robot arm control

    NASA Technical Reports Server (NTRS)

    Tarn, Tzyh-Jong; Li, Zuofeng; Bejczy, Antal K.; Yun, Xiaoping

    1991-01-01

    A nonlinear feedback robot controller that incorporates the robot manipulator dynamics and the robot joint motor dynamics is proposed. The manipulator dynamics and the motor dynamics are coupled to obtain a third-order dynamic model, and differential geometric control theory is applied to produce a linearized and decoupled robot controller. The derived robot controller operates in the robot task space, thus eliminating the need for decomposition of motion commands into robot joint space commands. Computer simulations are performed to verify the feasibility of the proposed robot controller. The controller is further experimentally evaluated on the PUMA 560 robot arm. The experiments show that the proposed controller produces good trajectory tracking performance and is robust in the presence of model inaccuracies. Compared with a nonlinear feedback robot controller based on the manipulator dynamics only, the proposed robot controller yields conspicuously improved performance.
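
    Schematically, the coupling works as follows; this is a generic sketch assuming simplified first-order motor electrical dynamics, not the paper's exact model.

      \begin{align*}
        M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) &= \tau && \text{(manipulator)}\\
        L\,\dot{\tau} + R\,\tau &= u && \text{(joint motors)}
      \end{align*}
      % Differentiating the first equation and eliminating $\tau$ yields a
      % third-order system $\dddot{q} = f(q,\dot{q},\ddot{q}) + G(q)\,u$; the
      % nonlinear feedback $u = G(q)^{-1}\bigl(v - f(q,\dot{q},\ddot{q})\bigr)$
      % then linearizes and decouples the closed loop, $\dddot{q} = v$.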

  4. Redundant arm control in a supervisory and shared control system

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Long, Mark K.

    1992-01-01

    The Extended Task Space Control approach to robotic operations based on manipulator behaviors derived from task requirements is described. No differentiation between redundant and non-redundant robots is made at the task level. The manipulation task behaviors are combined into a single set of motion commands. The manipulator kinematics are used subsequently in mapping motion commands into actuator commands. Extended Task Space Control is applied to a Robotics Research K-1207 seven degree-of-freedom manipulator in a supervisory telerobot system as an example.
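
    A minimal sketch of the idea that task-level commands need not distinguish redundant from non-redundant arms: the kinematic mapping happens only at the last step, and a pseudoinverse handles either arm geometry. The numbers and the 6x7 Jacobian are illustrative assumptions.

      import numpy as np

      # Map a combined task-space motion command to actuator (joint) rates.
      # For a redundant arm, the Moore-Penrose pseudoinverse returns the
      # minimum-norm joint motion, so the same code serves 6-DOF and 7-DOF arms.
      def joint_rates(jacobian, task_velocity):
          return np.linalg.pinv(jacobian) @ task_velocity

      J = np.random.rand(6, 7)                    # 6-D task, 7-DOF arm (e.g., a K-1207)
      v_task = np.array([0.05, 0, 0, 0, 0, 0])    # 5 cm/s along x, no rotation
      qdot = joint_rates(J, v_task)               # seven actuator rate commands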

  5. Generic command interpreter for robot controllers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, J.

    1991-04-09

    Generic command interpreter programs have been written for robot controllers at Sandia National Laboratories (SNL). Each interpreter program resides on a robot controller and interfaces the controller with a supervisory program on another (host) computer. We call these interpreter programs monitors because they wait, monitoring a communication line, for commands from the supervisory program. These monitors are designed to interface with the object-oriented software structure of the supervisory programs. The functions of the monitor programs are written in each robot controller's native language but reflect the object-oriented functions of the supervisory programs. These functions and other specifics of the monitor programs written for three different robots at SNL will be discussed. 4 refs., 4 figs.
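
    A minimal sketch of such a monitor loop, assuming a serial link and invented command names; the SNL monitors were written in each controller's native language, so this Python version is purely illustrative.

      import serial  # pyserial

      # Dispatch table mirroring the supervisory program's object-oriented calls.
      HANDLERS = {
          "MOVE":   lambda args: print("moving to", args),
          "GRIP":   lambda args: print("gripper ->", args),
          "STATUS": lambda args: print("OK"),
      }

      def monitor(port="/dev/ttyS0"):
          link = serial.Serial(port, 9600, timeout=1)
          while True:                            # wait, monitoring the line
              cmd = link.readline().decode().strip()
              if not cmd:
                  continue
              verb, *args = cmd.split()
              HANDLERS.get(verb, lambda a: print("ERR unknown", verb))(args)
              link.write(b"ACK\n")               # reply to the supervisory program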

  6. TRICCS: A proposed teleoperator/robot integrated command and control system for space applications

    NASA Technical Reports Server (NTRS)

    Will, R. W.

    1985-01-01

    Robotic systems will play an increasingly important role in space operations. An integrated command and control system based on the requirements of space-related applications and incorporating features necessary for the evolution of advanced goal-directed robotic systems is described. These features include: interaction with a world model or domain knowledge base, sensor feedback, multiple-arm capability and concurrent operations. The system makes maximum use of manual interaction at all levels for debug, monitoring, and operational reliability. It is shown that the robotic command and control system may most advantageously be implemented as packages and tasks in Ada.

  7. Fast Grasp Contact Computation for a Serial Robot

    NASA Technical Reports Server (NTRS)

    Hargrave, Brian (Inventor); Shi, Jianying (Inventor); Diftler, Myron A. (Inventor)

    2015-01-01

    A system includes a controller and a serial robot having links that are interconnected by a joint, wherein the robot can grasp a three-dimensional (3D) object in response to a commanded grasp pose. The controller receives input information, including the commanded grasp pose, a first set of information describing the kinematics of the robot, and a second set of information describing the position of the object to be grasped. The controller also calculates, in a two-dimensional (2D) plane, a set of contact points between the serial robot and a surface of the 3D object needed for the serial robot to achieve the commanded grasp pose. A required joint angle is then calculated in the 2D plane between the pair of links using the set of contact points. A control action is then executed with respect to the motion of the serial robot using the required joint angle.
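
    One way the 2D joint-angle step could look, treating the two contact points as the far ends of a two-link pair and applying the law of cosines; this is an illustration of the geometry, not the patent's algorithm.

      import math

      def joint_angle(p1, p2, l1, l2):
          """Angle of the joint between links of lengths l1, l2 whose far ends
          touch the object at contact points p1 and p2 (all in the 2D plane)."""
          d = math.dist(p1, p2)                  # chord between the contact points
          interior = math.acos((l1**2 + l2**2 - d**2) / (2 * l1 * l2))
          return math.pi - interior              # flexion measured from straight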

  8. Design of multifunction anti-terrorism robotic system based on police dog

    NASA Astrophysics Data System (ADS)

    You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie

    2007-11-01

    To address typical limitations of the police dogs and robots currently used for reconnaissance and counter-terrorism, a multifunction anti-terrorism robotic system based on a police dog is introduced. The system is made up of two parts: a portable commanding device and the police dog robotic system. The portable commanding device consists of a power supply module, microprocessor module, LCD display module, wireless data receiving and dispatching module, and commanding module; it implements remote control of the police dog and monitors video and images in real time. The police dog robotic system consists of a microprocessor module, micro video module, wireless data transmission module, power supply module, and offensive weapon module; it collects and transmits video and image data from counter-terrorism sites in real time and launches attacks on command. The system combines the police dog's biological intelligence with a micro robot. Not only does it avoid the complexity of a conventional anti-terrorism robot's mechanical structure and control algorithm, but it also widens the working scope of the police dog, meeting the requirements of anti-terrorism in the new era.

  9. Squad-Level Soldier-Robot Dynamics: Exploring Future Concepts Involving Intelligent Autonomous Robots

    DTIC Science & Technology

    2015-02-01

    unanimous for the run and duck commands as other commands commonly used. The verbal commands surveyed, as well as other suggested verbal commands that...stop, and duck. Additional verbal commands suggested were shut down, follow, destroy, status, and move out. The verbal commands surveyed and the...identify the verbal commands you would use to control the squad and the ASM: Halt (yes 9, no 3); Stop (yes 9, no 3); Move (yes 11, no 1); Run (yes 7, no 5); Duck (yes 6, no 6); Other ...

  10. Overcoming Robot-Arm Joint Singularities

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Houck, J. A.

    1986-01-01

    Kinematic equations allow arm to pass smoothly through singular region. Report discusses mathematical singularities in equations of robot-arm control. Operator commands robot arm to move in direction relative to its own axis system by specifying velocity in that direction. Velocity command then resolved into individual-joint rotational velocities in robot arm to effect motion. However, usual resolved-rate equations become singular when robot arm is straightened.
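
    The singularity in question comes from the resolved-rate relation itself; schematically:

      % A commanded hand velocity \dot{x} is resolved into joint rates through
      % the manipulator Jacobian:
      \dot{x} = J(\theta)\,\dot{\theta}
      \quad\Longrightarrow\quad
      \dot{\theta} = J(\theta)^{-1}\,\dot{x}.
      % When the arm straightens, $J$ loses rank ($\det J \to 0$) and the
      % commanded joint rates grow unbounded; the report's modified kinematic
      % equations let the arm pass smoothly through this region.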

  11. Design and experimental validation of a simple controller for a multi-segment magnetic crawler robot

    NASA Astrophysics Data System (ADS)

    Kelley, Leah; Ostovari, Saam; Burmeister, Aaron B.; Talke, Kurt A.; Pezeshkian, Narek; Rahimi, Amin; Hart, Abraham B.; Nguyen, Hoa G.

    2015-05-01

    A novel, multi-segmented magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative motion connect front and rear driving modules, so the robot can twist and turn. This permits its navigation over surface discontinuities while maintaining its adhesion to the hull. During operation, the magnetic crawler receives forward and turning velocity commands from either a tele-operator or high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the development of a simple, low-level, leader-follower controller that permits the rear module to follow the front module. The kinematics and dynamics of the two-module magnetic crawler robot are described. The robot's geometry, kinematic constraints and the user-commanded velocities are used to calculate the desired instantaneous center of rotation and the corresponding central-linkage angle necessary for the back module to follow the front module when turning. The commands to the rear driving motors are determined by applying PID control on the error between the desired and measured linkage angle position. The controller is designed and tested using Matlab Simulink. It is then implemented and tested on an early two-module magnetic crawler prototype robot. Results of the simulations and experimental validation of the controller design are presented.
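
    A sketch of that low-level loop is below. The linkage geometry (D_FRONT, D_REAR) and the angle formula are illustrative assumptions standing in for the paper's kinematics; the PID structure matches the description.

      import math

      D_FRONT, D_REAR = 0.20, 0.20      # module center to linkage joint [m], assumed

      def desired_linkage_angle(v, w):
          """Central-linkage angle for the rear module to track the front one,
          given the user-commanded forward speed v and turn rate w."""
          if abs(w) < 1e-6:
              return 0.0                # driving straight: modules stay aligned
          r_icr = v / w                 # radius of the instantaneous center of rotation
          return math.atan2(D_FRONT, r_icr) + math.atan2(D_REAR, r_icr)

      class LinkagePID:
          def __init__(self, kp=4.0, ki=0.5, kd=0.1):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integ = self.prev_err = 0.0

          def step(self, measured_angle, v, w, dt):
              # PID on the error between desired and measured linkage angle.
              err = desired_linkage_angle(v, w) - measured_angle
              self.integ += err * dt
              deriv = (err - self.prev_err) / dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integ + self.kd * deriv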

  12. INL Multi-Robot Control Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot’s condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.

  13. Survey of Command Execution Systems for NASA Spacecraft and Robots

    NASA Technical Reports Server (NTRS)

    Verma, Vandi; Jonsson, Ari; Simmons, Reid; Estlin, Tara; Levinson, Rich

    2005-01-01

    NASA spacecraft and robots operate at long distances from Earth. Command sequences generated manually, or by automated planners on Earth, must eventually be executed autonomously onboard the spacecraft or robot. Software systems that execute commands onboard are known variously as execution systems, virtual machines, or sequence engines. Every robotic system requires some sort of execution system, but the level of autonomy and type of control they are designed for varies greatly. This paper presents a survey of execution systems with a focus on systems relevant to NASA missions.

  14. Laboratory testing of candidate robotic applications for space

    NASA Technical Reports Server (NTRS)

    Purves, R. B.

    1987-01-01

    Robots have potential for increasing the value of man's presence in space. Some categories with potential benefit are: (1) performing extravehicular tasks like satellite and station servicing, (2) supporting the science mission of the station by manipulating experiment tasks, and (3) performing intravehicular activities which would be boring, tedious, exacting, or otherwise unpleasant for astronauts. An important issue in space robotics is selection of an appropriate level of autonomy. In broad terms three levels of autonomy can be defined: (1) teleoperated - an operator explicitly controls robot movement; (2) telerobotic - an operator controls the robot directly, but by high-level commands, without, for example, detailed control of trajectories; and (3) autonomous - an operator supplies a single high-level command, the robot does all necessary task sequencing and planning to satisfy the command. Researchers chose three projects for their exploration of technology and implementation issues in space robots, one each of the three application areas, each with a different level of autonomy. The projects were: (1) satellite servicing - teleoperated; (2) laboratory assistant - telerobotic; and (3) on-orbit inventory manager - autonomous. These projects are described and some results of testing are summarized.

  15. Remotely controlling of mobile robots using gesture captured by the Kinect and recognized by machine learning method

    NASA Astrophysics Data System (ADS)

    Hsu, Roy Chaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting

    2013-01-01

    The main purpose of this paper is to use a machine learning method together with the Kinect and its body sensation technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot to perform certain actions via Bluetooth. The experimental results show that the designed mobile robot remote control system achieves, on average, more than 96% accurate identification of 7 types of gestures and can effectively control a real e-puck robot using the designed commands.
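
    A compact sketch of this pipeline, with assumed layer sizes, command names, and feature layout (the paper's exact network configuration is not reproduced here):

      import numpy as np
      from sklearn.neural_network import MLPClassifier  # back-propagation-trained MLP

      COMMANDS = ["forward", "back", "left", "right", "stop", "spin", "beep"]

      def features(skeleton_xyz):
          # Flatten Kinect skeleton joints (e.g., 20 x (x, y, depth)) into one vector.
          return np.asarray(skeleton_xyz).reshape(-1)

      clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000)
      # clf.fit(X_train, y_train)   # X: gesture feature vectors, y: 7 gesture labels
      # label = clf.predict([features(frame)])[0]
      # bluetooth_send(COMMANDS[label])   # hypothetical Bluetooth link to the e-puck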

  16. Compliance control with embedded neural elements

    NASA Technical Reports Server (NTRS)

    Venkataraman, S. T.; Gulati, S.

    1992-01-01

    The authors discuss a control approach that embeds the neural elements within a model-based compliant control architecture for robotic tasks that involve contact with unstructured environments. Compliance control experiments have been performed on actual robotics hardware to demonstrate the performance of contact control schemes with neural elements. System parameters were identified under the assumption that environment dynamics have a fixed nonlinear structure. A robotics research arm, placed in contact with a single degree-of-freedom electromechanical environment dynamics emulator, was commanded to move through a desired trajectory. The command was implemented by using a compliant control strategy.
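
    For reference, a generic model-based compliance (impedance) relation of the kind such architectures regulate; the paper's neural elements, which identify fixed-structure nonlinear environment dynamics, are only indicated in the comment.

      % Desired compliant behavior about the reference trajectory x_d:
      M_d\,\ddot{\tilde{x}} + B_d\,\dot{\tilde{x}} + K_d\,\tilde{x} = f_{\text{contact}},
      \qquad \tilde{x} = x - x_d,
      % with the contact force predicted from environment dynamics whose
      % fixed nonlinear structure is identified by the embedded neural elements.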

  17. Torque Control of Underactuated Tendon-driven Robotic Fingers

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Wampler, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor); Reiland, Matthew J. (Inventor); Diftler, Myron A. (Inventor); Bridgwater, Lyndon (Inventor); Platt, Robert (Inventor)

    2013-01-01

    A robotic system includes a robot having a total number of degrees of freedom (DOF) equal to at least n, and an underactuated tendon-driven finger driven by n tendons and n DOF, the finger having at least two joints and, in one embodiment, characterized by an asymmetrical joint radius. A controller is in communication with the robot and controls actuation of the tendon-driven finger using force control. Operating the finger with force control on the tendons, rather than position control, eliminates the unconstrained slack-space that would otherwise exist. The controller may utilize the asymmetrical joint radii to independently command joint torques. A method of controlling the finger includes commanding either independent or parameterized joint torques to the controller to actuate the fingers via force control on the tendons.
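
    Schematically, the role of the asymmetric radii can be seen from the tendon map (a generic sketch, not the patent's notation):

      % With tendon tensions f (kept taut by force control, so no slack space)
      % and a matrix R of signed joint moment arms, the joint torques are
      \tau = R\,f .
      % Symmetric radii make the rows of $R$ linearly dependent, so torques
      % cannot be commanded independently; asymmetric joint radii render $R$
      % invertible, giving a unique tension command $f = R^{-1}\tau$ for any
      % desired torque vector $\tau$.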

  18. Research into command, control, and communications in space construction

    NASA Technical Reports Server (NTRS)

    Davis, Randal

    1990-01-01

    Coordinating and controlling large numbers of autonomous or semi-autonomous robot elements in a space construction activity will present problems that are very different from most command and control problems encountered in the space business. As part of our research into the feasibility of robot constructors in space, the CSC Operations Group is examining a variety of command, control, and communications (C3) issues. Two major questions being asked are: can we apply C3 techniques and technologies already developed for use in space; and are there suitable terrestrial solutions for extraterrestrial C3 problems? An overview of the control architectures, command strategies, and communications technologies that we are examining is provided and plans for simulations and demonstrations of our concepts are described.

  19. Translational control of a graphically simulated robot arm by kinematic rate equations that overcome elbow joint singularity

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Houck, J. A.; Carzoo, S. W.

    1984-01-01

    An operator commands a robot hand to move in a certain direction relative to its own axis system by specifying a velocity in that direction. This velocity command is then resolved into individual joint rotational velocities in the robot arm to effect the motion. However, the usual resolved-rate equations become singular when the robot arm is straightened. To overcome this elbow joint singularity, equations were developed which allow continued translational control of the robot hand even though the robot arm is (or is nearly) fully extended. A feature of the equations near full arm extension is that an operator simply extends and retracts the robot arm to reverse the direction of the elbow bend (a difficult maneuver with the usual resolved-rate equations). Results show successful movement of a graphically simulated robot arm.
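
    Damped least-squares is one standard, generic way to keep resolved-rate commands finite near full extension; it is shown here only as a point of comparison, not as the equations developed in this report.

      import numpy as np

      def singularity_robust_rates(J, xdot, damping=0.05):
          # qdot = J^T (J J^T + lambda^2 I)^(-1) xdot: bounded even as J drops rank.
          JJt = J @ J.T
          reg = (damping ** 2) * np.eye(JJt.shape[0])
          return J.T @ np.linalg.solve(JJt + reg, xdot)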

  20. A learning controller for nonrepetitive robotic operation

    NASA Technical Reports Server (NTRS)

    Miller, W. T., III

    1987-01-01

    A practical learning control system is described which is applicable to complex robotic and telerobotic systems involving multiple feedback sensors and multiple command variables. In the controller, the learning algorithm is used to learn to reproduce the nonlinear relationship between the sensor outputs and the system command variables over particular regions of the system state space, rather than learning the actuator commands required to perform a specific task. The learned information is used to predict the command signals required to produce desired changes in the sensor outputs. The desired sensor output changes may result from automatic trajectory planning or may be derived from interactive input from a human operator. The learning controller requires no a priori knowledge of the relationships between the sensor outputs and the command variables. The algorithm is well suited for real time implementation, requiring only fixed point addition and logical operations. The results of learning experiments using a General Electric P-5 manipulator interfaced to a VAX-11/730 computer are presented. These experiments involved interactive operator control, via joysticks, of the position and orientation of an object in the field of view of a video camera mounted on the end of the robot arm.
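
    The description (table lookup over regions of the state space, fixed-point adds and logical operations) matches a CMAC-style learner; the sketch below shows the general technique with invented sizes and hashing, using floats for brevity where the original used fixed-point arithmetic.

      import numpy as np

      N_TABLES, TABLE_SIZE, LR = 8, 4096, 0.1
      weights = np.zeros((N_TABLES, TABLE_SIZE))

      def addresses(state):
          # Coarse coding: each table quantizes the integer state with its own offset.
          return [hash((t, tuple((s + t) // N_TABLES for s in state))) % TABLE_SIZE
                  for t in range(N_TABLES)]

      def predict(state):
          # Predicted command signal for the desired sensor-output change.
          return sum(weights[t, a] for t, a in enumerate(addresses(state)))

      def train(state, observed_command):
          # Nudge the addressed entries toward the observed command signal.
          err = observed_command - predict(state)
          for t, a in enumerate(addresses(state)):
              weights[t, a] += LR * err / N_TABLES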
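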

  21. Exact nonlinear command generation and tracking for robot manipulators and spacecraft slewing maneuvers

    NASA Technical Reports Server (NTRS)

    Dwyer, T. A. W., III; Lee, G. K. F.

    1984-01-01

    In connection with the current interest in agile spacecraft maneuvers, it has become necessary to consider the nonlinear coupling effects of multiaxial rotation in the treatment of command generation and tracking problems. Multiaxial maneuvers will be required in military missions involving a fast acquisition of moving targets in space. In addition, such maneuvers are also needed for the efficient operation of robot manipulators. Attention is given to details regarding the direct nonlinear command generation and tracking, an approach which has been successfully applied to the design of control systems for V/STOL aircraft, linearizing transformations for spacecraft controlled with external thrusters, the case of flexible spacecraft dynamics, examples from robot dynamics, and problems of implementation and testing.

  22. Finite State Machine with Adaptive Electromyogram (EMG) Feature Extraction to Drive Meal Assistance Robot

    NASA Astrophysics Data System (ADS)

    Zhang, Xiu; Wang, Xingyu; Wang, Bei; Sugi, Takenao; Nakamura, Masatoshi

    Surface electromyogram (EMG) from the elbow, wrist and hand has been widely used as an input to multifunction prostheses for many years. However, for patients with high-level limb deficiencies, muscle activities in the upper limbs are not strong enough to be used as control signals. In this paper, EMG from the lower limbs is acquired and applied to drive a meal assistance robot. An onset detection method with an adaptive threshold based on EMG power is proposed to recognize different muscle contractions. Predefined control commands are output by a finite state machine (FSM) and applied to operate the robot. The performance of EMG control is compared with joystick control by both objective and subjective indices. The results show that the FSM provides the user with an easy-to-perform control strategy, which successfully operates robots with complicated control commands using a limited set of muscle motions. The high accuracy and comfort of the EMG-controlled meal assistance robot make it feasible for users with upper-limb motor disabilities.
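
    A sketch of the two stages described, with an assumed adaptive-threshold rule (running mean plus a multiple of the standard deviation of short-time power) and invented states and commands:

      import numpy as np

      def onsets(emg, fs, win=0.1, k=3.0):
          """Indices where short-time EMG power crosses an adaptive threshold."""
          n = int(win * fs)
          power = np.convolve(emg ** 2, np.ones(n) / n, mode="same")
          thresh = power.mean() + k * power.std()   # adapted per calibration block
          above = power > thresh
          return np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising edges

      # FSM: (current state, detected contraction) -> next state / robot command.
      TRANSITIONS = {
          ("idle", "left_leg"):        "select_next_dish",
          ("idle", "right_leg"):       "move_spoon",
          ("move_spoon", "right_leg"): "deliver_to_mouth",
      }

      def step(state, contraction):
          return TRANSITIONS.get((state, contraction), state)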

  23. A satellite orbital testbed for SATCOM using mobile robots

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Lu, Wenjie; Wang, Zhonghai; Jia, Bin; Wang, Gang; Wang, Tao; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2016-05-01

    This paper develops and evaluates a satellite orbital testbed (SOT) for satellite communications (SATCOM). SOT can emulate a 3D satellite orbit using omni-wheeled robots and a robotic arm. The 3D motion of the satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The former are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball along a rod (the robotic arm) attached to the robot. The emulated satellite positions are fed to the measurement model, whose results are used to perform multiple space object tracking. The tracking results then drive maneuver detection and collision alerting, and the resulting satellite maneuver commands are translated into robot and robotic-arm commands. In SATCOM, the effects of jamming depend on the range and angles of the positions of the satellite transponder relative to the jamming satellite. We extend the SOT to include USRP transceivers; in the extended SOT, the relative ranges and angles are implemented using the omni-wheeled robots and robotic arms.

  24. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays and investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. HRP Gaps: This HRI research contributes to the closure of HRP gaps by providing information on how display and control characteristics (those related to guidance, feedback, and command modalities) affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines for designing effective human-robot interfaces.

  25. A graphical, rule based robotic interface system

    NASA Technical Reports Server (NTRS)

    Mckee, James W.; Wolfsberger, John

    1988-01-01

    The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time-consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high-level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the object on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule-based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of databases accessible to the program and displayable to the user.

  26. Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve

    NASA Technical Reports Server (NTRS)

    Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.

    2013-01-01

    This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.

  27. Robotics research projects report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsia, T.C.

    The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)

  28. Autonomous intelligent assembly systems LDRD 105746 final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2013-04-01

    This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires an integration of a large number of different software technologies including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although ultimately unsuccessful in achieving the target brick-stacking task autonomously, the project nevertheless developed numerous important component technologies, including: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes the project and these developed technologies.

  29. Multi-robot control interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruemmer, David J; Walton, Miles C

    Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots, with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots, with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.

  30. The contaminant analysis automation robot implementation for the automated laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-12-31

    The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical method dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its task is complete, readying them for transport operations. The Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot, and the Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a VME rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.

  31. Intelligent behavior generator for autonomous mobile robots using planning-based AI decision making and supervisory control logic

    NASA Astrophysics Data System (ADS)

    Shah, Hitesh K.; Bahl, Vikas; Martin, Jason; Flann, Nicholas S.; Moore, Kevin L.

    2002-07-01

    In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One of several outgrowths of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands that are to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (the A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include sequencing the pending tasks or initiating a re-planning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
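
    The planning function can be pictured as a textbook A* search over abstract states, emitting a script of grammar commands. The sketch below shows that structure; the state space, cost model, and command strings are placeholders, not the CSOIS grammar.

      import heapq, itertools

      def plan(start, goal, neighbors, heuristic):
          """neighbors(s) yields (command, next_state, cost); returns a command script."""
          tie = itertools.count()       # tie-breaker so states are never compared
          frontier = [(heuristic(start), 0, next(tie), start, [])]
          seen = set()
          while frontier:
              _, g, _, state, script = heapq.heappop(frontier)
              if state == goal:
                  return script         # e.g. ["GOTO bay3", "INSPECT", "REPORT"]
              if state in seen:
                  continue
              seen.add(state)
              for cmd, nxt, cost in neighbors(state):
                  heapq.heappush(frontier, (g + cost + heuristic(nxt), g + cost,
                                            next(tie), nxt, script + [cmd]))
          return None                   # no command sequence reaches the goal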

  32. Kinematic equations for resolved-rate control of an industrial robot arm

    NASA Technical Reports Server (NTRS)

    Barker, L. K.

    1983-01-01

    An operator can use kinematic, resolved-rate equations to dynamically control a robot arm by watching its response to commanded inputs. Known resolved-rate equations for the control of a particular six-degree-of-freedom industrial robot arm are derived, and the equations are then simplified for faster computation. Methods for controlling the robot arm in regions which normally cause mathematical singularities in the resolved-rate equations are discussed.

  33. A new scheme of force reflecting control

    NASA Technical Reports Server (NTRS)

    Kim, Won S.

    1992-01-01

    A new scheme of force reflecting control has been developed that incorporates position-error-based force reflection and robot compliance control. The operator is provided with kinesthetic force feedback which is proportional to the position error between the operator-commanded and the actual position of the robot arm. Robot compliance control, which increases the effective compliance of the robot, is implemented by low-pass filtering the outputs of the force/torque sensor mounted on the base of the robot hand and using these signals to alter the operator's position command. This position-error-based force reflection scheme combined with shared compliance control has been implemented successfully in the Advanced Teleoperation system, which consists of dissimilar master-slave arms. Stability measurements have demonstrated unprecedentedly high force reflection gains of up to 2 or 3, even though the slave arm is much stiffer than the operator's hand holding the force reflecting hand controller. Peg-in-hole experiments were performed with eight different operating modes to evaluate the new force-reflecting control scheme. The best task performance resulted with this new control scheme.
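
    In schematic form (the gains and filter below are generic placeholders for the scheme described):

      % Position-error-based force reflection: the operator feels
      f_{\text{op}} = K_f\,(x_{\text{cmd}} - x_{\text{slave}}),
      % while shared compliance control low-pass filters the wrist
      % force/torque sensor output and offsets the position command,
      x_{\text{cmd}}' = x_{\text{cmd}} - C\,\mathrm{LPF}(f_{\text{sensor}}),
      % which increases the robot's effective compliance.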

  34. Research on wheelchair robot control system based on EOG

    NASA Astrophysics Data System (ADS)

    Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo

    2018-04-01

    The paper describes an intelligent wheelchair control system based on EOG that can help disabled people improve their living ability. The system acquires the EOG signal from the user, detects the number of blinks and the direction of glances, and then sends commands to the wheelchair robot via RS-232 to achieve control of the wheelchair robot. The wheelchair robot control system based on EOG combines EOG signal processing with human-computer interaction technology, achieving the goal of using conscious eye movement to control the wheelchair robot.

  35. Three degree-of-freedom force feedback control for robotic mating of umbilical lines

    NASA Technical Reports Server (NTRS)

    Fullmer, R. Rees

    1988-01-01

    The use of robotic manipulators for the mating and demating of umbilical fuel lines to the Space Shuttle Vehicle prior to launch is investigated. Force feedback control is necessary to minimize the contact forces which develop during mating. The objective is to develop and demonstrate a working robotic force control system. Initial experimental force control tests with an ASEA IRB-90 industrial robot using the system's Adaptive Control capabilities indicated that control stability would be a primary problem. An investigation of the ASEA system showed a 0.280 second software delay between force input commands and the output of command voltages to the servo system. This computational delay was identified as the primary cause of the instability. Tests on a second path into the ASEA's control computer, using the MicroVax II supervisory computer, showed that the time delay would be comparable, offering no stability improvement. An alternative approach was therefore developed in which the digital control system of the robot was disconnected and an analog electronic force controller was used to control the robot's servo system directly. This method allowed the robot to use force feedback control while in rigid contact with a moving three-degree-of-freedom target, and tests indicated adequate force feedback control even under worst-case conditions. A strategy for combining this approach with the digitally controlled vision system was also developed; it requires switching between the digital controller when using vision control and the analog controller when using force control, depending on whether or not the mating plates are in contact.

  36. Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI.

    PubMed

    Chen, Xiaogang; Zhao, Bing; Wang, Yijun; Xu, Shengpu; Gao, Xiaorong

    2018-04-12

    Although robot technology has been successfully used to empower people who suffer from motor disabilities to increase their interaction with their physical environment, it remains a challenge for individuals with severe motor impairment, who do not have the motor control ability to move robots or prosthetic devices by manual control. In this study, to mitigate this issue, a noninvasive brain-computer interface (BCI)-based robotic arm control system using gaze-based steady-state visual evoked potentials (SSVEP) was designed and implemented using a portable wireless electroencephalogram (EEG) system. A 15-target SSVEP-based BCI using a filter bank canonical correlation analysis (FBCCA) method allowed users to directly control the robotic arm without system calibration. The online results from 12 healthy subjects indicated that a command for the proposed brain-controlled robot system could be selected from 15 possible choices in 4 s (i.e., 2 s for visual stimulation and 2 s for gaze shifting) with an average accuracy of 92.78%, resulting in a transfer rate of 15 commands/min. Furthermore, all subjects (even naive users) were able to successfully complete the entire move-grasp-lift task without user training. These results demonstrate that an SSVEP-based BCI can provide accurate and efficient high-level control of a robotic arm, showing the feasibility of a BCI-based robotic arm control system for hand assistance.
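
    For orientation, a compact sketch of generic FBCCA target identification (the filter bank and sub-band weights follow common practice in the FBCCA literature; the paper's exact parameters are not reproduced):

      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.cross_decomposition import CCA

      def references(freq, fs, n_samples, n_harmonics=3):
          # Sine/cosine reference signals at the stimulus frequency and harmonics.
          t = np.arange(n_samples) / fs
          return np.column_stack([f(2 * np.pi * h * freq * t)
                                  for h in range(1, n_harmonics + 1)
                                  for f in (np.sin, np.cos)])

      def fbcca(eeg, freqs, fs, bands=((8, 88), (16, 88), (24, 88))):
          """eeg: samples x channels; returns the stimulus frequency (command)."""
          scores = np.zeros(len(freqs))
          for k, (lo, hi) in enumerate(bands):
              b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
              sub = filtfilt(b, a, eeg, axis=0)
              weight = (k + 1) ** -1.25 + 0.25   # typical FBCCA sub-band weight
              for i, f in enumerate(freqs):
                  ref = references(f, fs, len(sub))
                  u, v = CCA(n_components=1).fit(sub, ref).transform(sub, ref)
                  scores[i] += weight * np.corrcoef(u[:, 0], v[:, 0])[0, 1] ** 2
          return freqs[int(np.argmax(scores))]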

  37. Human factors optimization of virtual environment attributes for a space telerobotic control station

    NASA Astrophysics Data System (ADS)

    Lane, Jason Corde

    2000-10-01

    Remote control of underwater vehicles and other robotic systems has, up until now, proved to be a challenging task for the human operator. With technology advancements in computers and displays, computer interfaces can be used to alleviate the workload on the operator. This research introduces the concept of a commanded display, which is a graphical simulation that shows the commands sent to the actual system in real-time. The primary goal of this research was to show a commanded display as an alternative to the traditional predictive display for reducing the effects of time delay. Several experiments were used to investigate how subjects compensated for time delay under a variety of conditions while controlling a 7-degree of freedom robotic manipulator. Results indicate that time delay increased completion time linearly; this linear relationship occurred even at different manipulator speeds, varying levels of error, and when using a commanded display. The commanded display alleviated the majority of time delay effects, up to 91% reduction. The commanded display also facilitated more accurate control, reducing the number of inadvertent impacts to the task worksite, even when compared to no time delay. Even with a moderate error between the commanded and actual displays, the commanded display was still a useful tool for mitigating time delay. The way subjects controlled the manipulator with the input device was tracked and their control strategies were extracted. A correlation between the subjects' use of the input device and their task completion time was determined. The importance of stereo vision and head tracking was examined and shown to improve a subject's depth perception within a virtual environment. Reports of simulator sickness induced by display equipment, including a head mounted display and LCD shutter glasses, were compared. The results of the above testing were used to develop an effective virtual environment control station to control a multi-arm robot.

  38. Using arm and hand gestures to command robots during stealth operations

    NASA Astrophysics Data System (ADS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-06-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  39. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  40. Bilateral Impedance Control For Telemanipulators

    NASA Technical Reports Server (NTRS)

    Moore, Christopher L.

    1993-01-01

    Telemanipulator system includes master robot manipulated by human operator, and slave robot performing tasks at remote location. Two robots electronically coupled so slave robot moves in response to commands from master robot. Teleoperation greatly enhanced if forces acting on slave robot fed back to operator, giving operator feeling he or she manipulates remote environment directly. Main advantage of bilateral impedance control: enables arbitrary specification of desired performance characteristics for telemanipulator system. Relationship between force and position modulated at both ends of system to suit requirements of task.

  41. Kennedy Space Center, Space Shuttle Processing, and International Space Station Program Overview

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott Alan

    2011-01-01

    Topics include: International Space Station assembly sequence; Electrical power subsystem; Thermal control subsystem; Guidance, navigation and control; Command and data handling; Robotics; Human and robotic integration; Additional modes of re-supply; NASA and International Partner control centers; Space Shuttle ground operations.

  42. Coordination of multiple robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Soloway, D.

    1987-01-01

    Kinematic resolved-rate control from one robot arm is extended to the coordinated control of multiple robot arms in the movement of an object. The structure supports the general movement of one axis system (moving reference frame) with respect to another axis system (control reference frame) by one or more robot arms. The grippers of the robot arms do not have to be parallel or at any pre-disposed positions on the object. For multiarm control, the operator chooses the same moving and control reference frames for each of the robot arms. Consequently, each arm then moves as though it were carrying out the commanded motions by itself.
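
    The coordination rule reduces to a rigid-body velocity map: each gripper must realize the commanded twist of the moving reference frame at its own grasp point. A minimal sketch (grasp offsets and numbers are illustrative):

      import numpy as np

      def gripper_velocities(v, w, grasp_offsets):
          """Linear/angular velocity each gripper needs so the object follows
          the commanded twist (v, w): v_i = v + w x r_i, w_i = w."""
          return [(np.asarray(v) + np.cross(w, r), np.asarray(w))
                  for r in grasp_offsets]

      v, w = [0.10, 0.0, 0.0], [0.0, 0.0, 0.2]    # commanded object twist
      arms = gripper_velocities(v, w, [[0.3, 0.2, 0.0], [-0.3, -0.2, 0.0]])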

  43. RACE pulls for shared control

    NASA Astrophysics Data System (ADS)

    Leahy, M. B., Jr.; Cassiday, B. K.

    1993-02-01

    Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program, the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise to assist the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. Small batch sizes, feature uncertainty, and varying work load conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.

  44. RACE pulls for shared control

    NASA Astrophysics Data System (ADS)

    Leahy, Michael B., Jr.; Cassiday, Brian K.

    1992-11-01

    Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program, the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point, an organic source of expertise to assist the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. The small batch sizes, feature uncertainty, and varying work load conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.

  6. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays, investigating how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both combined (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operator's workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.

  7. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.

  8. Kinematic control of robot with degenerate wrist

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Moore, M. C.

    1984-01-01

    Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.
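
    The kinematics behind this record are standard. As a hedged sketch (notation assumed here, not taken from the record), resolved-rate control maps a commanded hand velocity to joint-angle rates through the manipulator Jacobian, and the degenerate-wrist problem appears when that Jacobian loses rank:

    ```latex
    % Resolved-rate relation: commanded hand velocity \dot{x} mapped to
    % joint-angle rates \dot{q} through the Jacobian J(q).
    \[
      \dot{q} = J^{-1}(q)\,\dot{x},
      \qquad\text{or, with the generalized (pseudo)inverse,}\qquad
      \dot{q} = J^{\top}\!\left(J J^{\top}\right)^{-1}\dot{x}.
    \]
    % Near a degenerate wrist, J loses rank and its smallest singular value
    % approaches zero, so the computed \dot{q} grows without bound -- which is
    % why the record coordinates the first and third wrist joints instead of
    % inverting J in that region.
    ```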

  9. Architecture for Control of the K9 Rover

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Bualat, Maria; Fair, Michael; Wright, Anne; Washington, Richard

    2006-01-01

    Software featuring a multilevel architecture is used to control the hardware on the K9 Rover, which is a mobile robot used in research on robots for scientific exploration and autonomous operation in general. The software consists of five types of modules: Device Drivers - These modules, at the lowest level of the architecture, directly control motors, cameras, data buses, and other hardware devices. Resource Managers - Each of these modules controls several device drivers. Resource managers can be commanded either by a remote operator or by the pilot or conditional-executive modules described below. Behaviors and Data Processors - These modules perform computations for such functions as path planning, obstacle avoidance, visual tracking, and stereoscopy. These modules can be commanded only by the pilot. Pilot - The pilot receives a possibly complex command from the remote operator or the conditional executive, then decomposes the command into (1) more-specific commands to the resource managers and (2) requests for information from the behaviors and data processors. Conditional Executive - This highest-level module interprets a command plan sent by the remote operator, determines whether the resources required for execution of the plan are available, monitors execution, and, if necessary, selects an alternate branch of the plan.

  10. Multi-Touch Interaction for Robot Command and Control

    DTIC Science & Technology

    2010-12-01


  11. Little AI: Playing a constructivist robot

    NASA Astrophysics Data System (ADS)

    Georgeon, Olivier L.

    Little AI is a pedagogical game aimed at presenting the founding concepts of constructivist learning and developmental Artificial Intelligence. It primarily targets students in computer science and cognitive science, but it can also interest members of the general public who are curious about these topics. It requires no particular scientific background; even children can find it entertaining. Professors can use it as a pedagogical resource in class or in online courses. The player presses buttons to control a simulated "baby robot". The player cannot see the robot and its environment, and initially does not know the effects of the commands. The only information received by the player is the feedback from the player's commands. The player must learn, at the same time, the functioning of the robot's body and the structure of the environment from patterns in the stream of commands and feedback. We argue that this situation is analogous to how infants engage in early-stage developmental learning (e.g., Piaget (1937), [1]).

  12. Simulation-based intelligent robotic agent for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.

    1990-01-01

    A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.

  13. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlemann, I; Graduate School for Computing in Medicine and Life Sciences, University of Luebeck; Jauer, P

    Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematic chain and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework has to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated with respect to overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking applications, including image quality control and target tracking.
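
    The latency figures above suggest a simple client-side probe. The sketch below is a minimal Python illustration of measuring round-trip command latency over a client/server link; the TCP transport, host, port, and packet layout are illustrative assumptions, not the published framework's actual protocol.

    ```python
    import socket
    import struct
    import time

    # Hypothetical endpoint and command set; the real framework's protocol
    # is not documented in the record.
    HOST, PORT = "192.168.0.10", 30200
    COMMANDS = [b"MOVE", b"PRES", b"STOP"]   # three standard command types

    def round_trip_ms(sock: socket.socket, cmd: bytes) -> float:
        """Send one fixed-size command packet and time the server's echo."""
        packet = struct.pack("!4sI", cmd, int(time.time()))
        t0 = time.perf_counter()
        sock.sendall(packet)
        sock.recv(len(packet))               # sketch: assumes a one-recv echo
        return (time.perf_counter() - t0) * 1000.0

    with socket.create_connection((HOST, PORT)) as sock:
        for cmd in COMMANDS:
            samples = [round_trip_ms(sock, cmd) for _ in range(5000)]
            print(f"{cmd.decode()}: avg {sum(samples)/len(samples):.2f} ms, "
                  f"max {max(samples):.2f} ms")
    ```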

  14. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface

    PubMed Central

    Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712

  15. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface.

    PubMed

    Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
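
    The mapping described in these two records is easy to make concrete. The following is a minimal sketch, assuming a decoded class label arrives from an fNIRS pipeline; the class-to-command pairing and the robot interface are illustrative assumptions (the abstracts do not state which limb maps to which command).

    ```python
    # Four motor-imagery classes paired with four high-level commands.
    MI_TO_COMMAND = {
        "left_hand":  "turn_left",
        "right_hand": "turn_right",
        "left_foot":  "move_forward",   # assumed pairing; the abstracts do
        "right_foot": "move_backward",  # not state which foot maps to which
    }

    def dispatch(mi_class: str, robot) -> None:
        """Translate a decoded motor-imagery class into a robot command."""
        command = MI_TO_COMMAND.get(mi_class)
        if command is None:
            raise ValueError(f"unknown motor-imagery class: {mi_class}")
        robot.execute(command)          # hypothetical robot interface
    ```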

  16. NASA Goddard Space Flight Center Robotic Processing System Program Automation Systems, volume 2

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot-operated materials processing in space (RoMPS) are presented in view graph form. Some of the areas covered include: (1) mission requirements; (2) automation management system; (3) Space Transportation System (STS) Hitchhiker Payload; (4) Spacecraft Command Language (SCL) scripts; (5) SCL software components; (6) RoMPS EasyLab Command & Variable summary for rack stations and annealer module; (7) support electronics assembly; (8) SCL uplink packet definition; (9) SC-4 EasyLab System Memory Map; (10) Servo Axis Control Logic Suppliers; and (11) annealing oven control subsystem.

  17. Contact Control, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Sternberg, Alex

    The contact control code is a generalized force control scheme meant to interface with a robotic arm controlled through the Robot Operating System (ROS). The code allows the user to specify a control scheme for each control dimension, so that many different task controllers can be built from the same generalized controller. The input to the code includes a maximum velocity, maximum force, maximum displacement, and a control law assigned to each direction; the output is a six-degree-of-freedom velocity command that is sent to the robot controller.
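
    As a rough illustration of the scheme the record describes, the sketch below assigns each Cartesian dimension its own control law and per-axis limits and assembles a six-degree-of-freedom velocity command. The law names, gains, and limit fields are assumptions; the actual code's interface is not documented in the record.

    ```python
    import numpy as np

    def axis_velocity(law, err, force, max_vel, max_force, max_disp, gain=1.0):
        """Compute one axis of the commanded twist from its assigned law."""
        if law == "force":        # admittance-like: move to relieve force
            v = -gain * min(max(force, -max_force), max_force)
        elif law == "position":   # drive displacement error to zero
            v = gain * err if abs(err) < max_disp else 0.0
        else:                     # "free": no commanded motion on this axis
            v = 0.0
        return float(np.clip(v, -max_vel, max_vel))

    def twist_command(laws, errs, forces, limits):
        """Assemble the six-DOF velocity command sent to the robot controller."""
        return np.array([axis_velocity(law, e, f, *lim)
                         for law, e, f, lim in zip(laws, errs, forces, limits)])

    # Example: position control in x, force control along z, other axes free.
    laws = ["position", "free", "force", "free", "free", "free"]
    cmd = twist_command(laws,
                        errs=[0.01, 0, 0, 0, 0, 0],
                        forces=[0, 0, 4.0, 0, 0, 0],
                        limits=[(0.05, 10.0, 0.10)] * 6)
    print(cmd)   # six-element velocity command, clipped to per-axis limits
    ```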

  18. Knowledge representation system for assembly using robots

    NASA Technical Reports Server (NTRS)

    Jain, A.; Donath, M.

    1987-01-01

    Assembly robots combine the benefits of speed and accuracy with the capability of adapting to changes in the work environment. However, an impediment to the use of robots is the complexity of the man-machine interface. This interface can be improved by providing a means of using a priori knowledge and reasoning capabilities for controlling and monitoring the tasks performed by robots. Robots ought to be able to perform complex assembly tasks with the help of only supervisory guidance from human operators. For such supervisory guidance, it is important to express the commands in terms of the effects desired, rather than in terms of the motion the robot must undertake in order to achieve these effects. A suitable knowledge representation can facilitate the conversion of task-level descriptions into explicit instructions to the robot. Such a system would use symbolic relationships describing the a priori information about the robot, its environment, and the tasks specified by the operator to generate the commands for the robot.

  19. Controlling multiple security robots in a warehouse environment

    NASA Technical Reports Server (NTRS)

    Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.

    1994-01-01

    The Naval Command, Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.

  20. Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II

    DTIC Science & Technology

    2011-09-01


  1. Initial experiments in thrusterless locomotion control of a free-flying robot

    NASA Technical Reports Server (NTRS)

    Jasper, W. J.; Cannon, R. H., Jr.

    1990-01-01

    A two-arm free-flying robot has been constructed to study thrusterless locomotion in space. This is accomplished by pushing off or landing on a large structure in a coordinated two-arm maneuver. A new control method, called system momentum control, allows the robot to follow desired momentum trajectories and thus leap or crawl from one structure to another. The robot floats on an air-cushion, simulating in two dimensions the drag-free zero-g environment of space. The control paradigm has been verified experimentally by commanding the robot to push off a bar with both arms, rotate 180 degrees, and catch itself on another bar.

  2. Intelligent lead: a novel HRI sensor for guide robots.

    PubMed

    Cho, Keum-Bae; Lee, Beom-Hee

    2012-01-01

    This paper addresses the introduction of a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a desired distance that allows the user to move freely. Therefore, it is necessary to acquire control commands and the user's position on a real-time basis. We suggest a new sensor fusion system to achieve this objective, which we call the "intelligent lead". The objective of the intelligent lead is to acquire a stable distance from the user to the robot, a speed-control volume, and a turn-control volume, even when the robot platform carrying the intelligent lead is shaken on uneven ground. In this paper we explain a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit), whose measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the mobile robot with the intelligent lead is capable of reaching its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can be used as a new HRI sensor, combining a joystick and a distance measure, in mobile environments where the robot and the user are moving at the same time.

  3. R4SA for Controlling Robots

    NASA Technical Reports Server (NTRS)

    Aghazarian, Hrand

    2009-01-01

    The R4SA GUI mentioned in the immediately preceding article is a user-friendly interface for controlling one or more robots. This GUI makes it possible to perform meaningful real-time field experiments and research in robotics at an unmatched level of fidelity, within minutes of setup. It provides such powerful graphing modes as that of a digitizing oscilloscope that displays up to 250 variables at rates between 1 and 200 Hz. This GUI can be configured as multiple intuitive interfaces for acquisition of data, command, and control to enable rapid testing of subsystems or an entire robot system while simultaneously performing analysis of data. The R4SA software establishes an intuitive component-based design environment that can be easily reconfigured for any robotic platform by creating or editing setup configuration files. The R4SA GUI enables event-driven and conditional sequencing similar to those of Mars Exploration Rover (MER) operations. It has been certified as part of the MER ground support equipment and, therefore, is allowed to be utilized in conjunction with MER flight hardware. The R4SA GUI could also be adapted for use in embedded computing systems, other than that of the MER, for commanding and real-time analysis of data.

  4. Control Program for an Optical-Calibration Robot

    NASA Technical Reports Server (NTRS)

    Johnston, Albert

    2005-01-01

    A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.

  5. Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots

    NASA Technical Reports Server (NTRS)

    Chen, Vincent Wei-Kang

    1992-01-01

    Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low-level, tedious assembly and maintenance chores and allowing them to concentrate on higher-level tasks. Robots and astronauts can work together efficiently, as a team; but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot usage by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control. To help achieve this high level of robot sophistication, this research made several advances in the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced, wherein the operator's inputs specify only the robot's task. The robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets. The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation. The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators, and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL: a first-ever experimental model of a multi-manipulator, free-flying space robot capable of capturing and manipulating free-floating objects without requiring human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator situated at a remote location to issue high-level task-description commands to the robot, and to monitor robot activities as it then carried out each assignment autonomously.

  6. High level intelligent control of telerobotics systems

    NASA Technical Reports Server (NTRS)

    Mckee, James

    1988-01-01

    A high-level robot command language is proposed for the autonomous mode of an advanced telerobotics system, along with a predictive display mechanism for the teleoperational mode. It is believed that any such system will involve some mixture of these two modes, since, although artificial intelligence can facilitate significant autonomy, a system that can resort to teleoperation will always have the advantage. The high-level command language will allow humans to give the robot instructions in a very natural manner. The robot will then analyze these instructions to infer meaning so that it can translate the task into lower-level executable primitives. If, however, the robot is unable to perform the task autonomously, it will switch to the teleoperational mode. The time delay between control movement and actual robot movement has always been a problem in teleoperations. The remote operator may not actually see (via a monitor) the results of their actions for several seconds. A computer-generated predictive display system is proposed whereby the operator can see a real-time model of the robot's environment and the delayed video picture on the monitor at the same time.

  7. Fire Extinguisher Robot Using Ultrasonic Camera and Wi-Fi Network Controlled with Android Smartphone

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Purba, H. A.; Efendi, S.; Fahmi, F.

    2017-03-01

    Fire disasters can occur at any time and result in high losses. Often, firefighters cannot access the source of a fire because of building damage and very high temperatures, or even because of the presence of explosive materials. With such constraints and the high risk involved in handling a fire, a technological breakthrough that can help fight fires is necessary. This paper proposes the use of robots to extinguish fires, controlled from a specified distance in order to reduce the risk. A fire extinguisher robot was assembled with the intention of extinguishing fires using a water pump as the actuator. The robot's movement was controlled using an Android smartphone via a Wi-Fi network, utilizing the Wi-Fi module contained in the robot. User commands were sent to the microcontroller on the robot and then translated into robotic movement. We used an ATmega8 as the main microcontroller in the robot. The robot was equipped with a camera and ultrasonic sensors. The camera played a role in giving feedback to the user and in finding the source of fire. The ultrasonic sensors were used to avoid collisions during movement. The feedback provided by the camera on the robot was displayed on the smartphone screen. In the lab testing environment, the robot moved according to user commands such as turn right, turn left, forward, and backward. The ultrasonic sensors worked well, and the robot could be stopped at a distance of less than 15 cm. In the fire test, the robot performed the task properly and extinguished the fire.

  8. 2018 Ground Robotics Capabilities Conference and Exhibition

    DTIC Science & Technology

    2018-04-11


  9. A Practical Comparison of Motion Planning Techniques for Robotic Legs in Environments with Obstacles

    NASA Technical Reports Server (NTRS)

    Smith, Tristan B.; Chavez-Clemente, Daniel

    2009-01-01

    ATHLETE is a large six-legged tele-operated robot. Each foot is a wheel; travel can be achieved by walking, rolling, or some combination of the two. Operators control ATHLETE by selecting parameterized commands from a command dictionary. While rolling can be done efficiently, any motion involving steps is cumbersome - each step can require multiple commands and take many minutes to complete. In this paper, we consider four different algorithms that generate a sequence of commands to take a step. We consider a baseline heuristic, a randomized motion planning algorithm, and two variants of A* search. Results for a variety of terrains are presented, and we discuss the quantitative and qualitative tradeoffs between the approaches.
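
    As a concrete illustration of planning over a step-command dictionary, the sketch below runs A* over a discrete set of foot placements on a grid and returns a command sequence. The four-step command set and unit costs are simplifying assumptions; ATHLETE's real parameterized commands are far richer.

    ```python
    import heapq

    # Assumed discrete step commands: name -> (dx, dy) foot displacement.
    STEPS = {"step_N": (0, 1), "step_S": (0, -1),
             "step_E": (1, 0), "step_W": (-1, 0)}

    def plan_steps(start, goal, blocked):
        """A* over step commands; returns the command sequence or None."""
        def h(p):  # admissible Manhattan-distance heuristic
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        frontier = [(h(start), 0, start, [])]   # (f, g, position, commands)
        visited = set()
        while frontier:
            _, g, pos, cmds = heapq.heappop(frontier)
            if pos == goal:
                return cmds
            if pos in visited:
                continue
            visited.add(pos)
            for name, (dx, dy) in STEPS.items():
                nxt = (pos[0] + dx, pos[1] + dy)
                if nxt not in blocked:
                    heapq.heappush(frontier,
                                   (g + 1 + h(nxt), g + 1, nxt, cmds + [name]))
        return None   # no step sequence reaches the goal

    print(plan_steps((0, 0), (2, 1), blocked={(1, 0)}))
    ```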

  10. SLAM algorithm applied to robotics assistance for navigation in unknown environments.

    PubMed

    Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo

    2010-02-17

    The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for people with disabilities or the elderly in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or a user's preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners -concave and convex- of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start, and exit. A kinematic controller to control the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees, and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI and the real-time communication between both have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for wheelchair autonomous navigation.
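
    A minimal sketch of how the five MCI commands might drive a kinematic controller follows; the velocity values and robot interface are illustrative assumptions, not the paper's implementation.

    ```python
    # Illustrative (v, w) pairs for a differential-drive robot; the paper's
    # actual controller gains and velocities are not given in the abstract.
    MCI_COMMANDS = {
        "start":      (0.3,  0.0),   # linear m/s, angular rad/s
        "stop":       (0.0,  0.0),
        "turn_left":  (0.1,  0.5),
        "turn_right": (0.1, -0.5),
    }

    def apply_command(cmd: str, robot) -> bool:
        """Map a decoded MCI command to (v, w); return False on 'exit'."""
        if cmd == "exit":
            robot.set_velocity(0.0, 0.0)
            return False
        v, w = MCI_COMMANDS[cmd]
        robot.set_velocity(v, w)     # hypothetical mobile-robot interface
        return True
    ```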

  11. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  12. Vision-based stabilization of nonholonomic mobile robots by integrating sliding-mode control and adaptive approach

    NASA Astrophysics Data System (ADS)

    Cao, Zhengcai; Yin, Longjie; Fu, Yili

    2013-01-01

    Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most solutions to the problem do not take the robot dynamics into account in the controller design, so these controllers have difficulty achieving satisfactory control in practical applications. In addition, many of the approaches suffer from initial speed and torque jumps, which are impractical in the real world. Considering both kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, integrating adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller that generates the velocity command is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function used to reduce chattering is designed; it generates the torque command that makes the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, the neural dynamics model is integrated into the above-mentioned controllers. The stability of the proposed control system is analyzed using Lyapunov theory. Finally, the control law is simulated in the perturbed case, and the results show that the control scheme can solve the stabilization problem effectively. The proposed control law solves the speed and torque jump problems, overcomes external disturbances, and provides a new solution for vision-based stabilization of mobile robots.

  13. Robots, systems, and methods for hazard evaluation and visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.

    A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at its location, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays a map of the environment proximate the robot and a scale for indicating hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at that position relative to the scale.
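
    The sense-move-repeat loop the record describes can be sketched compactly. Everything below (the sensor calls, motion interface, and exploration policy) is a hypothetical stand-in for the patented system, not its actual implementation.

    ```python
    import random

    def survey(robot, steps=50):
        """Sense-move-repeat loop: build a map of hazard levels by location."""
        hazard_map = {}                             # position -> intensity
        for _ in range(steps):
            pos = robot.position()
            hazard_map[pos] = robot.sense_hazard()  # hazard intensity here
            # naive exploration policy: prefer an unvisited neighboring cell
            unvisited = [n for n in robot.neighbors(pos) if n not in hazard_map]
            robot.move_to(random.choice(unvisited or robot.neighbors(pos)))
        robot.communicate(hazard_map)               # send levels to console
        return hazard_map
    ```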

  14. Interactive robot control system and method of use

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)

    2012-01-01

    A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes a command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays the status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying the status and operation of the robot using the CDL. Parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within each sequence.

  15. Issues in impedance selection and input devices for multijoint powered orthotics.

    PubMed

    Lemay, M A; Hogan, N; van Dorsten, J W

    1998-03-01

    We investigated the applicability of impedance controllers to robotic orthoses for arm movements. We had tetraplegic subjects turn a crank using their paralyzed arm, propelled by a planar robot manipulandum. The robot was under impedance control, and chin motion served as the command source. Stiffness was set to 50, 100, or 200 N/m, and damping to 5 or 15 N·s/m. Results indicated that low stiffness and high viscosity provided better directional control of the tangential force exerted on the crank.
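
    The impedance law implied by the record is the usual virtual spring-damper. The sketch below renders it with the study's parameter grid; the target point, hand state, and exact implementation details are assumptions, since the abstract does not give them.

    ```python
    import numpy as np

    def impedance_force(K, B, x_target, x_hand, v_hand):
        """Endpoint force (N): F = K(x0 - x) - B v, K in N/m, B in N*s/m."""
        return K * (np.asarray(x_target) - np.asarray(x_hand)) \
               - B * np.asarray(v_hand)

    # The study's parameter grid: K in {50, 100, 200} N/m, B in {5, 15} N*s/m.
    for K in (50.0, 100.0, 200.0):
        for B in (5.0, 15.0):
            F = impedance_force(K, B,
                                x_target=[0.30, 0.10],   # assumed crank point
                                x_hand=[0.28, 0.11],
                                v_hand=[0.05, -0.02])
            print(f"K={K:>5} N/m  B={B:>4} N*s/m  ->  F = {F} N")
    ```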

  16. Control Architecture for Robotic Agent Command and Sensing

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel

    2008-01-01

    Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. This approach guarantees a solution that is good enough with respect to the resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
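
    The MODT-based consensus step can be sketched simply: each behavior scores every candidate action, and the selected action maximizes the weighted sum of scores, so all behaviors contribute cooperatively rather than competitively. The weights and scores below are illustrative assumptions, not CARACaS internals.

    ```python
    import numpy as np

    def fuse_behaviors(actions, behaviors, weights):
        """Pick the action that best satisfies all behaviors simultaneously."""
        scores = np.array([[b(a) for a in actions] for b in behaviors])
        consensus = weights @ scores      # weighted vote across behaviors
        return actions[int(np.argmax(consensus))]

    # Example: an obstacle-avoidance and a goal-seeking behavior voting over
    # three steering actions; scores and weights are made up for illustration.
    actions = ["left", "straight", "right"]
    avoid = lambda a: {"left": 0.9, "straight": 0.1, "right": 0.4}[a]
    seek = lambda a: {"left": 0.2, "straight": 0.8, "right": 0.6}[a]
    print(fuse_behaviors(actions, [avoid, seek], np.array([0.6, 0.4])))  # left
    ```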

  17. Weintek interfaces for controlling the position of a robotic arm

    NASA Astrophysics Data System (ADS)

    Barz, C.; Ilia, M.; Ilut, T.; Pop-Vadean, A.; Pop, P. P.; Dragan, F.

    2016-08-01

    The paper presents the use of Weintek panels to control the position of a robotic arm, operated step by step on the three motor axes. The PLC control interface is designed with a Weintek touch screen. The Weintek eMT3070a HMI is the user interface for commanding the PLC process. This HMI controls the local PLC, entering the coordinates on the X, Y, and Z axes. The setup also allows development in a virtual environment for e-learning and for monitoring the robotic arm's actions.

  18. Bio-robots automatic navigation with electrical reward stimulation.

    PubMed

    Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    Bio-robots controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on real-time guidance by human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, with the animals' intelligence ignored. This paper proposes a new method to realize automatic navigation for bio-robots with electrical micro-stimulation as real-time rewards. Owing to the reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the right route with rewards and to correct its direction spontaneously when rewards are withheld. In navigation experiments, rat-robots learned the control scheme in a short time. The results show that our method simplifies the controlling logic and successfully realizes automatic navigation for rat-robots. Our work might have significant implications for the further development of bio-robots with hybrid intelligence.

  19. Monitoring and Controlling an Underwater Robotic Arm

    NASA Technical Reports Server (NTRS)

    Haas, John; Todd, Brian Keith; Woodcock, Larry; Robinson, Fred M.

    2009-01-01

    The SSRMS Module 1 software is part of a system for monitoring and adaptive, closed-loop control of the motions of a robotic arm in NASA's Neutral Buoyancy Laboratory, where buoyancy in a pool of water is used to simulate the weightlessness of outer space. The software is so named because the robot arm is a replica of the Space Shuttle Remote Manipulator System (SSRMS). This software is distributed, running on remote joint processors (RJPs), each of which is mounted in a hydraulic actuator forming a joint of the robotic arm and communicates with a poolside processor denoted the Direct Control Rack (DCR). Each RJP executes the feedback joint-motion control algorithm for its joint and communicates with the DCR. The DCR receives joint-angular-velocity commands either locally from an operator or remotely from computers that simulate the flight-like SSRMS and perform coordinated motion calculations based on hand-controller inputs. The received commands are checked for validity before they are transmitted to the RJPs. The DCR software generates a display of the statuses of the RJPs for the DCR operator and can shut down the hydraulic pump when excessive joint-angle error or failure of an RJP is detected.

  20. Forming Human-Robot Teams Across Time and Space

    NASA Technical Reports Server (NTRS)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

    NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth, and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample, or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become caretakers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team, acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes, and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system is also running when humans are in direct contact with the robot, so it involves both internal fault detection and force sensing for unintended external contacts. The designs for the supervisory command mode and the redundant safety system will be described. Specific implementations were developed and test results will be reported. Experiments were conducted using terrestrial analogs for deep space missions, where time delays were artificially added to emulate the longer distances found in space.

  1. Improved CLARAty Functional-Layer/Decision-Layer Interface

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Rabideau, Gregg; Gaines, Daniel; Johnston, Mark; Chouinard, Caroline; Nesnas, Issa; Shu, I-Hsiang

    2008-01-01

    Improved interface software for communication between the CLARAty Decision and Functional layers has been developed. [The Coupled Layer Architecture for Robotics Autonomy (CLARAty) was described in Coupled-Layer Robotics Architecture for Autonomy (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: the CLARAty architecture was developed to improve the modularity of robotic software while tightening coupling between planning/execution and basic control subsystems. Whereas prior robotic software architectures typically contained three layers, CLARAty contains two layers: a decision layer (DL) and a functional layer (FL).] Types of communication supported by the present software include sending commands from DL modules to FL modules and sending data updates from FL modules to DL modules. The present software supplants prior interface software that had little error-checking capability, supported data parameters in string form only, supported commanding at only one level of the FL, and supported only limited updates of the state of the robot. The present software offers strong error checking, supports complex data structures and commanding at multiple levels of the FL, and, relative to the prior software, offers a much wider spectrum of state-update capabilities.

  2. Command and Telemetry Latency Effects on Operator Performance during International Space Station Robotics Operations

    NASA Technical Reports Server (NTRS)

    Currie, Nancy J.; Rochlis, Jennifer

    2004-01-01

    International Space Station (ISS) operations will require the on-board crew to perform numerous robotic-assisted assembly, maintenance, and inspection activities. Current estimates for some robotically performed maintenance timelines are disproportionate and potentially exceed crew availability and duty times. Ground-based control of the ISS robotic manipulators, specifically the Special Purpose Dexterous Manipulator (SPDM), is being examined as one potential solution to alleviate the excessive amounts of crew time required for extravehicular robotic maintenance and inspection tasks.

  3. A two-class self-paced BCI to control a robot in four directions.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro

    2011-01-01

    In this work, an electroencephalographic analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward and move back. In order to reduce the probability of misclassification, the BCI is to be controlled with only two mental tasks (relaxed state versus imagination of right hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The obtained results show that the proposed interface enables control over the robot, even for subjects with low BCI performance. © 2011 IEEE
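
    One plausible way (assumed here, not stated in the abstract) for a two-class, audio-cued interface to expose four commands is a scanning scheme: the interface cycles through the command list, announcing each one, and imagined right-hand movement selects the announced command while the relaxed state lets the cue pass. A minimal sketch:

    ```python
    import itertools
    import time

    COMMANDS = ["turn_right", "turn_left", "move_forward", "move_back"]

    def run(decode_mi, announce, execute, dwell_s=2.0):
        """Cycle audio cues; right-hand imagery selects, relaxation passes."""
        for cmd in itertools.cycle(COMMANDS):
            announce(cmd)                    # audio cue for current command
            time.sleep(dwell_s)              # window for the mental response
            if decode_mi() == "right_hand":  # two classes: right_hand/relaxed
                execute(cmd)
    ```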

  4. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for performing any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.

  5. Brain-controlled telepresence robot by motor-disabled people.

    PubMed

    Tonin, Luca; Carlson, Tom; Leeb, Robert; del R Millán, José

    2011-01-01

    In this paper we present the first results of users with disabilities mentally controlling a telepresence robot, a rather complex task, as the robot is continuously moving and the user must control it for a long period of time (over 6 minutes) to travel the whole path. These two users drove the telepresence robot from their clinic more than 100 km away. Remarkably, although the patients had never visited the location where the telepresence robot was operating, they achieved performance similar to that of a group of four healthy users who were familiar with the environment. In particular, the experimental results reported in this paper demonstrate the benefits of shared control for brain-controlled telepresence robots. It allows all subjects (including novel BMI subjects, such as our users with disabilities) to complete a complex task in a similar time and with a similar number of commands to those required by manual control.

  6. Workspace Safe Operation of a Force- or Impedance-Controlled Robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Strawser, Philip A. (Inventor); Yamokoski, John D. (Inventor)

    2013-01-01

    A method of controlling a robotic manipulator of a force- or impedance-controlled robot within an unstructured workspace includes imposing a saturation limit on a static force applied by the manipulator to its surrounding environment, and may include determining a contact force between the manipulator and an object in the unstructured workspace, and executing a dynamic reflex when the contact force exceeds a threshold to thereby alleviate an inertial impulse not addressed by the saturation limited static force. The method may include calculating a required reflex torque to be imparted by a joint actuator to a robotic joint. A robotic system includes a robotic manipulator having an unstructured workspace and a controller that is electrically connected to the manipulator, and which controls the manipulator using force- or impedance-based commands. The controller, which is also disclosed herein, automatically imposes the saturation limit and may execute the dynamic reflex noted above.
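
    The two mechanisms the record names, force saturation and a contact-triggered dynamic reflex, can be sketched as follows. The gains, thresholds, and reflex law (backing off along the contact direction) are illustrative assumptions, not the patented method.

    ```python
    import numpy as np

    F_SAT = 40.0      # N, assumed saturation limit on commanded static force
    F_REFLEX = 60.0   # N, assumed contact-force threshold for the reflex

    def command_force(f_desired: np.ndarray) -> np.ndarray:
        """Clamp the commanded static force to the saturation limit."""
        norm = np.linalg.norm(f_desired)
        return f_desired if norm <= F_SAT else f_desired * (F_SAT / norm)

    def reflex_torque(jacobian: np.ndarray, f_contact: np.ndarray,
                      gain: float = 0.5) -> np.ndarray:
        """Joint torques backing away from contact above the threshold.

        jacobian: 6 x n manipulator Jacobian; f_contact: 6-element wrench.
        """
        if np.linalg.norm(f_contact) <= F_REFLEX:
            return np.zeros(jacobian.shape[1])
        return -gain * jacobian.T @ f_contact   # map Cartesian force to joints
    ```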

  7. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    PubMed Central

    2010-01-01

    Background The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for people with disabilities or the elderly in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or a user's preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners -concave and convex- of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start, and exit. A kinematic controller to control the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees, and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with an MCI and the real-time communication between both have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for wheelchair autonomous navigation. PMID:20163735

  8. Naval Sea Systems Command > Home

    Science.gov Websites

  9. Off-line programming motion and process commands for robotic welding of Space Shuttle main engines

    NASA Technical Reports Server (NTRS)

    Ruokangas, C. C.; Guthmiller, W. A.; Pierson, B. L.; Sliwinski, K. E.; Lee, J. M. F.

    1987-01-01

    The off-line-programming software and hardware being developed for robotic welding of the Space Shuttle main engine are described and illustrated with diagrams, drawings, graphs, and photographs. The menu-driven workstation-based interactive programming system is designed to permit generation of both motion and process commands for the robotic workcell by weld engineers (with only limited knowledge of programming or CAD systems) on the production floor. Consideration is given to the user interface, geometric-sources interfaces, overall menu structure, weld-parameter data base, and displays of run time and archived data. Ongoing efforts to address limitations related to automatic-downhand-configuration coordinated motion, a lack of source codes for the motion-control software, CAD data incompatibility, interfacing with the robotic workcell, and definition of the welding data base are discussed.

  10. Neural-Network Control Of Prosthetic And Robotic Hands

    NASA Technical Reports Server (NTRS)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  11. Development and Command-Control Tools for Many-Robot Systems

    DTIC Science & Technology

    2005-01-01

    been components such as pressure sensors and accelerometers for the automobile market. In fact, robots of any size have yet to appear in our daily... mode, so that the target hardware is neither reprogrammable nor rechargeable. The goal of this paper is to propose some generic tools that the

  12. A novel Morse code-inspired method for multiclass motor imagery brain-computer interface (BCI) design.

    PubMed

    Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen

    2015-11-01

    Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains limited, which restricts the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The code of each sMI task was detected from EEG signals and mapped to a specific command. According to permutation theory, an sMI task of length N allows 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment, and the averaged accuracy of the six-class sMI tasks was 89.4%. Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
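    The command count is easy to verify by enumeration; a short Python sketch that lists all left/right sMI sequences of length 1 to N (a sequence ends when the user rests, so every prefix length is a distinct code) and checks the 2 × (2^N − 1) formula:

        from itertools import product

        def smi_codes(max_len):
            """All left/right imagery sequences of length 1..max_len."""
            codes = []
            for n in range(1, max_len + 1):
                codes.extend(product("LR", repeat=n))
            return codes

        for N in (1, 2, 3):
            codes = smi_codes(N)
            assert len(codes) == 2 * (2**N - 1)
            print(N, len(codes))  # 1 -> 2, 2 -> 6, 3 -> 14

    Note that N = 2 yields six codes, matching the six-class system reported in the abstract.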

  13. A Human Machine Interface for EVA

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity, and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels, and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6-degree-of-freedom (dof) pointing device, and a heads-up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands, and generate text. The HMI can also respond with speech in order to confirm selections, provide status and feedback, and present text output. A candidate 6-dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3-D geometric information, including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field, including robot joint torques, end effector configuration, procedure checklists, and virtual control panels. With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA, including robot teleoperation, procedure checklist usage, operation of virtual control panels, and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command. Virtual control panels for suit equipment, equipment being repaired, or arbitrary equipment on the space station can be displayed on the HUD and can be operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques. Considering the risks, expense, and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. Reducing their dependence on such personnel may, however, improve performance and reduce risk under many circumstances. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.

  14. Controlling Herds of Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2006-01-01

    A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.
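    A minimal sketch of the potential-fields idea proposed for herd guidance, with assumed gains and influence ranges (attraction toward a goal plus repulsion from nearby obstacles):

        import numpy as np

        def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                                 influence=2.0, step=0.1):
            """One gradient-descent step on an attractive + repulsive potential."""
            force = k_att * (goal - pos)              # attractive term
            for obs in obstacles:
                d = np.linalg.norm(pos - obs)
                if 1e-6 < d < influence:
                    # Repulsion grows sharply as the robot nears the obstacle.
                    force += k_rep * (1.0/d - 1.0/influence) * (pos - obs) / d**3
            return pos + step * force

        pos = np.array([0.0, 0.0])
        goal = np.array([5.0, 5.0])
        obstacles = [np.array([2.5, 2.5])]
        for _ in range(5):
            pos = potential_field_step(pos, goal, obstacles)
        print(pos)  # drifts toward the goal while skirting the obstacle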

  15. Designing minimal space telerobotics systems for maximum performance

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Long, Mark K.; Steele, Robert D.

    1992-01-01

    The design of the remote site of a local-remote telerobot control system is described. The design addresses the constraint of limited computational power available at the remote-site control system while providing a large range of control capabilities. The Modular Telerobot Task Execution System (MOTES) provides supervised autonomous control, shared control, and teleoperation for a redundant manipulator. The system is capable of nominal task execution as well as monitoring and reflex motion. MOTES is kept minimal while providing a large capability by limiting its functionality to only that which is necessary at the remote site and by utilizing a unified multi-sensor-based impedance control scheme. A command interpreter similar to one used on robotic spacecraft is used to interpret commands received from the local site. The system is written in Ada, runs in a VME environment on 68020 processors, and initially controls a Robotics Research K1207 7-degree-of-freedom manipulator.

  16. Automation Improvements for Synchrotron Based Small Angle Scattering Using an Inexpensive Robotics Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, John P.

    This paper reports on the progress toward creating semi-autonomous motion control platforms for beamline applications using the iRobot Create® platform. The goal is to create beamline research instrumentation where the motion paths are based on the local environment rather than position commanded from a control system, have low integration costs, and also be scalable and easily maintainable.

  17. Envisioning Cognitive Robots for Future Space Exploration

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Stoica, Adrian

    2010-01-01

    Cognitive robots in the context of space exploration are envisioned with advanced capabilities of model building, continuous planning/re-planning, self-diagnosis, as well as the ability to exhibit a level of 'understanding' of new situations. An overview of some JPL components (e.g. CASPER, CAMPOUT) and a description of the architecture CARACaS (Control Architecture for Robotic Agent Command and Sensing), which combines these components in the context of a cognitive robotic system operating in various scenarios, are presented. Finally, two examples of typical scenarios, a multi-robot construction mission and a human-robot mission involving direct collaboration with humans, are given.

  18. Autonomous stair-climbing with miniature jumping robots.

    PubMed

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  19. The Canonical Robot Command Language (CRCL).

    PubMed

    Proctor, Frederick M; Balakirsky, Stephen B; Kootbally, Zeid; Kramer, Thomas R; Schlenoff, Craig I; Shackleford, William P

    2016-01-01

    Industrial robots can perform motion with sub-millimeter repeatability when programmed using the teach-and-playback method. While effective, this method requires significant up-front time, tying up the robot and a person during the teaching phase. Off-line programming can be used to generate robot programs, but the accuracy of this method is poor unless supplemented with good calibration to remove systematic errors, feed-forward models to anticipate robot response to loads, and sensing to compensate for unmodeled errors. These increase the complexity and up-front cost of the system, but the payback in the reduction of recurring teach programming time can be worth the effort. This payback especially benefits small-batch, short-turnaround applications typical of small-to-medium enterprises, who need the agility afforded by off-line application development to be competitive against low-cost manual labor. To fully benefit from this agile application tasking model, a common representation of tasks should be used that is understood by all of the resources required for the job: robots, tooling, sensors, and people. This paper describes an information model, the Canonical Robot Command Language (CRCL), which provides a high-level description of robot tasks and associated control and status information.
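    As a concrete illustration of the idea of a common, robot-neutral task representation (not the normative CRCL schema; the element names below are simplified stand-ins), a short Python sketch that serializes two motion commands into an XML task description:

        import xml.etree.ElementTree as ET

        def move_to_command(cmd_id, x, y, z):
            """Build one illustrative CRCL-style motion command."""
            cmd = ET.Element("MoveTo")
            ET.SubElement(cmd, "CommandID").text = str(cmd_id)
            pose = ET.SubElement(cmd, "EndPosition")
            for name, val in (("X", x), ("Y", y), ("Z", z)):
                ET.SubElement(pose, name).text = f"{val:.3f}"
            return cmd

        program = ET.Element("CRCLProgram")
        program.append(move_to_command(1, 0.500, 0.200, 0.300))  # approach
        program.append(move_to_command(2, 0.500, 0.200, 0.100))  # descend
        print(ET.tostring(program, encoding="unicode"))

    Any resource that can parse such a description, whether a robot controller, a simulator, or a tooling station, can participate in the same task without robot-specific reprogramming, which is the agility argument made above.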

  1. Paralyzed subject controls telepresence mobile robot using novel sEMG brain-computer interface: case study.

    PubMed

    Lyons, Kenneth R; Joshi, Sanjay S

    2013-06-01

    Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A WiFi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI as well as completion of the telerobotic command task are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
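    A sketch of the two-band power feature underlying this style of BCI, with assumed band edges, sampling rate, and normalization (the actual bands and mapping are the authors' design choices):

        import numpy as np

        FS = 1000  # Hz, assumed sampling rate

        def band_power(signal, lo, hi, fs=FS):
            """Total spectral power of the window between lo and hi Hz."""
            spectrum = np.abs(np.fft.rfft(signal))**2
            freqs = np.fft.rfftfreq(len(signal), d=1.0/fs)
            mask = (freqs >= lo) & (freqs < hi)
            return spectrum[mask].sum()

        def cursor_velocity(window):
            """Map low-band and high-band sEMG power to (vx, vy)."""
            p_low = band_power(window, 30, 80)
            p_high = band_power(window, 80, 150)
            total = p_low + p_high + 1e-12
            return p_low/total, p_high/total  # normalized velocity components

        window = np.random.randn(FS)  # stand-in for 1 s of sEMG data
        print(cursor_velocity(window))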

  2. Should We Turn the Robots Loose?

    DTIC Science & Technology

    2010-05-02

    interference. Potential sources of electromagnetic interference include everyday signals such as cell phones and WiFi, intentional friendly jamming of IED...might even attempt to hack or hijack our robotic warriors. Our current enemies have proven to be very adaptable and have developed simple counters to our...demonstrates the ease with which robot command and control might be hacked. It is reasonable to suspect that a future threat with a more robust

  3. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
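    A minimal sketch of an off-line learning update of this kind, applied to a toy first-order plant over repeated cycles; the gains are assumptions, and, as the abstract notes, convergence conditions depend on the plant:

        import numpy as np

        def ilc_update(u, e, dt, k_p=0.5, k_d=0.1):
            """Next-cycle command from this cycle's error and error rate."""
            e_rate = np.gradient(e, dt)
            return u + k_p * e + k_d * e_rate

        # Toy first-order plant (unit DC gain, 0.1 s time constant),
        # simulated by convolution with its impulse response.
        dt, t = 0.01, np.arange(0, 1, 0.01)
        ref = np.sin(2*np.pi*t)          # repeated desired trajectory
        u = np.zeros_like(t)
        for cycle in range(20):
            y = np.convolve(u, np.exp(-t/0.1), mode="full")[:len(t)] * dt / 0.1
            e = ref - y
            u = ilc_update(u, e, dt)
        print(f"final RMS error: {np.sqrt(np.mean(e**2)):.4f}")

    Dropping the error-rate term (k_d = 0) slows the error reduction, consistent with the simulation findings reported above.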

  4. Computer coordination of limb motion for a three-legged walking robot

    NASA Technical Reports Server (NTRS)

    Klein, C. A.; Patterson, M. R.

    1980-01-01

    Coordination of the limb motion of a vehicle which could perform assembly and maintenance operations on large structures in space is described. Manipulator kinematics and walking robots are described. The basic control scheme of the robot is described. The control of the individual arms is described. Arm velocities are generally described in Cartesian coordinates. Cartesian velocities are converted to joint velocities using the Jacobian matrix. The calculation of a trajectory for an arm, given a sequence of points through which it is to pass, is described. The free gait algorithm which controls the lifting and placing of legs for the robot is described. The generation of commanded velocities for the robot, and the implementation of those velocities by the algorithm, are discussed. Suggestions for further work in the area of robot legged locomotion are presented.
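    A minimal sketch of the Cartesian-to-joint velocity conversion described above, using a two-link planar arm as a stand-in (link lengths, joint angles, and the commanded rate are assumptions):

        import numpy as np

        def jacobian_2link(q, l1=1.0, l2=0.8):
            """Jacobian of a 2-link planar arm at joint angles q (rad)."""
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-l1*s1 - l2*s12, -l2*s12],
                             [ l1*c1 + l2*c12,  l2*c12]])

        q = np.array([0.3, 0.6])        # joint angles (rad)
        xdot = np.array([0.1, 0.0])     # commanded Cartesian velocity (m/s)
        qdot = np.linalg.solve(jacobian_2link(q), xdot)
        print(qdot)                     # joint velocities (rad/s)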

  5. ATHLETE's Feet: Multi-Resolution Planning for a Hexapod Robot

    NASA Technical Reports Server (NTRS)

    Smith, Tristan B.; Barreiro, Javier; Smith, David E.; SunSpiral, Vytas; Chavez-Clemente, Daniel

    2008-01-01

    ATHLETE is a large six-legged tele-operated robot. Each foot is a wheel; travel can be achieved by walking, rolling, or some combination of the two. Operators control ATHLETE by selecting parameterized commands from a command dictionary. While rolling can be done efficiently with a single command, any motion involving steps is cumbersome - walking a few meters through difficult terrain can take hours. Our goal is to improve operator efficiency by automatically generating sequences of motion commands. There is increasing uncertainty regarding ATHLETE's actual configuration over time and decreasing quality of terrain data farther away from the current position. This, combined with the complexity that results from 36 degrees of kinematic freedom, led to an architecture that interleaves planning and execution at multiple levels, ranging from traditional configuration-space motion planning algorithms for immediate moves to higher-level task and path planning algorithms for overall travel. The modularity of the architecture also simplifies the development process and allows the operator to interact with and control the system at varying levels of autonomy depending on terrain and need.

  6. A Generalized-Compliant-Motion Primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1993-01-01

    Computer program bridges gap between planning and execution of compliant robotic motions developed and installed in control system of telerobot. Called "generalized-compliant-motion primitive," one of several task-execution-primitive computer programs, which receives commands from higher-level task-planning programs and executes commands by generating required trajectories and applying appropriate control laws. Program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring. Written in C language.

  7. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    NASA Astrophysics Data System (ADS)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.

  8. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    PubMed

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
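    A minimal sketch of the kind of state machine described above, with assumed state names and class labels (the paper's actual mapping links four imagery tasks to vision-guided arm actions):

        # Hypothetical imagery-class -> action transitions for the pawn/cup game.
        ACTIONS = {
            ("idle", "select_pawn"): "move_to_pawn",
            ("holding", "select_cup"): "move_to_cup",
        }

        def step(state, imagery_class):
            """Advance the state machine on one decoded imagery class."""
            action = ACTIONS.get((state, imagery_class))
            if action == "move_to_pawn":
                return "holding", "grasp pawn located by machine vision"
            if action == "move_to_cup":
                return "idle", "release pawn into selected cup"
            return state, "no-op"

        state = "idle"
        for decoded in ["select_pawn", "select_cup"]:
            state, command = step(state, decoded)
            print(state, "->", command)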

  9. A motion sensing-based framework for robotic manipulation.

    PubMed

    Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing

    2016-01-01

    To date, outside of controlled environments, robots normally perform manipulation tasks by operating alongside humans. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction in a novel and natural interface using gestures, has crucially inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the action of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments were conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.

  10. Cooperative Three-Robot System for Traversing Steep Slopes

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terrance; Aghazarian, Hrand; Younse, Paulo; Garrett, Michael

    2009-01-01

    Teamed Robots for Exploration and Science in Steep Areas (TRESSA) is a system of three autonomous mobile robots that cooperate with each other to enable scientific exploration of steep terrain (slope angles up to 90°). Originally intended for use in exploring steep slopes on Mars that are not accessible to lone wheeled robots (Mars Exploration Rovers), TRESSA and systems like TRESSA could also be used on Earth for performing rescues on steep slopes and for exploring steep slopes that are too remote or too dangerous to be explored by humans. TRESSA is modeled on safe human climbing of steep slopes, two key features of which are teamwork and safety tethers. Two of the autonomous robots, denoted Anchorbots, remain at the top of a slope; the third robot, denoted the Cliffbot, traverses the slope. The Cliffbot drives over the cliff edge supported by tethers, which are paid out from the Anchorbots (see figure). The Anchorbots autonomously control the tension in the tethers to counter the gravitational force on the Cliffbot. The tethers are paid out and reeled in as needed, keeping the body of the Cliffbot oriented approximately parallel to the local terrain surface and preventing wheel slip by controlling the speed of descent or ascent, thereby enabling the Cliffbot to drive freely up, down, or across the slope. Due to the interactive nature of the three-robot system, the robots must be very tightly coupled. To provide for this tight coupling, the TRESSA software architecture is built on a combination of (1) the multi-robot layered behavior-coordination architecture reported in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 10 (October 2004), page 65, and (2) the real-time control architecture reported in "Robot Electronics Architecture" (NPO-41784), NASA Tech Briefs, Vol. 32, No. 1 (January 2008), page 28. The combination architecture makes it possible to keep the three robots synchronized and coordinated, to use data from all three robots for decision-making at each step, and to control the physical connections among the robots. In addition, TRESSA (as in prior systems that have utilized this architecture) incorporates a capability for deterministic response to unanticipated situations from yet another architecture, reported in "Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40. Tether tension control is a major consideration in the design and operation of TRESSA. Tension is measured by force sensors connected to each tether at the Cliffbot. The direction of the tension (both azimuth and elevation) is also measured. The tension controller combines a controller to counter gravitational force and an optional velocity controller that anticipates the motion of the Cliffbot. The gravity controller estimates the slope angle from the inclination of the tethers. This angle and the weight of the Cliffbot determine the total tension needed to counteract the weight of the Cliffbot. The total needed tension is broken into components for each Anchorbot. The difference between this needed tension and the tension measured at the Cliffbot constitutes an error signal that is provided to the gravity controller. The velocity controller computes the tether speed needed to produce the desired motion of the Cliffbot. Another major consideration in the design and operation of TRESSA is detection of faults. Each robot in the TRESSA system monitors its own performance and the performance of its teammates in order to detect any system faults and prevent unsafe conditions. At startup, communication links are tested, and if any robot is not communicating, the system refuses to execute any motion commands. Prior to motion, the Anchorbots attempt to set tensions in the tethers at optimal levels for counteracting the weight of the Cliffbot; if either Anchorbot fails to reach its optimal tension level within a specified time, it sends a message to the other robots and the commanded motion is not executed. If any mechanical error (e.g., stalling of a motor) is detected, the affected robot sends a message triggering stoppage of the current motion. Lastly, messages are passed among the robots at each time step (10 Hz) to share sensor information during operations. If messages from any robot cease for more than an allowable time interval, the other robots detect the communication loss and initiate stoppage.
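    For intuition, a back-of-the-envelope version of the gravity controller's tension computation under simplifying assumptions (massless tethers, a planar slope, balanced lateral components, and assumed mass and tether azimuths; not the flight code):

        import math

        def required_tensions(mass_kg, slope_deg, azimuth1_deg, azimuth2_deg):
            """Split the tension countering gravity between two Anchorbots."""
            g = 9.81
            total = mass_kg * g * math.sin(math.radians(slope_deg))
            a1 = math.radians(azimuth1_deg)  # tether angles from the fall line
            a2 = math.radians(azimuth2_deg)
            # Solve T1*cos(a1) + T2*cos(a2) = total with equal lateral
            # components: T1*sin(a1) = T2*sin(a2).
            t2 = total / (math.cos(a2) + math.cos(a1)*math.sin(a2)/math.sin(a1))
            t1 = t2 * math.sin(a2) / math.sin(a1)
            return t1, t2

        print(required_tensions(mass_kg=40.0, slope_deg=70.0,
                                azimuth1_deg=20.0, azimuth2_deg=25.0))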

  11. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

    Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application of this is the pushing operation, in which, through multiple sensors, the robot moves an object by pushing it. Generally, a pushing operation consists of "approaching, touching, and pushing" (1). However, most research in this field deals with how the pushed object follows a predefined trajectory, and the implications of the robot body or tool-tip hitting the object are neglected. Obviously, on collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance controller is a control system with trajectory compensation that allows external forces to be followed. In this paper, the first compliance controller is driven by the estimated external force based on a reaction torque observer (2), which compensates for contact sensation. The other compensates for non-contact sensation. A contact sensation, acquired from a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before touching the object, realized with a visual sensor in this paper. Here, instead of using visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Thus, having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the object side to show the effectiveness of the proposed method. Here, both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Finally, the experimental results show the validity of the proposed method.

  12. AERCam Autonomy: Intelligent Software Architecture for Robotic Free Flying Nanosatellite Inspection Vehicles

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.

    2006-01-01

    The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high-resolution still-image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an air-bearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or the International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supply views of EVA operations to IVA and/or ground crews monitoring the EVA, and carry out independent visual inspections of areas of interest around the spacecraft. To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and the control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.

  13. Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot

    DTIC Science & Technology

    2015-04-01

    A novel, multi-segmented... high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the... to be demonstrated. The Unmanned Systems Group at SPAWAR Systems Center Pacific has developed a multi-segment magnetic crawler robot (MSMR

  14. Object-based task-level control: A hierarchical control architecture for remote operation of space robots

    NASA Technical Reports Server (NTRS)

    Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.

    1994-01-01

    Expanding man's presence in space requires capable, dexterous robots that can be controlled from the Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control where the human operator is able to specify high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.

  15. Selfie in Cupola module

    NASA Image and Video Library

    2015-05-24

    ISS043E241729 (05/24/2015) --- Expedition 43 commander and NASA astronaut Terry Virts is seen here inside of the station’s Cupola module. The Cupola is designed for the observation of operations outside the ISS such as robotic activities, the approach of vehicles, and spacewalks. It also provides spectacular views of Earth and celestial objects for use in astronaut observation experiments. It houses the robotic workstation that controls the space station’s robotic arm and can accommodate two crewmembers simultaneously.

  16. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    NASA Astrophysics Data System (ADS)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneous recording of 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application implemented under the Processing integrated development environment. The application acquires the data from the headset and issues suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals to a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.
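    A hedged sketch of the PC-side link described above: after the classifier produces a command, the application writes a one-byte command code to the Arduino over a serial port, which then positions the servo. The port name and byte values are assumptions, and the pyserial package is required:

        import serial  # pip install pyserial

        # Hypothetical one-byte command codes; the board's firmware would
        # map these to servo positions.
        COMMANDS = {"open_claw": b"O", "close_claw": b"C"}

        def send_command(port, command):
            """Write a command byte and read an optional acknowledgement."""
            with serial.Serial(port, 9600, timeout=1) as link:
                link.write(COMMANDS[command])
                return link.readline()

        # Example (requires real hardware attached):
        # print(send_command("/dev/ttyACM0", "close_claw"))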

  17. Apparatus and method for modifying the operation of a robotic vehicle in a real environment, to emulate the operation of the robotic vehicle operating in a mixed reality environment

    DOEpatents

    Garretson, Justin R [Albuquerque, NM; Parker, Eric P [Albuquerque, NM; Gladwell, T Scott [Albuquerque, NM; Rigdon, J Brian [Edgewood, NM; Oppel, III, Fred J.

    2012-05-29

    Apparatus and methods for modifying the operation of a robotic vehicle in a real environment to emulate the operation of the robotic vehicle in a mixed reality environment include a vehicle sensing system having a communications module attached to the robotic vehicle. The module communicates operating parameters related to the robotic vehicle in a real environment to a simulation controller, which simulates the operation of the robotic vehicle in a mixed (live, virtual, and constructive) environment wherein the effects of virtual and constructive entities on the operation of the robotic vehicle (and vice versa) are simulated. These effects are communicated to the vehicle sensing system, which generates a modified control command for the robotic vehicle including the effects of virtual and constructive entities, causing the robot in the real environment to behave as if virtual and constructive entities existed in the real environment.

  18. An extension of command shaping methods for controlling residual vibration using frequency sampling

    NASA Technical Reports Server (NTRS)

    Singer, Neil C.; Seering, Warren P.

    1992-01-01

    The authors present an extension to the impulse shaping technique for commanding machines to move with reduced residual vibration. The extension, called frequency sampling, is a method for generating constraints that are used to obtain shaping sequences which minimize residual vibration in systems, such as robots, whose resonant frequencies change during motion. The authors present a review of impulse shaping methods, a development of the proposed extension, and a comparison of results of tests conducted on a simple model of the space shuttle robot arm. Frequency sampling provides a method for minimizing the impulse sequence duration required to give the desired insensitivity.
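    For reference, the standard two-impulse zero-vibration (ZV) shaper that such impulse shaping methods build on, computed here for an assumed 1 Hz mode with 2% damping:

        import math

        def zv_shaper(freq_hz, zeta):
            """Two-impulse ZV shaper: (time, amplitude) pairs that cancel
            residual vibration of a single mode at freq_hz with damping zeta."""
            wd = 2*math.pi*freq_hz*math.sqrt(1 - zeta**2)   # damped frequency
            k = math.exp(-zeta*math.pi/math.sqrt(1 - zeta**2))
            a1, a2 = 1/(1 + k), k/(1 + k)                   # amplitudes sum to 1
            return [(0.0, a1), (math.pi/wd, a2)]            # second impulse at
                                                            # half the damped period
        print(zv_shaper(1.0, 0.02))

    A command convolved with this impulse sequence excites the mode twice in anti-phase, so the residual vibration cancels; the frequency-sampling extension addresses the case where the mode frequency itself changes during the motion.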

  19. Experiments in thrusterless robot locomotion control for space applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Jasper, Warren Joseph

    1990-01-01

    While performing complex assembly tasks or moving about in space, a space robot should minimize the amount of propellant consumed. A study is presented of space robot locomotion and orientation without the use of thrusters. The goal was to design a robot control paradigm that will perform thrusterless locomotion between two points on a structure, and to implement this paradigm on an experimental robot. A two-arm free-flying robot was constructed which floats on a cushion of air to simulate, in 2-D, the drag-free, zero-g environment of space. The robot can impart momentum to itself by pushing off from an external structure in a coordinated two-arm maneuver, and can then reorient itself by activating a momentum wheel. The controller design consists of two parts: a high-level strategic controller and a low-level dynamic controller. The control paradigm was verified experimentally by commanding the robot to push off from a structure with both arms, rotate 180 degrees while translating freely, and then to catch itself on another structure. This method, based on the computed torque, provides a linear feedback law in momentum and its derivatives for a system of rigid bodies.

  20. Teleoperated position control of a PUMA robot

    NASA Technical Reports Server (NTRS)

    Austin, Edmund; Fong, Chung P.

    1987-01-01

    A laboratory distributed computer control teleoperator system is developed to support NASA's future space telerobotic operation. This teleoperator system uses a universal force-reflecting hand controller at the local site as the operator's input device. At the remote site, a PUMA controller receives the Cartesian position commands and implements PID control laws to position the PUMA robot. The local site uses two microprocessors while the remote site uses three. The processors communicate with each other through shared memory. The PUMA robot controller was interfaced through custom-made electronics to bypass VAL. The development status of this teleoperator system is reported. The execution time of each processor is analyzed, and the overall system throughput rate is reported. Methods to improve the efficiency and performance are discussed.
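    A minimal per-axis PID position loop of the kind the remote-site controller implements; the gains, time step, and toy integrator plant below are assumptions for illustration:

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_error = 0.0, 0.0

            def update(self, setpoint, measured):
                """One control step: return the actuation command."""
                error = setpoint - measured
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp*error + self.ki*self.integral + self.kd*derivative

        pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
        position = 0.0
        for _ in range(300):                  # track a 0.2 m step command
            position += pid.update(0.2, position) * 0.01  # toy integrator plant
        print(round(position, 3))             # settles near 0.2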

  1. Planning and Teaching Compliant Motion Strategies.

    DTIC Science & Technology

    1987-01-01

    commanded motion. The black polyhedron shown in the figure contains a set of commanded positions. The robot is to aim for any point in the polyhedron. The...between the T-shape and the hole face will cause it to stop there. The black polyhedron is behind and more narrow than the stopping region to account for...motion. If the robot aims for any commanded position in the black polyhedron shown in the figure, then the robot will enter the second hole, slide along

  2. Robotics development for the enhancement of space endeavors

    NASA Astrophysics Data System (ADS)

    Mauceri, A. J.; Clarke, Margaret M.

    Telerobotics and robotics development activities to support NASA's goal of increasing opportunities in space commercialization and exploration are described. Rockwell International's activities center on using robotics to improve efficiency and safety in three related areas: remote control of autonomous systems, automated nondestructive evaluation of aspects of vehicle integrity, and the use of robotics in space vehicle ground reprocessing operations. In the first area, autonomous robotic control, Rockwell is using the NASREM control architecture as the foundation for high-level command of robotic tasks. In the second area, we have demonstrated the use of nondestructive evaluation (using acoustic excitation and laser sensors) to evaluate the integrity of space vehicle surface material bonds, using Orbiter 102 as the test case. In the third area, Rockwell is building an automated version of the present manual tool used for Space Shuttle surface tile re-waterproofing. The tool will be integrated into an orbiter processing robot being developed by a KSC-led team.

  3. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko

    2011-01-01

    Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. In the proposed studies, participants' performance during teleoperation of a robot arm will be compared when they are provided with command-guidance symbology (that is, directing the operator what commands to make) or situation-guidance symbology (that is, providing natural cues so that the operator can infer what commands to make). The second factor for AR symbology is the effect of overlays that are either superimposed on or integrated into the external view of the world. A study is proposed in which the effects of superimposed and integrated overlays on operator task performance during teleoperated driving tasks are compared.

  4. Kinematic rate control of simulated robot hand at or near wrist singularity

    NASA Technical Reports Server (NTRS)

    Barker, K.; Houck, J. A.; Carzoo, S. W.

    1985-01-01

    A robot hand should obey movement commands from an operator or a computer program as closely as possible. However, when two of the three rotational axes of the robot wrist are colinear, the wrist loses a degree of freedom, and the usual resolved-rate equations (used to move the hand in response to an operator's inputs) are indeterminate. Furthermore, rate limiting occurs in close vicinity to this singularity. An analysis shows that rate limiting occurs not only in the vicinity of this singularity but also substantially away from it, even when the operator commands rotational rates of the robot hand that are only a small percentage of the operational joint rate limits. Therefore, joint angle rates are scaled when they exceed operational limits in a real-time simulation of a robot arm. Simulation results show that a small dead band avoids the wrist singularity in the resolved-rate equations but can introduce a high-frequency oscillation close to the singularity. However, when a coordinated wrist movement is used in conjunction with the resolved-rate equations, the high-frequency oscillation disappears.
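    A minimal sketch of the joint-rate scaling described above: when any commanded rate exceeds its operational limit, all rates are scaled uniformly so the direction of motion is preserved (the limits and commanded rates are assumptions):

        import numpy as np

        def scale_joint_rates(qdot, limits):
            """qdot: commanded joint rates; limits: per-joint rate limits.
            Scale the whole vector by the worst violation ratio, if any."""
            ratios = np.abs(qdot) / limits
            worst = ratios.max()
            return qdot / worst if worst > 1.0 else qdot

        qdot = np.array([0.4, 2.5, -1.0])       # rad/s, near-singular demand
        limits = np.array([1.0, 1.0, 1.0])      # rad/s operational limits
        print(scale_joint_rates(qdot, limits))  # uniformly scaled to limits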

  5. Tutorial Workshop on Robotics and Robot Control.

    DTIC Science & Technology

    1982-10-26

    US Army Tank-Automotive Command, Warren, Michigan; US Army Materiel Systems Analysis Activity, Aberdeen Proving Grounds, Maryland... Technology, Pasadena, California 91103... Senior Research Associate, Institute for Technoeconomic Systems, Department of Industrial... Further investigation of the action precedence graphs together with their application to more complex manipulator tasks and analysis of their

  6. Virts in Cupola

    NASA Image and Video Library

    2015-05-31

    ISS043E276404 (05/31/2015) --- Expedition 43 Commander and NASA astronaut Terry Virts is seen here in the International Space Station’s Cupola module, a 360 degree Earth and space viewing platform. The module also contains a robotic workstation for controlling the station’s main robotic arm, Canadarm2, which is used for a variety of operations including the remote grappling of visiting cargo vehicles.

  7. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component, and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual-Reality-based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's task of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture will be described comprehensively, its main building blocks will be discussed, and one realization built on an open-source real-time operating system will be presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications will be explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is only one example which will be described.

  8. Intelligent manipulation technique for multi-branch robotic systems

    NASA Technical Reports Server (NTRS)

    Chen, Alexander Y. K.; Chen, Eugene Y. S.

    1990-01-01

    New analytical developments in kinematics planning are reported. The INtelligent KInematics Planner (INKIP) consists of a kinematics spline theory and an adaptive logic annealing process. A novel framework for a robot learning mechanism is also introduced: the FUzzy LOgic Self-Organized Neural Networks (FULOSONN) framework integrates fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for implementing nominal robotics knowledge; and self-organized neural networks for the dynamic evolution of knowledge. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported. A decision was made to incorporate Local Area Network (LAN) technology in the overall communication system.

  9. What Force and Metrics for What End - Characterizing the Future Leadership and Force

    DTIC Science & Technology

    2006-06-01

    interest of humanity as a whole, and may overrule all other laws whenever it seems necessary for the ultimate good. Source: Asimov, Isaac. "I, Robot...Robotics + the Zeroth Law" (Asimov, 2006 Command and Control Research and Technology Symposium "The State of the Art and the State of the Practice") ...outcomes. Here the author returns to the introduction of the example derived from Asimov (1940, 1970) and Brin (1999) "four laws of robotics...

  10. A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot

    NASA Astrophysics Data System (ADS)

    Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan

    2010-02-01

    The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot in real-life applications. The system serves as an important building block of a complete, vision-based mobile robot system integrated indigenously in our laboratory. It is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera vision system, where the PIC microcontroller drives servo motors in interrupt-driven mode to control additional degrees of freedom of the vision system. The performance of the developed system is tested with several user-specified commands issued from the PC.

  11. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    NASA Technical Reports Server (NTRS)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon," is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace," which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to the Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results are reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  12. A fault-tolerant intelligent robotic control system

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Tso, Kam Sing

    1993-01-01

    This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.

  13. Direct model reference adaptive control of robotic arms

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.

    1993-01-01

    The results of controlling a PUMA 560 Robotic Manipulator and the NASA shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Direct Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned in simulation and then used to control the manipulator, with minimum-jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in simulation. Satisfactory performance was achieved both in simulation and on the actual robot, and the responses showed that the algorithm is robust to sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate its applicability to simulated dynamics of the shuttle RMS.

  14. Hierarchical Robot Control System and Method for Controlling Select Degrees of Freedom of an Object Using Multiple Manipulators

    NASA Technical Reports Server (NTRS)

    Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor)

    2013-01-01

    A robotic system includes a robot having manipulators for grasping an object using one of a plurality of grasp types during a primary task, and a controller. The controller controls the manipulators during the primary task using a multiple-task control hierarchy, and automatically parameterizes the internal forces of the system for each grasp type in response to an input signal. The primary task is defined at an object level of control, e.g., using a closed-chain transformation, such that only select degrees of freedom are commanded for the object. A control system for the robotic system has a host machine and an algorithm for controlling the manipulators using the above hierarchy. A method for controlling the system includes receiving and processing the input signal using the host machine, including defining the primary task at the object level of control, e.g., using a closed-chain definition, and parameterizing the internal forces for each grasp type.

  15. Robot Teleoperation and Perception Assistance with a Virtual Holographic Display

    NASA Technical Reports Server (NTRS)

    Goddard, Charles O.

    2012-01-01

    Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, providing such an interface requires knowledge of what objects are in the robot's environment and how they can be interacted with. In addition, many desirable tasks are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined by letting the user assist the robot's perception, identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space and a stylus whose position and rotation are tracked within it. Using this system, a possible interface for this sort of robot control is proposed.

  16. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years, audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command-recognition system based on audio-visual information, intended to control the da Vinci laparoscopic robot. The audio signal is parametrized using Mel Frequency Cepstral Coefficients (MFCCs). In addition, features based on the points that define the mouth's outer contour, according to the MPEG-4 standard, are used to extract the visual speech information.
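    As a point of reference for the audio front end, the following is a minimal Python sketch of MFCC extraction using the librosa library; the file name, sampling rate, and frame parameters are illustrative assumptions, and the visual (MPEG-4 lip contour) features are not shown.

        import librosa

        # Load one spoken command (file name is illustrative) at 16 kHz and
        # compute 13 MFCCs per 25 ms frame with a 10 ms hop, a common
        # parametrization for command recognition.
        y, sr = librosa.load("command.wav", sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                    n_fft=400, hop_length=160)
        print(mfcc.shape)  # (13, number_of_frames)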

  17. Controlling Robots with the Mind.

    ERIC Educational Resources Information Center

    Nicolelis, Miguel A. L.; Chapin, John K.

    2002-01-01

    Reports on research that shows that people with nerve or limb injuries may one day be able to command wheelchairs, prosthetics, and even paralyzed arms and legs by "thinking them through" the motions. (Author/MM)

  18. A task control architecture for autonomous robots

    NASA Technical Reports Server (NTRS)

    Simmons, Reid; Mitchell, Tom

    1990-01-01

    An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.

  19. Design and control of a macro-micro robot for precise force applications

    NASA Technical Reports Server (NTRS)

    Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff

    1993-01-01

    Creating a robot that can interact delicately with its environment has been the goal of much research. Two difficulties have made this goal hard to attain. First, control strategies that enable precise force manipulation have been too computationally complex for available controllers to execute in real time. Second, a robot mechanism that can quickly and precisely execute a force command is difficult to design: actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system capable of delicate manipulation. A modular, high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8-degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load-balanced for maximum execution speed on the multiprocessor system. Delicate force tasks such as polishing, finishing, cleaning, and deburring are the target applications of the robot.
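    For readers unfamiliar with the cited method, the following is a minimal single-axis Python sketch of an impedance control law, not the paper's implementation; the gains are illustrative assumptions.

        def impedance_command(x, xdot, x_des, xdot_des, k=500.0, b=40.0):
            # Target impedance F = k*(x_des - x) + b*(xdot_des - xdot):
            # the tip behaves like a spring-damper about the desired
            # trajectory, keeping contact forces bounded and predictable
            # during polishing- or deburring-style tasks.
            return k * (x_des - x) + b * (xdot_des - xdot)

    A full implementation would add an inertia term and, as the abstract describes, distribute the computation across processors for real-time execution.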

  20. Technology transfer: Imaging tracker to robotic controller

    NASA Technical Reports Server (NTRS)

    Otaguro, M. S.; Kesler, L. O.; Land, Ken; Erwin, Harry; Rhoades, Don

    1988-01-01

    The transformation of an imaging tracker into a robotic controller is described. A multimode tracker was developed for fire-and-forget missile systems; it locks onto target images within an acquisition window, using multiple image-tracking algorithms to provide guidance commands to missile control systems. This basic tracker technology, with the addition of a ranging algorithm based on sizing a cooperative target, is used to perform autonomous guidance and control of a platform for an Advanced Development Project on automation and robotics. A ranging tracker is required to provide the positioning necessary for robotic control. A simple functional demonstration of the feasibility of this approach was performed and is described; more realistic demonstrations are under way at NASA-JSC. In particular, the modified tracker, or robotic controller, will be used to autonomously guide the Manned Maneuvering Unit (MMU) to targets such as disabled astronauts or tools as part of the EVA Retriever effort. It will also be used to control the orbiter's Remote Manipulator System (RMS) in autonomous approach and positioning demonstrations. These efforts are also discussed.

  1. Unmanned ground vehicles for integrated force protection

    NASA Astrophysics Data System (ADS)

    Carroll, Daniel M.; Mikell, Kenneth; Denewiler, Thomas

    2004-09-01

    The combination of Command and Control (C2) systems with Unmanned Ground Vehicles (UGVs) provides Integrated Force Protection from the Robotic Operation Command Center. Autonomous UGVs are directed as Force Projection units. UGV payloads and fixed sensors provide situational awareness while unattended munitions provide a less-than-lethal response capability. Remote resources serve as automated interfaces to legacy physical devices such as manned response vehicles, barrier gates, fence openings, garage doors, and remote power on/off capability for unmanned systems. The Robotic Operations Command Center executes the Multiple Resource Host Architecture (MRHA) to simultaneously control heterogeneous unmanned systems. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. A control hierarchy of missions and duty rosters support autonomous operations. This paper provides an overview of the key technology enablers for Integrated Force Protection with details on a force-on-force scenario to test and demonstrate concept of operations using Unmanned Ground Vehicles. Special attention is given to development and applications for the Remote Detection Challenge and Response (REDCAR) initiative for Integrated Base Defense.

  2. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free-flying autonomous vehicle that leaves a berthing unit to accomplish a variety of inspection tasks around the Space Station and then returns to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice commands to change its attitude, move at a constant velocity, and move to a predefined location along a self-generated path. This mobile robot control system requires the integration of traditional command and control techniques with a number of AI technologies: speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach is needed to meet the real-time computing requirements. To study the integration of these elements, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system's operation and structure are discussed.

  3. Telerobotics: methodology for the development of through-the-Internet robotic teleoperated system

    NASA Astrophysics Data System (ADS)

    Alvares, Alberto J.; Caribe de Carvalho, Guilherme; Romariz, Luiz S. J.; Alfaro, Sadek C. A.

    1999-11-01

    This work presents a methodology for developing teleoperated robotic systems that operate through the Internet. First, a bibliographical review of telerobotic systems that use the Internet as the control channel is presented. The methodology is then implemented and tested through the development of two systems. The first, called RobWebCam, is a manipulator with two degrees of freedom commanded remotely through the Internet; the second, called RobWebLink, teleoperates an ABB (Asea Brown Boveri) industrial robot with six degrees of freedom.

  4. Planning and Control for Microassembly of Structures Composed of Stress-Engineered MEMS Microrobots

    PubMed Central

    Donald, Bruce R.; Levey, Christopher G.; Paprotny, Igor; Rus, Daniela

    2013-01-01

    We present control strategies that implement planar microassembly using groups of stress-engineered MEMS microrobots (MicroStressBots) controlled through a single global control signal. The global control signal couples the motion of the devices, causing the system to be highly underactuated. In order for the robots to assemble into arbitrary planar shapes despite the high degree of underactuation, it is desirable that each robot be independently maneuverable (independently controllable). To achieve independent control, we fabricated robots that behave (move) differently from one another in response to the same global control signal. We harnessed this differentiation to develop assembly control strategies, where the assembly goal is a desired geometric shape that can be obtained by connecting the chassis of individual robots. We derived and experimentally tested assembly plans that command some of the robots to make progress toward the goal, while other robots are constrained to remain in small circular trajectories (closed-loop orbits) until it is their turn to move into the goal shape. Our control strategies were tested on systems of fabricated MicroStressBots. The robots are 240–280 μm × 60 μm × 7–20 μm in size and move simultaneously within a single operating environment. We demonstrated the feasibility of our control scheme by accurately assembling five different types of planar microstructures. PMID:23580796

  5. Virtual Reality Based Support System for Layout Planning and Programming of an Industrial Robotic Work Cell

    PubMed Central

    Yap, Hwa Jen; Taha, Zahari; Md Dawal, Siti Zawiah; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated in current industrial and market demands. In this research, virtual reality (VR) technology is used to improve human-robot interface, whereby complicated commands or programming knowledge is not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes, which are VR-Robotic Work Cell Layout (VR-RoWL) and VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, whereby VR-RoT is developed to overcome safety issues and lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed for inexperienced users to generate robot commands without damaging the robot or interrupting the production line. The user is able to attempt numerous times to attain an optimum solution. A case study is conducted in the Robotics Laboratory to assemble an electronics casing and it is found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual reality based solution approach can be implemented in an industrial robotic work cell. PMID:25360663

  7. Human-Robot Cooperation with Commands Embedded in Actions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kazuki; Yamada, Seiji

    In this paper, we first propose a novel interaction model, CEA (Commands Embedded in Actions), which explains how some existing systems reduce their users' workload. We then extend CEA into the ECEA (Extended CEA) model, which enables robots to achieve more complicated tasks. For this extension, we employ an ACS (Action Coding System) that describes segmented human acts and clarifies the relationship between the user's actions and the robot's actions in a task. The ACS exploits CEA's strong point: it lets a user send a command to a robot through the natural actions he or she already performs for the task. The instance of the ECEA derived with the ACS is a temporal extension in which the user maintains the final state of a previous action. We apply this temporal extension of the ECEA to a sweeping task, realizing a high-level cooperative task between the user and the robot: a robot with simple reactive behavior can sweep the region under an object when the user picks the object up. In addition, we measure the user's cognitive load under the ECEA and under a traditional method, DCM (Direct Commanding Method), in the sweeping task, and compare the two. The results show that the ECEA imposes a significantly lower cognitive load than the DCM.

  8. Command generator tracker based direct model reference adaptive control of a PUMA 560 manipulator. Thesis

    NASA Technical Reports Server (NTRS)

    Swift, David C.

    1992-01-01

    This project dealt with the application of a Direct Model Reference Adaptive Control algorithm to the control of a PUMA 560 Robotic Manipulator. This chapter will present some motivation for using Direct Model Reference Adaptive Control, followed by a brief historical review, the project goals, and a summary of the subsequent chapters.

  9. An integrated dexterous robotic testbed for space applications

    NASA Technical Reports Server (NTRS)

    Li, Larry C.; Nguyen, Hai; Sauer, Edward

    1992-01-01

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide non-contact sensing of nearby objects. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware was designed to satisfy the requirements of both teleoperated and autonomous operation. The software was designed to exploit parallel processing, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.

  10. Robot Control Through Brain Computer Interface For Patterns Generation

    NASA Astrophysics Data System (ADS)

    Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.

    2011-09-01

    A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. Such a system can allow people with motor disabilities to control external devices through real-time modulation of their brain waves. In this context, an EEG-based BCI system that allows creative luminous artistic representations is presented. The system, designed and realized in our laboratory, interfaces the BCI2000 platform, which performs real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.

  11. Single-Command Approach and Instrument Placement by a Robot on a Target

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Cheng, Yang

    2005-01-01

    AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.

  12. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    PubMed Central

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

    This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of the piezoelectric motors and enables simultaneous robot motion and MR imaging. The modular robot control interface software performs image-based registration and kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller with a new implementation of the open network communication protocol OpenIGTLink. Compatibility of the robot was comprehensively evaluated inside a 3-Tesla MRI scanner using standard imaging sequences; the signal-to-noise ratio (SNR) loss is limited to 15%, and image deterioration due to the presence and motion of the robot was unobservable. Twenty-five targeted needle placements inside gelatin phantoms using an 18-gauge ceramic needle demonstrated 0.87 mm root mean square (RMS) error in 3D Euclidean distance, based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962

  13. RAPID: Collaborative Commanding and Monitoring of Lunar Assets

    NASA Technical Reports Server (NTRS)

    Torres, Recaredo J.; Mittman, David S.; Powell, Mark W.; Norris, Jeffrey S.; Joswig, Joseph C.; Crockett, Thomas M.; Abramyan, Lucy; Shams, Khawaja S.; Wallick, Michael; Allan, Mark

    2011-01-01

    RAPID (Robot Application Programming Interface Delegate) software utilizes highly robust technology to facilitate commanding and monitoring of lunar assets. RAPID provides intercenter communication, since these assets are developed at multiple NASA centers. RAPID is targeted at lunar operations; specifically, operations that deal with robotic assets, cranes, and astronaut spacesuits, often developed at different NASA centers. RAPID allows for a uniform way to command and monitor these assets: commands can be issued to take images, and monitoring is done via telemetry data from the asset. There are two unique features to RAPID. First, it allows any operator from any NASA center to control any NASA lunar asset, regardless of location. Second, by abstracting the native language for specific assets to a common set of messages, an operator may control and monitor any NASA lunar asset after being trained only on the use of RAPID, rather than on each specific asset. RAPID is easier to use and more powerful than its predecessor, the Astronaut Interface Device (AID). With the new robust DDS (Data Distribution Service) middleware, development in RAPID has sped up significantly over the old middleware. The API is built upon the Java Eclipse Platform, which, combined with DDS, provides a platform-independent software architecture, simplifying development of RAPID components. As RAPID continues to evolve and new messages are designed and implemented, operators for future lunar missions will have a rich environment for commanding and monitoring assets.

  14. Dragon Spacecraft grappled by SSRMS

    NASA Image and Video Library

    2015-04-17

    ISS043E122264 (04/17/2015) --- The Canadarm2 reaches out to grapple the SpaceX Dragon cargo spacecraft and prepare it to be pulled into its port on the International Space Station. Robotics officers at Mission Control at the Johnson Space Center in Houston, Texas, will command the Canadarm2 robotic arm to maneuver Dragon to its installation position at the Earth-facing port of the Harmony module, where it will reside for the next five weeks.

  15. Dragon crew shots

    NASA Image and Video Library

    2012-10-10

    ISS033-E-011279 (10 Oct. 2012) --- NASA astronaut Sunita Williams, Expedition 33 commander; and Japan Aerospace Exploration Agency astronaut Aki Hoshide, flight engineer, work the controls at the robotics workstation in the International Space Station’s seven-windowed Cupola during the rendezvous and berthing of the SpaceX Dragon commercial cargo craft. Using the Canadarm2 robotic arm, Williams and Hoshide captured and berthed Dragon to the Earth-facing side of the Harmony node Oct. 10, 2012.

  16. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  17. Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators

    PubMed Central

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2013-01-01

    Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body-ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body-ownership illusions were induced by correlating sensory inputs such as vision, touch and proprioception. In the illusion presented here, however, none of these sensations are integrated except vision. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is sufficient to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe these findings will contribute to the improvement of telepresence systems in which operators incorporate BMI-operated robots into their body representations. PMID:23928891

  18. Toward a practical mobile robotic aid system for people with severe physical disabilities.

    PubMed

    Regalbuto, M A; Krouskop, T A; Cheatham, J B

    1992-01-01

    A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.

  19. Final report for LDRD project 11-0783 : directed robots for increased military manpower effectiveness.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohrer, Brandon Robinson; Rothganger, Fredrick H.; Wagner, John S.

    The purpose of this LDRD is to develop technology allowing warfighters to provide high-level commands to their unmanned assets, freeing them to command a group of them or commit the bulk of their attention elsewhere. To this end, a brain-emulating cognition and control architecture (BECCA) was developed, incorporating novel and uniquely capable feature creation and reinforcement learning algorithms. BECCA was demonstrated on both a mobile manipulator platform and on a seven degree of freedom serial link robot arm. Existing military ground robots are almost universally teleoperated and occupy the complete attention of an operator. They may remove a soldier from harm's way, but they do not necessarily reduce manpower requirements. Current research efforts to solve the problem of autonomous operation in an unstructured, dynamic environment fall short of the desired performance. In order to increase the effectiveness of unmanned vehicle (UV) operators, we proposed to develop robots that can be 'directed' rather than remote-controlled. They are instructed and trained by human operators, rather than driven. The technical approach is modeled closely on psychological and neuroscientific models of human learning. Two Sandia-developed models are utilized in this effort: the Sandia Cognitive Framework (SCF), a cognitive psychology-based model of human processes, and BECCA, a psychophysical-based model of learning, motor control, and conceptualization. Together, these models span the functional space from perceptuo-motor abilities to high-level motivational and attentional processes.

  20. iss050e059608

    NASA Image and Video Library

    2017-03-24

    iss050e059608 (03/24/2017) --- NASA astronaut Peggy Whitson controls the robotic arm aboard the International Space Station during a spacewalk. Expedition 50 Commander Shane Kimbrough of NASA and Flight Engineer Thomas Pesquet of ESA (European Space Agency) conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  1. Dexterity-Enhanced Telerobotic Microsurgery

    NASA Technical Reports Server (NTRS)

    Charles, Steve; Das, Hari; Ohm, Timothy; Boswell, Curtis; Rodriguez, Guillermo; Steele, Robert; Istrate, Dan

    1997-01-01

    The work reported in this paper is the result of a collaboration between researchers at the Jet Propulsion Laboratory and Steve Charles, MD, a vitreo-retinal surgeon. The Robot Assisted MicroSurgery (RAMS) telerobotic workstation developed at JPL is a prototype of a system that will be completely under the manual control of a surgeon. The system has a slave robot that holds surgical instruments. The slave robot's motions replicate, in six degrees of freedom, those of the surgeon's hand, measured using a master input device with a surgical-instrument-shaped handle. The surgeon commands motions for the instrument by moving the handle along the desired trajectories; the trajectories are measured, filtered, and scaled down, then used to drive the slave robot.

  2. Controlling a Four-Quadrant Brushless Three-Phase dc Motor

    NASA Technical Reports Server (NTRS)

    Nola, F. J.

    1986-01-01

    Control circuit commutates windings of brushless, three-phase, permanent-magnet motor operating from power supply. With single analog command voltage, controller makes motor accelerate, drive steadily, or brake regeneratively, in clockwise or counterclockwise direction. Controller well suited for use with energy-storage flywheels, actuators for aircraft-control surfaces, cranes, industrial robots, and other electromechanical systems requiring bidirectional control or sudden stopping and reversal.

  3. XBox Input -Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-10-03

    Contains a class for connecting to the Xbox 360 controller, displaying the user inputs (buttons, triggers, analog sticks), and controlling the rumble motors. Also contains classes for converting the raw Xbox 360 controller inputs into meaningful commands for the following objects: • Robot arms - Provides joint control and several tool control schemes • UGVs - Provides translational and rotational commands for "skid-steer" vehicles • Pan-tilt units - Provides several modes of control including velocity, position, and point-tracking • Head-mounted displays (HMD) - Controls the viewpoint of an HMD • Umbra frames - Controls the position and orientation of an Umbra posrot object • Umbra graphics window - Provides several modes of control for the Umbra OSG window viewpoint including free-fly, cursor-focused, and object following.
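    As an illustration of the skid-steer conversion mentioned above, the following is a minimal Python sketch, not the toolbox's actual code; the axis conventions, ranges, and function name are assumptions.

        def skid_steer(stick_x, stick_y, v_max=1.0):
            # Map analog-stick deflection (each axis in [-1, 1]) to
            # left/right track speeds: forward/back deflection becomes a
            # translational command, left/right a rotational one.
            left = stick_y + stick_x
            right = stick_y - stick_x
            # Normalize so neither track exceeds v_max while preserving
            # the turning ratio.
            m = max(abs(left), abs(right), 1.0)
            return v_max * left / m, v_max * right / m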

  4. Using a cognitive architecture for general purpose service robot control

    NASA Astrophysics Data System (ADS)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

    A humanoid service robot equipped with a set of simple action skills, including navigating, grasping, and recognising objects or people, among others, is considered in this paper. Using those skills, the robot should complete a voice command expressed in natural language that encodes a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner is used to decide which skills to activate or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner, selecting which action the robot should complete and steering it towards the goal. Our proposal allows new goals to be added to the robot just by adding new skills, without the need to encode new plans. The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general-purpose service robot.

  5. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  6. iss053e156180

    NASA Image and Video Library

    2017-11-09

    iss053e156180 (Nov. 9, 2017) --- Expedition 53 Commander Randy Bresnik (foreground) and Flight Engineer Paolo Nespoli are at the controls of the robotics workstation in the Destiny laboratory module training for the approach, rendezvous and grapple of the Orbital ATK Cygnus resupply ship. Both astronauts were in the cupola operating the Canadarm2 robotic arm to grapple Cygnus when it arrived Nov. 14, 2017, delivering nearly 7,400 pounds of crew supplies, science experiments, computer gear, vehicle equipment and spacewalk hardware.

  7. iss053e156160

    NASA Image and Video Library

    2017-11-09

    iss053e156160 (Nov. 9, 2017) --- Expedition 53 Commander Randy Bresnik is at the controls of the robotics workstation in the Destiny laboratory module training for the approach, rendezvous and grapple of the Orbital ATK Cygnus resupply ship. He and Flight Engineer Paolo Nespoli were in the cupola operating the Canadarm2 robotic arm to grapple Cygnus when it arrived Nov. 14, 2017, delivering nearly 7,400 pounds of crew supplies, science experiments, computer gear, vehicle equipment and spacewalk hardware.

  8. A software toolbox for robotics

    NASA Technical Reports Server (NTRS)

    Sanwal, J. C.

    1985-01-01

    A method for programming cooperating manipulators, guided by a geometric description of the task to be performed, is given. This requires a suitable language and a means of describing the workplace and the objects in it in geometric terms. A task-level command language and its implementation for multiple, concurrently driven robot arms are described. The language is suitable for driving a cell in which manipulators, end effectors, and sensors are controlled by their own dedicated processors, which communicate with each other through a network. A mechanism that keeps track of the history of already-executed commands allows the command language for the manipulators to be event-driven. A frame-based world modeling system describes the objects in the work environment and the relationships that hold between them, providing a versatile tool for managing information about the world model. Default actions are invoked when the database is updated or accessed, and most first-level error recovery is also invoked by the database through the concept of demons (a sketch follows). The package can be used to generate task-level commands in a problem solver or planner.
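    The following is a minimal Python sketch of the frame-plus-demon idea described above, assuming nothing about the original implementation; class, slot, and callback names are illustrative.

        class Frame:
            # A frame-based world-model entry: named slots plus demons,
            # i.e. callbacks fired when a slot is updated. Default actions
            # and first-level error recovery can hang off these triggers.
            def __init__(self, name):
                self.name = name
                self.slots = {}
                self.demons = {}

            def on_update(self, slot, demon):
                self.demons.setdefault(slot, []).append(demon)

            def set(self, slot, value):
                self.slots[slot] = value
                for demon in self.demons.get(slot, []):
                    demon(self, value)

        # Usage: re-plan a grasp whenever an object's pose changes.
        part = Frame("part_17")
        part.on_update("pose",
                       lambda f, v: print(f"{f.name} moved; replan grasp"))
        part.set("pose", (0.4, 0.1, 0.0))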

  9. Stanford Aerospace Research Laboratory research overview

    NASA Technical Reports Server (NTRS)

    Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.

    1993-01-01

    Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.

  10. The magic glove: a gesture-based remote controller for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark

    2012-01-01

    This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various challenges of the competition, an HRI is useful for moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system consists of a microcontroller and sensors worn by the operator as a glove, and it is capable of recognizing hand signals, which are then transmitted to the robot over a wireless link. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer detects hand orientation and passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the command to the vehicle, which acts accordingly. The user-friendly Magic Glove was first demonstrated successfully in a Player/Stage simulation environment. The gesture-based functionality was then also verified on an actual robot and demonstrated to judges at the 2010 IGVC.
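    A minimal Python sketch of the orientation-to-command interpretation, assuming the accelerometer reports gravity components (in g) on each axis; the thresholds and command names are illustrative, not the paper's.

        def gesture_to_command(ax, ay, threshold=0.5):
            # Tilting the glove pitches/rolls the gravity vector into the
            # x/y axes, so simple thresholds recover the intended gesture.
            if ay > threshold:
                return "FORWARD"
            if ay < -threshold:
                return "REVERSE"
            if ax > threshold:
                return "TURN_RIGHT"
            if ax < -threshold:
                return "TURN_LEFT"
            return "STOP"

    In the actual system, each recognized token would then be sent over the Bluetooth link to the vehicle.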

  11. Software for Automation of Real-Time Agents, Version 2

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steve; Chouinard, Caroline; Engelhardt, Barbara; Wilklow, Colette; Mutz, Darren; Knight, Russell; Rabideau, Gregg

    2005-01-01

    Version 2 of Closed Loop Execution and Recovery (CLEaR) has been developed. CLEaR is an artificial-intelligence computer program for use in planning and executing the actions of autonomous agents, including, for example, Deep Space Network (DSN) antenna ground stations, robotic exploratory ground vehicles (rovers), robotic aircraft (UAVs), and robotic spacecraft. CLEaR automates the generation and execution of command sequences, monitors the sequence execution, and modifies the command sequence in response to execution deviations and failures as well as new goals for the agent to achieve. The development of CLEaR has focused on unifying planning and execution to increase the agent's ability to perform under tight resource and time constraints, coupled with uncertainty in how much time and how many resources a task will require. This unification is realized by extending the traditional three-tier robotic control architecture to increase the interaction between the software components that perform deliberative and reactive functions. The increased interaction reduces the need to replan, enables earlier detection of the need to replan, and enables replanning to occur before an agent enters a failure state.

  12. Towards a new modality-independent interface for a robotic wheelchair.

    PubMed

    Bastos-Filho, Teodiano Freire; Cheein, Fernando Auat; Müller, Sandra Mara Torres; Celeste, Wanderley Cardoso; de la Cruz, Celso; Cavalieri, Daniel Cruz; Sarcinelli-Filho, Mário; Amaral, Paulo Faria Santos; Perez, Elisa; Soria, Carlos Miguel; Carelli, Ricardo

    2014-05-01

    This work presents the development of a robotic wheelchair that can be commanded by users in a supervised way or by a fully automatic, unsupervised navigation system. It offers flexibility in choosing the modality used to command the wheelchair and is suitable for people with different levels of disability. Users can command the wheelchair with eye blinks, eye movements, head movements, sip-and-puff, or brain signals. The wheelchair can also operate as an auto-guided vehicle, following metallic tapes, or in a fully autonomous way. The system provides an easy-to-use, flexible graphical user interface on board a personal digital assistant, through which users choose the commands to be sent to the robotic wheelchair. Several experiments were carried out with people with disabilities, and the results validate the developed system as an assistive tool for people with distinct levels of disability.

  13. Human Cognitive Processes in Command and Control Planning. 3. Determining Basic Processes Involved in Planning in Time and Space (Cognitieve Processen in Command and Control Planning. 3. Basisprocessen in Planning in Tijd en Ruimte)

    DTIC Science & Technology

    1991-08-07

    ...contains spatial components. The study had two goals: developing a method for determining the cognitive processes involved in planning, and developing a model of efficient planning for the task used in this study. Two planners gave verbal and graphical protocols while planning the most efficient route for a shop robot to collect goods in a shop. For twelve...

  14. Voice Controlled Wheelchair

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Michael Condon, a quadriplegic from Pasadena, California, demonstrates the NASA-developed voice-controlled wheelchair and its manipulator, which can pick up packages, open doors, turn a TV knob, and perform a variety of other functions. A possible boon to paralyzed and other severely handicapped persons, the chair-manipulator system responds to 35 one-word voice commands, such as "go," "stop," "up," "down," "right," "left," "forward," "backward." The heart of the system is a voice-command analyzer that utilizes a minicomputer. Commands are taught to the computer by the patient repeating them a number of times; thereafter the analyzer recognizes commands only in that patient's particular speech pattern. The computer translates commands into electrical signals that activate the appropriate motors and cause the desired motion of the chair or manipulator. Based on teleoperator and robot technology from space-related programs, the voice-controlled system was developed by the Jet Propulsion Laboratory under the joint sponsorship of NASA and the Veterans Administration. The wheelchair-manipulator has been tested at Rancho Los Amigos Hospital, Downey, California, and is being evaluated at the VA Prosthetics Center in New York City.
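    The command-to-actuation step described above amounts to a dispatch table. The following is a minimal Python sketch under that reading; the vocabulary comes from the caption, but the handler bodies and names are illustrative.

        def drive(action):
            # Stand-in for the electrical signals that activate the motors.
            print("activating:", action)

        COMMANDS = {
            "go":       lambda: drive("resume motion"),
            "stop":     lambda: drive("halt all motors"),
            "up":       lambda: drive("manipulator up"),
            "down":     lambda: drive("manipulator down"),
            "left":     lambda: drive("steer left"),
            "right":    lambda: drive("steer right"),
            "forward":  lambda: drive("chair forward"),
            "backward": lambda: drive("chair backward"),
        }

        def on_recognized(word):
            # Called by the voice-command analyzer for each accepted word.
            action = COMMANDS.get(word)
            if action is not None:
                action()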

  15. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1991-01-01

    Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies have been merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
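    A minimal Python sketch of the two-component law described in this patent abstract: a model-based feedforward term plus an adaptive PID feedback term. The model interface and the gain-adaptation rule shown here are illustrative placeholders, not the patented formulation.

        import numpy as np

        class FeedforwardPlusPID:
            # Total command: u = u_ff (model-based) + u_fb (adaptive PID).
            def __init__(self, n_joints, kp, ki, kd, gamma=0.01):
                self.kp = np.full(n_joints, kp, dtype=float)
                self.ki, self.kd = ki, kd
                self.gamma = gamma              # adaptation rate (assumed)
                self.e_int = np.zeros(n_joints)
                self.e_prev = np.zeros(n_joints)

            def update(self, q, q_des, qdot_des, qddot_des, model, dt):
                # Feedforward: whatever part of the manipulator dynamics
                # the model captures (model.inverse_dynamics is assumed).
                u_ff = model.inverse_dynamics(q_des, qdot_des, qddot_des)

                # Feedback: PID on the tracking error, with a toy gain
                # adaptation that grows kp under persistent error.
                e = q_des - q
                self.e_int += e * dt
                e_dot = (e - self.e_prev) / dt
                self.e_prev = e
                self.kp += self.gamma * e * e
                u_fb = self.kp * e + self.ki * self.e_int + self.kd * e_dot
                return u_ff + u_fb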

  16. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies were merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components are a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.

  17. Design and implementation of a robot control system with traded and shared control capability

    NASA Technical Reports Server (NTRS)

    Hayati, S.; Venkataraman, S. T.

    1989-01-01

    Preliminary results are reported from efforts to design and develop a robotic system that will accept and execute commands from either a six-axis teleoperator device or an autonomous planner, or combine the two. Such a system should have both traded and shared control capability. A sharing strategy is presented whereby the overall system retains the positive features of teleoperated and autonomous operation while shedding their individual negative features. A two-tiered shared control architecture is considered here, consisting of a task level and a servo level. Also presented is a computer architecture for the implementation of this system, including a description of the hardware and software.

  18. The MITy micro-rover: Sensing, control, and operation

    NASA Technical Reports Server (NTRS)

    Malafeew, Eric; Kaliardos, William

    1994-01-01

    The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.

  19. Humanoid Robotics: Real-Time Object Oriented Programming

    NASA Technical Reports Server (NTRS)

    Newton, Jason E.

    2005-01-01

    Robot programming today is often done in a procedural fashion, without object-oriented programming. In order to keep a robust architecture that allows for easy expansion of capabilities and a truly modular design, object-oriented programming is required. However, concepts from object-oriented programming are not typically applied to real-time environments. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework that abstracts control of the robot into simple logical commands in a real-time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software which operates multiple independently developed control systems simultaneously and the safety measures which keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.

  20. Augmented reality and haptic interfaces for robot-assisted surgery.

    PubMed

    Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N

    2012-03-01

    Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Position calibration of a 3-DOF hand-controller with hybrid structure

    NASA Astrophysics Data System (ADS)

    Zhu, Chengcheng; Song, Aiguo

    2017-09-01

    A hand-controller is a human-robot interactive device, which measures the 3-DOF (Degree of Freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand. Thus, the precision of 3-DOF position measurement is a key performance factor for hand-controllers. However, with a hybrid-type 3-DOF hand-controller, various errors occur that are considered to originate from machining and assembly variations within the device. This paper presents a calibration method that improves the position tracking accuracy of hybrid-type hand-controllers by determining the actual size of the hand-controller parts. By re-measuring and re-calibrating this kind of hand-controller, the actual size of the key parts that cause errors is determined. Modifying the formula parameters with the actual sizes obtained in the calibration process improves the end-position tracking accuracy of the device.

  2. Motor prediction in Brain-Computer Interfaces for controlling mobile robots.

    PubMed

    Geng, Tao; Gan, John Q

    2008-01-01

    EEG-based Brain-Computer Interface (BCI) can be regarded as a new channel for motor control except that it does not involve muscles. Normal neuromuscular motor control has two fundamental components: (1) to control the body, and (2) to predict the consequences of the control command, which is called motor prediction. In this study, after training with a specially designed BCI paradigm based on motor imagery, two subjects learnt to predict the time course of some features of the EEG signals. It is shown that, with this newly-obtained motor prediction skill, subjects can use motor imagery of feet to directly control a mobile robot to avoid obstacles and reach a small target in a time-critical scenario.

  3. Adaptation mechanism of interlimb coordination in human split-belt treadmill walking through learning of foot contact timing: a robotics study

    PubMed Central

    Fujiki, Soichiro; Aoi, Shinya; Funato, Tetsuro; Tomita, Nozomi; Senda, Kei; Tsuchiya, Kazuo

    2015-01-01

    Human walking behaviour adaptation strategies have previously been examined using split-belt treadmills, which have two parallel independently controlled belts. In such human split-belt treadmill walking, two types of adaptations have been identified: early and late. Early-type adaptations appear as rapid changes in interlimb and intralimb coordination activities when the belt speeds of the treadmill change between tied (same speed for both belts) and split-belt (different speeds for each belt) configurations. By contrast, late-type adaptations occur after the early-type adaptations as a gradual change and only involve interlimb coordination. Furthermore, interlimb coordination shows after-effects that are related to these adaptations. It has been suggested that these adaptations are governed primarily by the spinal cord and cerebellum, but the underlying mechanism remains unclear. Because various physiological findings suggest that foot contact timing is crucial to adaptive locomotion, this paper reports on the development of a two-layered control model for walking, composed of spinal and cerebellar models, with foot contact timing as the focus of control. The spinal model generates rhythmic motor commands using an oscillator network based on a central pattern generator and modulates the commands formulated in immediate response to foot contact, while the cerebellar model modifies motor commands through learning based on error information related to differences between the predicted and actual foot contact timings of each leg. We investigated adaptive behaviour and its mechanism by split-belt treadmill walking experiments using both computer simulations and an experimental bipedal robot. Our results showed that the robot exhibited rapid changes in interlimb and intralimb coordination that were similar to the early-type adaptations observed in humans. In addition, despite the lack of direct interlimb coordination control, gradual changes and after-effects in the interlimb coordination appeared in a manner that was similar to the late-type adaptations and after-effects observed in humans. The adaptation results of the robot were then evaluated in comparison with human split-belt treadmill walking, and the adaptation mechanism was clarified from a dynamic viewpoint. PMID:26289658
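
    The division of labour between the two layers can be illustrated with a single-leg phase oscillator: the spinal layer resets its phase immediately at foot contact, while the cerebellar layer slowly learns the expected contact phase from the timing error. A toy sketch under those stated assumptions, not the paper's oscillator network.

    ```python
    import numpy as np

    def walk_one_leg(omega, contact_fn, n_steps, dt, gain=0.1):
        """Phase oscillator with contact reset (spinal layer) and learned
        contact timing (cerebellar layer). Illustrative toy model."""
        phi = 0.0                  # oscillator phase driving the motor command
        phi_expected = np.pi       # learned phase at which contact is expected
        for k in range(n_steps):
            phi = (phi + omega * dt) % (2.0 * np.pi)
            if contact_fn(k * dt):  # actual foot contact reported by sensors
                # Timing error as a phase difference, wrapped to [-pi, pi).
                err = (phi - phi_expected + np.pi) % (2.0 * np.pi) - np.pi
                phi_expected += gain * err  # cerebellar model: slow learning
                phi = phi_expected          # spinal model: immediate reset
        return phi_expected
    ```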

  4. Adaptation mechanism of interlimb coordination in human split-belt treadmill walking through learning of foot contact timing: a robotics study.

    PubMed

    Fujiki, Soichiro; Aoi, Shinya; Funato, Tetsuro; Tomita, Nozomi; Senda, Kei; Tsuchiya, Kazuo

    2015-09-06

    Human walking behaviour adaptation strategies have previously been examined using split-belt treadmills, which have two parallel independently controlled belts. In such human split-belt treadmill walking, two types of adaptations have been identified: early and late. Early-type adaptations appear as rapid changes in interlimb and intralimb coordination activities when the belt speeds of the treadmill change between tied (same speed for both belts) and split-belt (different speeds for each belt) configurations. By contrast, late-type adaptations occur after the early-type adaptations as a gradual change and only involve interlimb coordination. Furthermore, interlimb coordination shows after-effects that are related to these adaptations. It has been suggested that these adaptations are governed primarily by the spinal cord and cerebellum, but the underlying mechanism remains unclear. Because various physiological findings suggest that foot contact timing is crucial to adaptive locomotion, this paper reports on the development of a two-layered control model for walking, composed of spinal and cerebellar models, with foot contact timing as the focus of control. The spinal model generates rhythmic motor commands using an oscillator network based on a central pattern generator and modulates the commands formulated in immediate response to foot contact, while the cerebellar model modifies motor commands through learning based on error information related to differences between the predicted and actual foot contact timings of each leg. We investigated adaptive behaviour and its mechanism by split-belt treadmill walking experiments using both computer simulations and an experimental bipedal robot. Our results showed that the robot exhibited rapid changes in interlimb and intralimb coordination that were similar to the early-type adaptations observed in humans. In addition, despite the lack of direct interlimb coordination control, gradual changes and after-effects in the interlimb coordination appeared in a manner that was similar to the late-type adaptations and after-effects observed in humans. The adaptation results of the robot were then evaluated in comparison with human split-belt treadmill walking, and the adaptation mechanism was clarified from a dynamic viewpoint. © 2015 The Authors.

  5. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031125 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  6. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031124 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  7. Robonaut 2 in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-02

    ISS034-E-013990 (2 Jan. 2013) --- In the International Space Station’s Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  8. Performance and Usability of Various Robotic Arm Control Modes from Human Force Signals

    PubMed Central

    Mick, Sébastien; Cattaert, Daniel; Paclet, Florent; Oudeyer, Pierre-Yves; de Rugy, Aymar

    2017-01-01

    Elaborating an efficient and usable mapping between input commands and output movements is still a key challenge for the design of robotic arm prostheses. In order to address this issue, we present and compare three different control modes, assessing them in terms of performance as well as general usability. Using an isometric force transducer as the command device, these modes convert the force input signal into either a position or a velocity vector, whose magnitude is linearly or quadratically related to the force input magnitude. With the robotic arm from the open-source 3D-printed Poppy Humanoid platform simulating a mobile prosthesis, an experiment was carried out with eighteen able-bodied subjects performing a 3-D target-reaching task using each of the three modes. The subjects were given questionnaires to evaluate the quality of their experience with each mode, providing an assessment of their global usability in the context of the task. According to performance metrics and questionnaire results, the velocity control modes were found to perform better than the position control mode in terms of accuracy and quality of control as well as user satisfaction and comfort. Subjects also seemed to favor quadratic velocity control over linear (proportional) velocity control, even though these two modes were not clearly distinguishable from one another in terms of performance and usability. These results highlight the need to take user experience into account as one of the key criteria for the design of control modes intended to operate limb prostheses. PMID:29118699
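
    The three mappings compared in the study reduce to a few lines each: force to position, force to velocity (linear), and force to velocity (quadratic). A sketch with illustrative gains and saturation values; the study's actual scaling constants are not reproduced here.

    ```python
    import numpy as np

    def force_to_command(f, mode, x_prev, dt, k=0.02, f_max=10.0):
        """Map a 3-D force input to an arm position command (sketch)."""
        f = np.clip(np.asarray(f, dtype=float), -f_max, f_max)
        if mode == "position":            # force sets the target position directly
            return k * f
        if mode == "velocity_linear":     # velocity proportional to force
            return x_prev + k * f * dt
        if mode == "velocity_quadratic":  # velocity grows with the square of force
            v = k * np.sign(f) * f**2     # sign preserved, magnitude squared
            return x_prev + v * dt
        raise ValueError(mode)
    ```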

  9. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogunmolu, O; Gans, N; Jiang, S

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion in the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
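
    The position-based visual servoing loop reduces to: measure the head displacement from the depth image, form the error against the planned position, and drive the valve. A minimal PI sketch; the gains and the signed valve command are assumptions for illustration, not the study's controller.

    ```python
    def bladder_control_step(z_measured, z_target, state, dt, kp=0.8, ki=0.2):
        """One cycle of a position-based visual-servo loop (sketch).

        z_measured: head displacement estimated from the depth camera.
        Returns a signed valve command (positive inflates, negative deflates).
        """
        e = z_target - z_measured
        state["int"] += e * dt
        u = kp * e + ki * state["int"]    # PI law on the image-based error
        return max(-1.0, min(1.0, u))     # saturate to the valve's range
    ```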

  10. Self-Organizing Map With Time-Varying Structure to Plan and Control Artificial Locomotion.

    PubMed

    Araujo, Aluizio F R; Santana, Orivaldo V

    2015-08-01

    This paper presents an algorithm, the self-organizing map state trajectory generator (SOM-STG), to plan and control legged robot locomotion. The SOM-STG is based on an SOM with a time-varying structure, characterized by autonomously constructing closed state trajectories from an arbitrary number of robot postures. Each trajectory represents a cyclical movement of the limbs of an animal. The SOM-STG was designed to possess important features of a central pattern generator, such as rhythmic pattern generation, synchronization between limbs, and swapping between gaits following a single command. The acquisition of data for the SOM-STG is based on learning by demonstration, in which the data are obtained from different demonstrator agents. The SOM-STG can construct one or more gaits for a simulated robot with six legs, can control the robot with any of the gaits learned, and can smoothly swap gaits. In addition, the SOM-STG can learn to construct a state trajectory from observing an animal in locomotion. In this paper, a dog is the demonstrator agent.

  11. Haptic/graphic rehabilitation: integrating a robot into a virtual environment library and applying it to stroke therapy.

    PubMed

    Sharp, Ian; Patton, James; Listenberger, Molly; Case, Emily

    2011-08-08

    Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another without recoding the program. However, recent efforts in the open-source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can allow researchers across the globe to perform similar experiments using shared code, so modular "switching out" of one robot for another does not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot in the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
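
    The wrapper-class idea can be sketched as an abstract interface that the virtual environment codes against, with one thin subclass per robot. The names below are illustrative, not the actual H3DAPI classes.

    ```python
    from abc import ABC, abstractmethod

    class HapticRobot(ABC):
        """Device-independent interface: the therapy environment codes
        against this class only, so robots can be swapped without recoding."""

        @abstractmethod
        def read_position(self):              # -> (x, y, z) of the end effector
            ...

        @abstractmethod
        def command_force(self, fx, fy, fz):  # render a haptic force
            ...

    class ExampleRobot(HapticRobot):
        """One thin wrapper per device; bodies would call the vendor driver."""
        def read_position(self):
            return (0.0, 0.0, 0.0)
        def command_force(self, fx, fy, fz):
            pass

    def render_wall(robot: HapticRobot, k=500.0):
        """Device-independent haptic effect: a stiff virtual wall at x = 0."""
        x, y, z = robot.read_position()
        robot.command_force(-k * x if x > 0 else 0.0, 0.0, 0.0)
    ```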

  12. Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.

    PubMed

    Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G

    2011-10-01

    In order to incorporate the uncertainty and impreciseness present in real-world event-driven asynchronous systems, fuzzy discrete event systems (DESs) (FDESs) have been proposed as an extension to crisp DESs. In this paper, first, we propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and the navigations have been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent the sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real time. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.

  13. Multi-Window Controllers for Autonomous Space Systems

    NASA Technical Reports Server (NTRS)

    Lurie, B. J.; Hadaegh, F. Y.

    1997-01-01

    Multi-window controllers select between elementary linear controllers using nonlinear windows based on the amplitude and frequency content of the feedback error. The controllers are relatively simple to implement and perform much better than linear controllers. The commanders for such controllers only specify the destination point and are freed from generating command time-profiles. Robotic missions rely heavily on the tasks of acquisition and tracking. For autonomous and optimal control of the spacecraft, the control bandwidth must be larger while the feedback can (and, therefore, must) be reduced. Combining linear compensators via a multi-window nonlinear summer guarantees the minimum-phase character of the combined transfer function. It is shown that the solution may require using several parallel branches and windows. Several examples of multi-window nonlinear controller applications are presented.
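
    A two-window special case makes the idea concrete: the amplitude of the feedback error selects (here, smoothly blends) between two elementary linear controllers. A sketch with an assumed window shape; the paper's windows also weigh the error's frequency content.

    ```python
    def multi_window_command(e, de, u_soft, u_stiff, e_big=1.0):
        """Blend two linear controllers according to error amplitude (sketch).

        u_soft and u_stiff are callables (e, de) -> command; e_big is the
        illustrative error level above which the stiff controller dominates.
        """
        w = min(abs(e) / e_big, 1.0)  # window weight in [0, 1]
        # Large errors favor the high-bandwidth controller, small errors the
        # low-gain one; the nonlinear summer keeps the transition smooth.
        return (1.0 - w) * u_soft(e, de) + w * u_stiff(e, de)
    ```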

  14. Extending human proprioception to cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David

    2016-04-01

    Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
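
    The core of the sensory-substitution mapping is a one-line affine map from a joint angle to an actuator's vibration frequency, one actuator per tracked joint. A sketch with assumed frequency limits, not the device's calibrated values.

    ```python
    def joint_to_vibration(theta, theta_min, theta_max, f_min=50.0, f_max=250.0):
        """Map a joint angle in [theta_min, theta_max] to a vibration
        frequency in [f_min, f_max] Hz for one forearm actuator (sketch)."""
        alpha = (theta - theta_min) / (theta_max - theta_min)
        alpha = max(0.0, min(1.0, alpha))  # clamp out-of-range readings
        return f_min + alpha * (f_max - f_min)
    ```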

  15. Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    NASA Technical Reports Server (NTRS)

    Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel

    2015-01-01

    The use of the cognitive capabilities of humans to help guide the autonomy of robotics platforms, in what is typically called "supervised autonomy", is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.

  16. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    NASA Astrophysics Data System (ADS)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, large soft tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed over the last decade to assist surgical diagnosis and therapy in MRI-guided procedures, a unified method for developing high-fidelity robotic systems, in terms of accuracy, dynamic performance, size, robustness, and modularity, that work inside a closed-bore MRI scanner remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors, 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex-vivo phantom validates the system workflow and MRI-compatibility and shows that the robotic system has better than 0.01 mm positioning accuracy.

  17. Discrete event command and control for networked teams with multiple missions

    NASA Astrophysics Data System (ADS)

    Lewis, Frank L.; Hudas, Greg R.; Pang, Chee Khiang; Middleton, Matthew B.; McMurrough, Christopher

    2009-05-01

    During mission execution in military applications, the TRADOC Pamphlet 525-66 Battle Command and Battle Space Awareness capabilities prescribe expectations that networked teams will perform in a reliable manner under changing mission requirements, varying resource availability and reliability, and resource faults. In this paper, a Command and Control (C2) structure is presented that allows for computer-aided execution of the networked team decision-making process, control of force resources, shared resource dispatching, and adaptability to change based on battlefield conditions. A mathematically justified networked computing environment is provided called the Discrete Event Control (DEC) Framework. DEC has the ability to provide the logical connectivity among all team participants including mission planners, field commanders, war-fighters, and robotic platforms. The proposed data management tools are developed and demonstrated on a simulation study and an implementation on a distributed wireless sensor network. The results show that the tasks of multiple missions are correctly sequenced in real-time, and that shared resources are suitably assigned to competing tasks under dynamically changing conditions without conflicts and bottlenecks.

  18. Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot.

    PubMed

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2016-09-22

    Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between the proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions: motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI's potential for inducing stronger agency-driven illusions by building a direct communication between the brain and the controlled body, and therefore removing awareness from the subject's own body.

  19. Hands-free device control using sound picked up in the ear canal

    NASA Astrophysics Data System (ADS)

    Chhatpar, Siddharth R.; Ngia, Lester; Vlach, Chris; Lin, Dong; Birkhimer, Craig; Juneja, Amit; Pruthi, Tarun; Hoffman, Orin; Lewis, Tristan

    2008-04-01

    Hands-free control of unmanned ground vehicles is essential for soldiers, bomb disposal squads, and first responders. Having their hands free for other equipment and tasks allows them to be safer and more mobile. Currently, the most successful hands-free control devices are speech-command based. However, these devices use external microphones, and in field environments, e.g., war zones and fire sites, their performance suffers because of loud ambient noise: typically above 90dBA. This paper describes the development of technology using the ear as an output source that can provide excellent command recognition accuracy even in noisy environments. Instead of picking up speech radiating from the mouth, this technology detects speech transmitted internally through the ear canal. Discreet tongue movements also create air pressure changes within the ear canal, and can be used for stealth control. A patented earpiece was developed with a microphone pointed into the ear canal that captures these signals generated by tongue movements and speech. The signals are transmitted from the earpiece to an Ultra-Mobile Personal Computer (UMPC) through a wired connection. The UMPC processes the signals and utilizes them for device control. The processing can include command recognition, ambient noise cancellation, acoustic echo cancellation, and speech equalization. Successful control of an iRobot PackBot has been demonstrated with both speech (13 discrete commands) and tongue (5 discrete commands) signals. In preliminary tests, command recognition accuracy was 95% with speech control and 85% with tongue control.

  20. A decade of telerobotics in rehabilitation: Demonstrated utility blocked by the high cost of manipulation and the complexity of the user interface

    NASA Technical Reports Server (NTRS)

    Leifer, Larry; Michalowski, Stefan; Vanderloos, Machiel

    1991-01-01

    The Stanford/VA Interactive Robotics Laboratory set out in 1978 to test the hypothesis that industrial robotics technology could be applied to serve the manipulation needs of severely impaired individuals. Five generations of hardware, three generations of system software, and over 125 experimental subjects later, we believe that genuine utility is achievable. The experience includes development of over 65 task applications using voiced command, joystick control, natural language command and 3D object designation technology. A brief foray into virtual environments, using flight simulator technology, was instructive. If reality and virtuality come for comparable prices, you cannot beat reality. A detailed review of assistive robot anatomy and the performance specifications needed to achieve cost/beneficial utility will be used to support discussion of the future of rehabilitation telerobotics. Poised on the threshold of commercial viability, but constrained by the high cost of technically adequate manipulators, this worthy application domain flounders temporarily. In the long run, it will be the user interface that governs utility.

  1. Hardware platform for multiple mobile robots

    NASA Astrophysics Data System (ADS)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

    This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles can be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link carries control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link carries commands and data between the vehicles. For this purpose we have developed a hardware platform consisting of a powerful microprocessor, various sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows straightforward integration with the TCP/IP internet protocols. For inspection of the environment, the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors, and a small inertial measurement unit. Stereo cameras make it feasible to detect obstacles, measure distances, and create a map of the room.
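
    Because the platform adopts IEEE 802.11 with TCP/IP, the operator-to-vehicle link can be sketched as a line-oriented socket exchange. The JSON message fields below are assumptions for illustration, not the platform's actual protocol.

    ```python
    import json
    import socket

    def send_command(host, port, cmd):
        """Send one command as a JSON line over TCP and read back one
        telemetry line (sketch of the operator-to-vehicle link)."""
        with socket.create_connection((host, port), timeout=2.0) as s:
            s.sendall((json.dumps(cmd) + "\n").encode())
            reply = s.makefile().readline()
        return json.loads(reply)

    # Example (hypothetical fields): drive at 0.2 m/s and read sonar telemetry.
    # telemetry = send_command("10.0.0.5", 5555, {"v": 0.2, "w": 0.0})
    ```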

  2. Results from Testing Crew-Controlled Surface Telerobotics on the International Space Station

    NASA Technical Reports Server (NTRS)

    Bualat, Maria; Schreckenghost, Debra; Pacis, Estrellina; Fong, Terrence; Kalar, Donald; Beutter, Brent

    2014-01-01

    During Summer 2013, the Intelligent Robotics Group at NASA Ames Research Center conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover. The tests simulated portions of a proposed lunar mission, in which an astronaut in lunar orbit would remotely operate a planetary rover to deploy a radio telescope on the lunar far side. Over the course of Expedition 36, three ISS astronauts remotely operated the NASA "K10" planetary rover in an analogue lunar terrain located at the NASA Ames Research Center in California. The astronauts used a "Space Station Computer" (crew laptop), a combination of supervisory control (command sequencing) and manual control (discrete commanding), and Ku-band data communications to command and monitor K10 for 11 hours. In this paper, we present and analyze test results, summarize user feedback, and describe directions for future research.

  3. Applications of artificial intelligence to space station and automated software techniques: High level robot command language

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1989-01-01

    The objective is to develop a system that will allow a person not necessarily skilled in the art of programming robots to quickly and naturally create the necessary data and commands to enable a robot to perform a desired task. The system will use a menu-driven graphical user interface. This interface will allow the user to input data to select objects to be moved. There will be an embedded expert system to process the knowledge about the objects and the robot to determine how they are to be moved. There will be automatic path planning to avoid obstacles in the workspace and to create a near-optimum path. The system will contain the software to generate the required robot instructions.

  4. Indirect decentralized repetitive control

    NASA Technical Reports Server (NTRS)

    Lee, Soo Cheol; Longman, Richard W.

    1993-01-01

    Learning control refers to controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented a theory of indirect decentralized learning control based on use of indirect adaptive control concepts employing simultaneous identification and control. This paper extends these results to apply to the indirect repetitive control problem in which a periodic (i.e., repetitive) command is given to a control system. Decentralized indirect repetitive control algorithms are presented that have guaranteed convergence to zero tracking error under very general conditions. The original motivation of the repetitive control and learning control fields was learning in robots doing repetitive tasks such as on an assembly line. This paper starts with decentralized discrete time systems, and progresses to the robot application, modeling the robot as a time varying linear system in the neighborhood of the desired trajectory. Decentralized repetitive control is natural for this application because the feedback control for link rotations is normally implemented in a decentralized manner, treating each link as if it is independent of the other links.
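
    For a discrete-time system under a periodic command, the simplest repetitive-control update adds a scaled, one-step-advanced copy of the previous period's error to the previous period's command. A minimal sketch with an assumed learning gain; the paper's decentralized algorithms add the simultaneous identification step and the conditions guaranteeing convergence.

    ```python
    import numpy as np

    def repetitive_update(u_prev, e_prev, phi=0.5):
        """One learning pass of a simple repetitive-control law (sketch).

        u_prev, e_prev: command and tracking error sampled over one period
        of the repeating task. np.roll advances the error by one sample,
        wrapping around, which is consistent with a periodic signal.
        """
        e_adv = np.roll(e_prev, -1)
        return u_prev + phi * e_adv  # next period's command
    ```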

  5. Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  6. A Computational Model of Spatial Development

    NASA Astrophysics Data System (ADS)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

    Psychological experiments on children's development of spatial knowledge suggest that experience at self-locomotion and visual tracking are important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when it is outside the field of view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environmental state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model from sensory input to desired motor commands, and a forward model from motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-centered; when given the ability of self-locomotion, the robot responds allocentrically.
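
    Mental tracking as prediction can be illustrated with a purely geometric forward model that dead-reckons the hidden object's egocentric position from the robot's own motion commands. A sketch only; the paper's system learns this mapping with the connectionist forward-modeling architecture rather than hard-coding the geometry.

    ```python
    import numpy as np

    def forward_model_track(x_obj, robot_moves):
        """Predict a hidden object's robot-centered position after a
        sequence of self-motion commands (dx, dy, dtheta), so the estimate
        survives occlusion (geometric stand-in for a learned forward model)."""
        x = np.asarray(x_obj, dtype=float)
        for dx, dy, dth in robot_moves:
            x = x - np.array([dx, dy])           # translate into the new frame
            c, s = np.cos(-dth), np.sin(-dth)    # rotate into the new heading
            x = np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])
        return x  # predicted egocentric object position after the moves
    ```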

  7. Easy robot programming for beginners and kids using augmented reality environments

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Nishiguchi, Masahiro

    2010-11-01

    The authors have developed a mobile robot which can be programmed with command and instruction cards. All you have to do is arrange the cards on a table and shoot the programming stage with a camera. Our card programming system recognizes the instruction cards and translates the icon commands into a motor driver program. This card programming environment also provides low-level structured programming.

  8. A brain-controlled lower-limb exoskeleton for human gait training.

    PubMed

    Liu, Dong; Chen, Weihai; Pei, Zhongcai; Wang, Jianhua

    2017-10-01

    Brain-computer interfaces have been a novel approach to translating human intentions into movement commands in robotic systems. This paper describes an electroencephalogram-based brain-controlled lower-limb exoskeleton for gait training, as a proof of concept towards rehabilitation with the human in the loop. Instead of using conventional single electroencephalography correlates, e.g., evoked P300 or spontaneous motor imagery, we propose a novel framework integrating two asynchronous signal modalities, i.e., sensorimotor rhythms (SMRs) and movement-related cortical potentials (MRCPs). We executed experiments in a biologically inspired and customized lower-limb exoskeleton in which subjects (N = 6) actively controlled the robot using their brain signals. Each subject performed three consecutive sessions composed of offline training, online visual feedback testing, and online robot-control recordings. Post hoc evaluations were conducted, including mental workload assessment, feature analysis, and statistical tests. An average robot-control accuracy of 80.16% ± 5.44% was obtained with the SMR-based method, while estimation using the MRCP-based method yielded an average performance of 68.62% ± 8.55%. The experimental results show the feasibility of the proposed framework, with all subjects successfully controlling the exoskeleton. The current paradigm could be further extended to paraplegic patients in clinical trials.

  9. A brain-controlled lower-limb exoskeleton for human gait training

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Chen, Weihai; Pei, Zhongcai; Wang, Jianhua

    2017-10-01

    Brain-computer interfaces have been a novel approach to translating human intentions into movement commands in robotic systems. This paper describes an electroencephalogram-based brain-controlled lower-limb exoskeleton for gait training, as a proof of concept towards rehabilitation with the human in the loop. Instead of using conventional single electroencephalography correlates, e.g., evoked P300 or spontaneous motor imagery, we propose a novel framework integrating two asynchronous signal modalities, i.e., sensorimotor rhythms (SMRs) and movement-related cortical potentials (MRCPs). We executed experiments in a biologically inspired and customized lower-limb exoskeleton in which subjects (N = 6) actively controlled the robot using their brain signals. Each subject performed three consecutive sessions composed of offline training, online visual feedback testing, and online robot-control recordings. Post hoc evaluations were conducted, including mental workload assessment, feature analysis, and statistical tests. An average robot-control accuracy of 80.16% ± 5.44% was obtained with the SMR-based method, while estimation using the MRCP-based method yielded an average performance of 68.62% ± 8.55%. The experimental results show the feasibility of the proposed framework, with all subjects successfully controlling the exoskeleton. The current paradigm could be further extended to paraplegic patients in clinical trials.

  10. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation, angular speed, and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested on a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The inputs (commands) and the outputs (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality, comparing the trajectories estimated by the algorithm with the data from the motion capture system.
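
    Once the flow field has been back-projected to the ground plane, the instantaneous center of rotation (ICR) and angular rate follow from a linear least-squares fit: planar rotation about the ICR c with rate w gives v = w J (p - c), with J the 90-degree rotation. A simplified sketch of that fitting step under the stated pure-rotation model (straight-line motion corresponds to w near zero, i.e., an ICR at infinity), not the paper's full pipeline.

    ```python
    import numpy as np

    def estimate_icr(points, flows):
        """Least-squares ICR fit from ground-plane flow samples (sketch).

        points: (N, 2) ground coordinates; flows: (N, 2) back-projected flow.
        Stacking v_x = -w*p_y + w*c_y and v_y = w*p_x - w*c_x gives a linear
        system in the unknowns (w, w*c_x, w*c_y).
        """
        P = np.asarray(points, float)
        V = np.asarray(flows, float)
        n = len(P)
        A = np.zeros((2 * n, 3))
        b = np.zeros(2 * n)
        A[0::2, 0] = -P[:, 1]; A[0::2, 2] = 1.0; b[0::2] = V[:, 0]
        A[1::2, 0] = P[:, 0]; A[1::2, 1] = -1.0; b[1::2] = V[:, 1]
        w, wcx, wcy = np.linalg.lstsq(A, b, rcond=None)[0]
        return (wcx / w, wcy / w), w  # ICR position and angular speed
    ```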

  11. From path models to commands during additive printing of large-scale architectural designs

    NASA Astrophysics Data System (ADS)

    Chepchurov, M. S.; Zhukov, E. M.; Yakovlev, E. A.; Matveykin, V. G.

    2018-05-01

    The article considers the problem of automating the formation of large, complex parts, products, and structures, especially unique or small-batch objects produced by additive technology [1]. Research into the optimal design of a robotic complex, its modes of operation, and the structure of its control system informed the technical requirements for the manufacturing process and for the design and installation of the robotic complex. Studies on virtual models of the robotic complexes identified the main directions for design improvement and the main goal of testing the manufactured prototype: checking the positioning accuracy of the working part.

  12. IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.

    PubMed

    Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter

    2017-01-01

    IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table and the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a robotic platform, programmed by a smartphone, that can navigate around a typical home or care environment, avoiding obstacles, and position itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance, and object-based sonar navigation.

  13. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

    Nowadays, there are many developments in building intelligent humanoid robots, mainly to handle voice and image. In this research, we propose a blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate multiple speech sources and to filter out irrelevant noise. After the speech separation step, the results are integrated with our previous speech and face recognition system, which is based on a Bioloid GP robot with a Raspberry Pi 2 as the controller. The experimental results show that the accuracy of our blind speech separation system is about 88% for command and query recognition.
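
    The separation stage itself is available off the shelf; with scikit-learn's FastICA, a multi-microphone mixture is unmixed in a few lines. A sketch of just this step, assuming time-aligned channel recordings; the robot's filtering and downstream recognition stages are not shown.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def separate_speech(mixtures, n_sources):
        """Blindly separate mixed microphone channels with FastICA.

        mixtures: array of shape (n_samples, n_channels) from the robot's
        microphones. Returns the estimated independent source signals,
        shape (n_samples, n_sources).
        """
        ica = FastICA(n_components=n_sources, random_state=0)
        return ica.fit_transform(np.asarray(mixtures, float))
    ```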

  14. Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations

    NASA Technical Reports Server (NTRS)

    Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana

    2011-01-01

    We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.

  15. Determining robot actions for tasks requiring sensor interaction

    NASA Technical Reports Server (NTRS)

    Budenske, John; Gini, Maria

    1989-01-01

    The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotics research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection performs signal and control processing as well as serving as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems in planning the coordination of activities to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.

  16. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot.

    PubMed

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating a hypothetical human-derived control system in humanoid robots. This system is then compared with the human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to 3 interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring a full-body control. Experiments were performed in the 14 DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes, including frequency response functions (FRFs) as they are used in human postural control analysis, and (iii) witnessing postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling more complex sensorimotor functions in the robot in the future, such as walking.

  17. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot

    PubMed Central

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating a hypothetical human-derived control system in humanoid robots. This system is then compared with the human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to 3 interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring a full-body control. Experiments were performed in the 14 DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes, including frequency response functions (FRFs) as they are used in human postural control analysis, and (iii) witnessing postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling more complex sensorimotor functions in the robot in the future, such as walking. PMID:28951719

  18. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing.

    PubMed

    Leonard, Simon; Wu, Kyle L; Kim, Yonjae; Krieger, Axel; Kim, Peter C W

    2014-04-01

    This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is built around a commercially available laparoscopic suturing tool attached to a custom-made motor stage, and the STAR supervisory-control architecture enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent, and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using a manual Endo360°, and nine times faster than surgeons using manual laparoscopic tools.
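
    The automatic mode's equal-spacing computation can be illustrated with a short Python sketch: given an incision contour as a polyline of image points, stitches are placed at equal arc-length intervals. The function and point format are assumptions for illustration, not the published system's code.

        import numpy as np

        def equally_spaced_stitches(contour, n_stitches):
            """Place n_stitches at equal arc-length intervals along an incision
            contour given as an (N, 2) array of image points."""
            contour = np.asarray(contour, dtype=float)
            seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)   # segment lengths
            s = np.concatenate([[0.0], np.cumsum(seg)])              # cumulative arc length
            targets = np.linspace(0.0, s[-1], n_stitches)
            # Interpolate x and y separately against arc length.
            x = np.interp(targets, s, contour[:, 0])
            y = np.interp(targets, s, contour[:, 1])
            return np.column_stack([x, y])

        # Example: 5 stitches on a gently curved incision.
        incision = [(0, 0), (10, 2), (20, 3), (30, 2), (40, 0)]
        print(equally_spaced_stitches(incision, 5))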

  19. Development of a force-reflecting robotic platform for cardiac catheter navigation.

    PubMed

    Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung

    2010-11-01

    Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration having a 3-degree-of-freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback, using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions (forward and backward movement, rolling, and catheter tip bending) are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements the master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implemented the motion control function and will undergo safety and performance evaluation by means of animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway.
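
    A minimal Python sketch of the master/slave synchronization idea: the slave axis tracks the master command with a PD loop while a tracking-error-proportional force is reflected to the master, standing in for a sensed or current-estimated contact force. Gains, dynamics, and the feedback rule are simplified assumptions.

        import numpy as np

        dt, kp, kd, k_force = 0.001, 50.0, 2.0, 0.5
        slave_pos = slave_vel = 0.0
        log = []
        for step in range(2000):
            t = step * dt
            master_pos = 0.01 * np.sin(2 * np.pi * 1.0 * t)  # 1 Hz commanded motion
            # Slave PD controller tracks the master command.
            accel = kp * (master_pos - slave_pos) - kd * slave_vel
            slave_vel += accel * dt
            slave_pos += slave_vel * dt
            # Haptic feedback to the master, proportional to tracking error.
            feedback_force = k_force * (slave_pos - master_pos)
            log.append((t, master_pos, slave_pos, feedback_force))
        print("final tracking error: %.2e m" % (log[-1][1] - log[-1][2]))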

  20. Command Recognition of Robot with Low Dimension Whole-Body Haptic Sensor

    NASA Astrophysics Data System (ADS)

    Ito, Tatsuya; Tsuji, Toshiaki

    The authors have developed “haptic armor”, a whole-body haptic sensor that can estimate contact position. Although it was developed for the safety assurance of robots in human environments, it can also be used as an interface. This paper proposes a command recognition method based on finger-trace information. The paper also discusses some technical issues for improving the recognition accuracy of this system.

  1. SWARMs Ontology: A Common Information Model for the Cooperation of Underwater Robots.

    PubMed

    Li, Xin; Bilbao, Sonia; Martín-Wanton, Tamara; Bastos, Joaquim; Rodriguez, Jonathan

    2017-03-11

    In order to facilitate cooperation between underwater robots, robots must exchange information with unambiguous meaning. However, the heterogeneity of information pertaining to different robots is a major obstruction. Therefore, this paper presents a networked ontology, named the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) ontology, to address information heterogeneity and enable robots to have the same understanding of exchanged information. The SWARMs ontology uses a core ontology to interrelate a set of domain-specific ontologies, including the mission and planning, the robotic vehicle, the communication and networking, and the environment recognition and sensing ontologies. In addition, the SWARMs ontology utilizes ontology constructs defined in the PR-OWL ontology to annotate context uncertainty based on the Multi-Entity Bayesian Network (MEBN) theory. Thus, the SWARMs ontology can provide both a formal specification for the information that is necessarily exchanged between robots and a command and control entity, and support for uncertainty reasoning. A scenario on chemical pollution monitoring is described and used to showcase how the SWARMs ontology can be instantiated, be extended, represent context uncertainty, and support uncertainty reasoning.

  2. ROMPS critical design review. Volume 2: Robot module design documentation

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1992-01-01

    The robot module design documentation for the Remote Operated Materials Processing in Space (ROMPS) experiment is compiled. This volume presents the following information: robot module modifications; Easylab commands definitions and flowcharts; Easylab program definitions and flowcharts; robot module fault conditions and structure charts; and C-DOC flow structure and cross references.

  3. Development of a teaching system for an industrial robot using stereo vision

    NASA Astrophysics Data System (ADS)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    Teaching and playback is the predominant programming technique for industrial robots; however, it requires considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibration is needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and the test data have confirmed the usefulness of our design.
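
    The qualitative rule-based mapping can be sketched in Python with triangular membership functions over the image-space error and weighted-average defuzzification. The rule base and numbers below are invented stand-ins for the paper's actual fuzzy rules.

        def tri(x, a, b, c):
            """Triangular membership function with feet a, c and peak b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_joint_step(error_deg):
            """Map the image-space error toward the taught point to a joint-angle
            increment using qualitative fuzzy rules instead of kinematics."""
            rules = [  # (membership over error, angle increment in degrees)
                (tri(error_deg, -60, -30, 0), -5.0),   # negative error: step back
                (tri(error_deg, -10, 0, 10), 0.0),     # near zero: hold
                (tri(error_deg, 0, 30, 60), 5.0),      # positive error: step forward
            ]
            num = sum(w * u for w, u in rules)
            den = sum(w for w, u in rules)
            return num / den if den > 0 else 0.0

        # Repeat small fuzzy steps until the instructed teaching point is reached.
        print(fuzzy_joint_step(20.0))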

  4. Integration of robotic resources into FORCEnet

    NASA Astrophysics Data System (ADS)

    Nguyen, Chinh; Carroll, Daniel; Nguyen, Hoa

    2006-05-01

    The Networked Intelligence, Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances. The foundations are built upon FORCEnet, the U.S. Navy's process to define C4ISR for net-centric operations, and the Navy Unmanned Systems Common Control Roadmap, which develops technologies and standards for interoperability, data sharing, publish-and-subscribe methodology, and software reuse. The paper defines the goals and boundaries for NISR with a focus on the system architecture, including the design tradeoffs necessary for unmanned systems in a net-centric model. Special attention is given to two specific scenarios demonstrating the integration of unmanned ground and water surface vehicles into the open-architecture, web-based command-and-control information-management system of Composeable FORCEnet. Planned spiral development for NISR will improve collaborative control, expand robotic sensor capabilities, address multiple domains including underwater and aerial platforms, and extend the distributive communications infrastructure for battlespace optimization of unmanned systems in net-centric operations.

  5. Center of excellence for small robots

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Carroll, Daniel M.; Laird, Robin T.; Everett, H. R.

    2005-05-01

    The mission of the Unmanned Systems Branch of SPAWAR Systems Center, San Diego (SSC San Diego) is to provide network-integrated robotic solutions for Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications, serving and partnering with industry, academia, and other government agencies. We believe the most important criterion for a successful acquisition program is producing a value-added end product that the warfighter needs, uses and appreciates. Through our accomplishments in the laboratory and field, SSC San Diego has been designated the Center of Excellence for Small Robots by the Office of the Secretary of Defense Joint Robotics Program. This paper covers the background, experience, and collaboration efforts by SSC San Diego to serve as the "Impedance-Matching Transformer" between the robotic user and technical communities. Special attention is given to our Unmanned Systems Technology Imperatives for Research, Development, Testing and Evaluation (RDT&E) of Small Robots. Active projects, past efforts, and architectures are provided as success stories for the Unmanned Systems Development Approach.

  6. Interactive Exploration Robots: Human-Robotic Collaboration and Interactions

    NASA Technical Reports Server (NTRS)

    Fong, Terry

    2017-01-01

    For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.

  7. Robot Sequencing and Visualization Program (RSVP)

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.

    2013-01-01

    The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent real-time connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.

  8. Remote mission specialist - A study in real-time, adaptive planning

    NASA Technical Reports Server (NTRS)

    Rokey, Mark J.

    1990-01-01

    A high-level planning architecture for robotic operations is presented. The remote mission specialist integrates high-level directives with low-level primitives executable by a run-time controller for command of autonomous servicing activities. The planner has been designed to address such issues as adaptive plan generation, real-time performance, and operator intervention.

  9. The instant sequencing task: Toward constraint-checking a complex spacecraft command sequence interactively

    NASA Technical Reports Server (NTRS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.

    1993-01-01

    Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
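
    Because each constraint can be checked against each command independently, constraint checking parallelizes naturally across a sequence. The Python sketch below illustrates the idea with a toy spacecraft model; the command names, fields, and limits are invented, and the paper's actual parallel algorithm and model specification are not reproduced.

        from multiprocessing import Pool

        # Toy model: each constraint inspects one command independently, so
        # checking parallelizes across the sequence.
        def check_command(cmd):
            violations = []
            if cmd["power_w"] > 900:
                violations.append("power budget exceeded")
            if cmd["name"] == "SLEW" and abs(cmd.get("rate_dps", 0)) > 0.5:
                violations.append("slew rate limit exceeded")
            return cmd["name"], violations

        if __name__ == "__main__":
            sequence = [
                {"name": "SLEW", "power_w": 300, "rate_dps": 0.2},
                {"name": "IMAGE", "power_w": 950},
                {"name": "DOWNLINK", "power_w": 400},
            ]
            with Pool() as pool:
                for name, violations in pool.map(check_command, sequence):
                    print(name, violations or "OK")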

  10. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  11. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.

    PubMed

    Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L

    2016-03-18

    Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802.
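
    One common way to blend BMI-derived intent with an autonomous grasp command is a distance-dependent linear mixture, sketched below in Python. The distances and the linear ramp are assumptions for illustration, not the paper's exact arbitration rule.

        import numpy as np

        def blend_commands(bmi_vel, auto_vel, dist_to_object,
                           full_assist_dist=0.05, no_assist_dist=0.30):
            """Blend a BMI-decoded hand velocity with an autonomous grasping
            velocity; assistance ramps up as the hand approaches the object."""
            alpha = np.clip((no_assist_dist - dist_to_object) /
                            (no_assist_dist - full_assist_dist), 0.0, 1.0)
            return (1.0 - alpha) * np.asarray(bmi_vel) + alpha * np.asarray(auto_vel)

        # Far from the object the user dominates; close in, autonomy dominates.
        print(blend_commands([0.10, 0.0, 0.0], [0.06, 0.02, -0.01], dist_to_object=0.25))
        print(blend_commands([0.10, 0.0, 0.0], [0.06, 0.02, -0.01], dist_to_object=0.06))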

  12. Predictive Interfaces for Long-Distance Tele-Operations

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas

    2005-01-01

    We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. Firstly, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Secondly, we develop compensation for the time-delay that exists when sending telemetry data from a remote operation point to robots located at low Earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to the large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, which is an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve subgoal-level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to incrementally transition from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time-delay is introduced into the system, the rate of tele-operation slows down to mimic a bump-and-wait type of activity. We would like to maintain the same interface to the operator despite time-delays. To this end, we are developing an interface which will allow us to predict the intentions of the operator while interacting with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator, and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
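
    Intent prediction with per-task HMMs reduces to comparing sequence likelihoods, which the forward algorithm computes. The Python sketch below uses toy two-state models and discretized command symbols; the matrices are invented and do not represent the trained Robonaut models.

        import numpy as np

        def forward_loglik(obs, pi, A, B):
            """Log-likelihood of an observation sequence under one HMM via the
            scaled forward algorithm; comparing likelihoods across per-task
            HMMs yields an intent prediction."""
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum())
            alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                loglik += np.log(alpha.sum())
                alpha /= alpha.sum()
            return loglik

        # Toy two-state model per candidate task; observations are discretized
        # operator command symbols.
        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3], [0.2, 0.8]])
        B_reach = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])
        B_grasp = np.array([[0.3, 0.3, 0.4], [0.2, 0.2, 0.6]])
        obs = [0, 1, 1, 1]
        scores = {"reach": forward_loglik(obs, pi, A, B_reach),
                  "grasp": forward_loglik(obs, pi, A, B_grasp)}
        print(max(scores, key=scores.get), scores)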

  13. The Aerosonde Robotic Aircraft: A New Paradigm for Environmental Observations.

    NASA Astrophysics Data System (ADS)

    Holland, G. J.; Webster, P. J.; Curry, J. A.; Tyrell, G.; Gauntlett, D.; Brett, G.; Becker, J.; Hoag, R.; Vaglienti, W.

    2001-05-01

    The Aerosonde is a small robotic aircraft designed for highly flexible and inexpensive operations. Missions are conducted in a completely robotic mode, with the aircraft under the command of a ground controller who monitors the mission. Here we provide an update on the Aerosonde development and operations and expand on the vision for the future, including instrument payloads, observational strategies, and platform capabilities. The aircraft was conceived in 1992 and developed to operational status in 1995-98, after a period of early prototyping. Continuing field operations and development since 1998 have led to the Aerosonde Mark 3, with ~2000 flight hours completed. A defined development path through to 2002 will enable the aircraft to become increasingly more robust with increased flexibility in the range and type of operations that can be achieved. An Aerosonde global reconnaissance facility is being developed that consists of launch and recovery sites dispersed around the globe. The use of satellite communications and internet technology enables an operation in which all aircraft around the globe are under the command of a single center. During operation, users will receive data at their home institution in near-real time via the virtual field environment, allowing the user to update the mission through interaction with the global command center. Sophisticated applications of the Aerosonde will be enabled by the development of a variety of interchangeable instrument payloads and the operation of Smart Aerosonde Clusters that allow a cluster of Aerosondes to interact intelligently in response to the data being collected.

  14. Individual muscle control using an exoskeleton robot for muscle function testing.

    PubMed

    Ueda, Jun; Ming, Ding; Krishnamoorthy, Vijaya; Shinohara, Minoru; Ogasawara, Tsukasa

    2010-08-01

    Healthy individuals modulate muscle activation patterns according to their intended movement and external environment. Persons with neurological disorders (e.g., stroke and spinal cord injury), however, have problems in movement control due primarily to their inability to modulate their muscle activation pattern in an appropriate manner. A functionality test at the level of individual muscles that investigates the activity of a muscle of interest during various motor tasks may enable muscle-level force grading. To date, there is no extant work that focuses on the application of exoskeleton robots to induce specific muscle activation in a systematic manner. This paper proposes a new method, named "individual muscle-force control," that uses a wearable robot (an exoskeleton robot, or a power-assisting device) to obtain a wider variety of muscle activity data than standard motor tasks, e.g., pushing a handle by hand. A computational algorithm systematically computes control commands to the wearable robot so that a desired muscle activation pattern for target muscle forces is induced. It also computes an adequate amount and direction of the force that a subject needs to exert against a handle with his/her hand. This individual muscle control method enables users (e.g., therapists) to efficiently conduct neuromuscular function tests on target muscles by arbitrarily inducing muscle activation patterns. This paper presents the basic concept, mathematical formulation, and solution of individual muscle-force control and its implementation in a muscle control system with an exoskeleton-type robot for the upper extremity. Simulation and experimental results in healthy individuals justify the use of an exoskeleton robot for future muscle function testing in terms of the variety of muscle activity data.
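
    As a rough illustration of computing the hand force that induces a target activation pattern, the Python sketch below assumes a linearized mapping a = W f from handle force to steady-state muscle activations and solves for f by least squares. The matrix W and the linearity assumption are invented for illustration; the paper's actual formulation is richer.

        import numpy as np

        # Assumed linearized mapping from hand force f (x, y components) to
        # steady-state activations a = W f of three muscles.
        W = np.array([[0.8, 0.1],
                      [0.2, 0.7],
                      [0.4, 0.4]])
        a_target = np.array([0.5, 0.3, 0.35])   # desired activation pattern
        f, residual, rank, _ = np.linalg.lstsq(W, a_target, rcond=None)
        print("required hand force [N]:", f)
        print("induced activations:", W @ f)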

  15. Stability control for high speed tracked unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Pape, Olivier; Morillon, Joel G.; Houbloup, Philippe; Leveque, Stephane; Fialaire, Cecile; Gauthier, Thierry; Ropars, Patrice

    2005-05-01

    The French Military Robotic Study Program (introduced at Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value". The paper details the "automatic speed adjustment" behavior (named SYR4), developed by Giat Industries, whose main goal is to secure the teleoperated mobility of high-speed tracked vehicles on rough ground; more precisely, the validated low-level behavior continuously adjusts the vehicle speed taking into account the teleoperator's wish AND the maximum speed that the vehicle can manage safely according to the commanded radius of curvature. The algorithm is based on a realistic physical model of the ground-track relation, taking into account many vehicle and ground parameters (such as ground adherence and the dynamic specificities of tracked vehicles). It also deals with the teleoperator-machine interface, providing a balanced strategy between both extreme behaviors: a) maximum speed reduction before initiating the commanded curve; b) executing the minimum possible radius without decreasing the commanded speed. The paper presents the results obtained from the military acceptance tests performed on the tracked SYRANO vehicle (French Operational Demonstrator).
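
    The core idea of clamping the commanded speed to what the commanded curvature allows can be sketched in a few lines of Python using the classic no-slip bound v_max = sqrt(mu * g * R). The friction coefficient and the simple bound are assumptions; SYR4's validated model of the ground-track relation is far richer.

        import math

        def safe_speed(commanded_speed, radius_m, mu=0.6, g=9.81):
            """Clamp the teleoperator's commanded speed to the maximum speed
            sustainable on the commanded radius of curvature."""
            v_max = math.sqrt(mu * g * radius_m)
            return min(commanded_speed, v_max)

        print(safe_speed(12.0, radius_m=8.0))    # sharp curve: speed reduced
        print(safe_speed(12.0, radius_m=200.0))  # gentle curve: command passed through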

  16. Robotic application of a dynamic resultant force vector using real-time load-control: simulation of an ideal follower load on Cadaveric L4-L5 segments.

    PubMed

    Bennett, Charles R; Kelly, Brian P

    2013-08-09

    Standard in-vitro spine testing methods have focused on application of isolated and/or constant load components, while the in-vivo spine is subject to multiple components that can be resolved into resultant dynamic load vectors. To advance towards more in-vivo-like simulations, the objective of the current study was to develop a methodology to apply robotically controlled, non-zero, real-time dynamic resultant forces during flexion-extension on human lumbar motion segment units (MSU), with initial application towards simulation of an ideal follower load (FL) force vector. A proportional-integral-derivative (PID) controller with custom algorithms coordinated the motion of a Cartesian serial manipulator comprised of six axes, each capable of position- or load-control. Six lumbar MSUs (L4-L5) were tested with continuously increasing sagittal plane bending to 8 Nm while force components were dynamically programmed to deliver a resultant 400 N FL that remained normal to the moving midline of the intervertebral disc. Mean absolute load-control tracking errors (TEs) between commanded and experimental loads were computed. Global spinal ranges of motion and sagittal plane inter-body translations were compared to previously published values for non-robotic applications. Mean TEs for zero-commanded force and moment axes were 0.7 ± 0.4 N and 0.03 ± 0.02 Nm, respectively. For non-zero force axes, mean TEs were 0.8 ± 0.8 N, 1.3 ± 1.6 N, and 1.3 ± 1.6 N for Fx, Fz, and the resolved ideal follower load vector FL(R), respectively. Mean extension and flexion ranges of motion were 2.6° ± 1.2° and 5.0° ± 1.7°, respectively. Relative vertebral body translations and rotations were very comparable to data collected with non-robotic systems in the literature. The robotically coordinated Cartesian load-controlled testing system demonstrated robust real-time load-control that permitted application of a real-time dynamic non-zero load vector during flexion-extension. For single-MSU investigations, the methodology has the potential to overcome conventional follower load limitations, most notably via application outside the sagittal plane. This methodology holds promise for future work aimed at reducing the gap between current in-vitro testing and in-vivo circumstances.
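
    Resolving a follower load that stays normal to the moving disc midline into commanded test-frame components is a simple rotation, sketched below in Python. The sign conventions and axis labels are assumptions chosen for illustration.

        import numpy as np

        def follower_load_components(flexion_rad, magnitude_n=400.0):
            """Resolve an ideal follower load, kept normal to the disc midline,
            into test-frame components Fx (shear axis) and Fz (axial axis)."""
            fx = magnitude_n * np.sin(flexion_rad)   # shear-axis component
            fz = -magnitude_n * np.cos(flexion_rad)  # compressive axial component
            return fx, fz

        for angle_deg in (-3.0, 0.0, 5.0):
            print(angle_deg, follower_load_components(np.radians(angle_deg)))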

  17. Robotics control using isolated word recognition of voice input

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.

    1977-01-01

    A speech input/output system is presented that can be used to communicate with a task-oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility comprises a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100-command), syntactically constrained vocabulary, and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.
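
    Isolated-word recognizers of this era typically matched an utterance's feature track against stored templates; dynamic time warping (DTW) is a period-typical matcher, sketched below in Python. The paper does not specify its matcher, so DTW and the toy features here are assumptions.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping distance between two feature sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def recognize(features, templates):
            """Return the vocabulary word whose stored template is nearest."""
            return min(templates, key=lambda w: dtw_distance(features, templates[w]))

        # Toy 1-D "feature" tracks standing in for hardware feature-extractor output.
        templates = {"stop": np.array([[1.0], [0.5], [0.1]]),
                     "go":   np.array([[0.1], [0.6], [1.0]])}
        print(recognize(np.array([[0.2], [0.5], [0.9], [1.0]]), templates))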

  18. Using robots in "Hands-on" academic activities: a case study examining speech-generating device use and required skills.

    PubMed

    Adams, Kim; Cook, Al

    2016-01-01

    A 12-year-old girl, Emily, with complex communication needs and severe physical limitations, controlled a Lego robot from a speech-generating device (SGD) to do various "hands-on" academic activities. Emily's teacher and assistive technology (AT) team thought that controlling a robot would motivate Emily to "use her SGD more". A descriptive case study was used because the integration of communication and manipulation technologies is not yet understood. Target activities and goals were chosen by Emily's teacher and AT team. Emily performed several manipulative math activities and engaged in an "acting" activity aimed at increasing her message length. The competency skills needed to control a robot from the SGD were examined, as well as stakeholder satisfaction with the robot system. Emily generated up to 0.4 communication events and 7 robot commands per minute in the activities. Her length of utterance was usually one-word long, but she generated two- and three-word utterances during some activities. Observations of Emily informed a framework to describe the competency skills needed to use SGDs to control robots. Emily and her teacher expressed satisfaction with robot use. Robot use could motivate students to build SGD operational skills and learn educational concepts. Implications for Rehabilitation: Controlling a robot from a speech-generating device (SGD) could increase students' motivation, engagement and understanding in learning educational concepts, because of the hands-on enactive approach. The robot and SGD system was acceptable to the participant and teacher and elicited positive comments from classmates. Thus, it may provide a way for children with disabilities to link with the curriculum and with other students in the classroom. Controlling a robot via SGD presents opportunities to improve augmentative and alternative communication operational, linguistic, social and strategic skills. Careful choice of activities will ensure that the activity requirements focus on the desired target skill, e.g. drawing or playing board games could be helpful to build operational skills and acting out stories could be helpful for building linguistic skills.

  19. Science Autonomy in Robotic Exploration

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; DeVincenzi, Donald (Technical Monitor)

    2001-01-01

    Historical mission operations have involved: (1) return of scientific data; (2) evaluation of these data by scientists; (3) recommendations for future mission activity by scientists; (4) commands for these transmitted to the craft; and (5) the activity being undertaken. This cycle is repeated throughout the mission, with command opportunities once or twice per day. For a rover, this historical cycle is not amenable to rapid long-range traverses or rapid response to any novel or unexpected situations. In addition to real-time response issues, imaging and/or spectroscopic devices can produce tremendous data volumes during a traverse. However, such data volumes can rapidly exceed on-board memory capabilities before the data can be transmitted to Earth. Additionally, the necessary communication bandwidths are restrictive enough that only a small portion of these data can actually be returned to Earth. Such scenarios suggest enabling some science decisions to be made on board the robots. These decisions involve automating various aspects of scientific discovery instead of the electromechanical control, health, and navigation issues associated with robotic operations. The robot retains access to the full data fidelity obtained by its scientific sensors, and is in the best position to implement actions based upon these data. Such an approach would eventually enable the robot to alter observations and assure that only the highest quality data are obtained for analysis. Additionally, the robot can begin to understand what is scientifically interesting and implement alternative observing sequences when the observed data deviate from expectations based upon current theories/models of planetary processes. Such interesting data and/or conclusions can then be prioritized and selectively transmitted to Earth, reducing memory and communications demands. Results of Ames' current work in this area will be presented.

  20. Robotic Exploration: The Role of Science Autonomy

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; DeVincenzi, Donald (Technical Monitor)

    2001-01-01

    Historical mission operations have involved: (1) return of scientific data; (2) evaluation of these data by scientists; (3) recommendations for future mission activity by scientists; (4) commands for these transmitted to the craft; and (5) the activity being undertaken. This cycle is repeated throughout the mission, with command opportunities once or twice per day. For a rover, this historical cycle is not amenable to rapid long-range traverses or rapid response to any novel or unexpected situations. In addition to real-time response issues, imaging and/or spectroscopic devices can produce tremendous data volumes during a traverse. However, such data volumes can rapidly exceed on-board memory capabilities before the data can be transmitted to Earth. Additionally, the necessary communication bandwidths are restrictive enough that only a small portion of these data can actually be returned to Earth. Such scenarios suggest enabling some science decisions to be made on board the robots. These decisions involve automating various aspects of scientific discovery instead of the electromechanical control, health, and navigation issues associated with robotic operations. The robot retains access to the full data fidelity obtained by its scientific sensors, and is in the best position to implement actions based upon these data. Such an approach would eventually enable the robot to alter observations and assure that only the highest quality data are obtained for analysis. Additionally, the robot can begin to understand what is scientifically interesting and implement alternative observing sequences when the observed data deviate from expectations based upon current theories/models of planetary processes. Such interesting data and/or conclusions can then be prioritized and selectively transmitted to Earth, reducing memory and communications demands. Results of Ames' current work in this area will be presented.
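
    The prioritize-by-deviation idea above can be sketched in a few lines of Python: score each observation by how far it departs from the current model's expectation and keep only the most "interesting" for downlink. The scoring rule and the band-ratio summary are assumptions for illustration.

        import numpy as np

        def prioritize_for_downlink(observations, model_prediction, k=2):
            """Rank observations by deviation from the model's expectation and
            keep the k most interesting for transmission."""
            obs = np.asarray(observations, dtype=float)
            scores = np.abs(obs - model_prediction)
            keep = np.argsort(scores)[::-1][:k]
            return sorted(keep.tolist())

        # Spectra summarized by one band ratio; the model expects 0.30.
        band_ratio = [0.31, 0.29, 0.55, 0.30, 0.12]
        print("transmit observations:", prioritize_for_downlink(band_ratio, 0.30))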

  1. Frick, Melvin and Love in the U.S. Lab

    NASA Image and Video Library

    2008-02-13

    S122-E-008251 (13 Feb. 2008) --- Astronauts Steve Frick (top left), STS-122 commander; Leland Melvin (bottom) and Stanley Love, both mission specialists, take a moment for a photo while working the controls of the station's robotic Canadarm2 in the Destiny laboratory of the International Space Station while Space Shuttle Atlantis is docked with the station.

  2. From the laboratory to the soldier: providing tactical behaviors for Army robots

    NASA Astrophysics Data System (ADS)

    Knichel, David G.; Bruemmer, David J.

    2008-04-01

    The Army Future Combat System (FCS) Operational Requirement Document has identified a number of advanced robot tactical behavior requirements to enable the Future Brigade Combat Team (FBCT). The FBCT advanced tactical behaviors include Sentinel Behavior, Obstacle Avoidance Behavior, and Scaled Levels of Human-Machine Control Behavior. The U.S. Army Training and Doctrine Command (TRADOC) Maneuver Support Center (MANSCEN) has also documented a number of robotic behavior requirements for non-FCS Army forces such as the Infantry Brigade Combat Team (IBCT), Stryker Brigade Combat Team (SBCT), and Heavy Brigade Combat Team (HBCT). The general categories of useful robot tactical behaviors include Ground/Air Mobility behaviors, Tactical Mission behaviors, Manned-Unmanned Teaming behaviors, and Soldier-Robot Interface behaviors. Many DoD research and development centers are developing the components necessary for artificial tactical behaviors for ground and air robots, including the Army Research Laboratory (ARL), the U.S. Army Research, Development and Engineering Command (RDECOM), the Space and Naval Warfare (SPAWAR) Systems Center, and the US Army Tank-Automotive Research, Development and Engineering Center (TARDEC), as well as non-DoD labs such as the Department of Energy (DOE). With the support of the Joint Ground Robotics Enterprise (JGRE), through DoD and non-DoD labs, the Army Maneuver Support Center has recently concluded successful field trials of ground and air robots with specialized tactical behaviors and sensors to enable semi-autonomous detection, reporting, and marking of explosive hazards, including Improvised Explosive Devices (IEDs) and landmines. A specific goal of this effort was to assess how collaborative behaviors for multiple unmanned air and ground vehicles can reduce risks to Soldiers and increase efficiency for on- and off-route explosive hazard detection, reporting, and marking. This paper discusses experimental results achieved with a robotic countermine system that utilizes autonomous behaviors and a mixed-initiative control scheme to address the challenges of detecting and marking buried landmines. Emerging requirements for robotic countermine operations are outlined, as are the technologies developed under this effort to address them. A first experiment shows that the resulting system was able to find and mark landmines with a very low level of human involvement. In addition, the data indicate that the robotic system is able to decrease the time to find mines and increase detection accuracy and reliability. Finally, the paper presents current efforts to incorporate new countermine sensors and port the resulting behaviors to two fielded military systems for rigorous assessment.

  3. Polymorphic robotic system controlled by an observing camera

    NASA Astrophysics Data System (ADS)

    Koçer, Bilge; Yüksel, Tugçe; Yümer, M. Ersin; Özen, C. Alper; Yaman, Ulas

    2010-02-01

    Polymorphic robotic systems, which are composed of many modular robots that act in coordination to achieve a goal defined at the system level, have been drawing the attention of industrial and research communities, since they bring additional flexibility in many applications. This paper introduces a new polymorphic robotic system in which the detection and control of the modules are attained by a stationary observing camera. The modules do not have any sensory equipment for positioning or detecting each other. They are self-powered, equipped with wireless communication and locking mechanisms, and are marked to enable the image-processing algorithm to detect the position and orientation of each of them in a two-dimensional space. Since the system does not depend on the modules for positioning and commanding others, in a circumstance where one or more of the modules malfunction, the system will be able to continue operating with the rest of the modules. Moreover, to enhance the compatibility and robustness of the system under different illumination conditions, stationary reference markers are employed together with global positioning markers, and an adaptive filtering parameter decision methodology is included. To the best of the authors' knowledge, this is the first study to introduce a remote camera observer to control the modules of a polymorphic robotic system.

  4. Field experiments using SPEAR: a speech control system for UGVs

    NASA Astrophysics Data System (ADS)

    Chhatpar, Siddharth R.; Blanco, Chris; Czerniak, Jeffrey; Hoffman, Orin; Juneja, Amit; Pruthi, Tarun; Liu, Dongqing; Karlsen, Robert; Brown, Jonathan

    2009-05-01

    This paper reports on a field experiment carried out by the Human Research and Engineering Directorate at Ft. Benning to evaluate the efficacy of using speech to control an Unmanned Ground Vehicle (UGV) concurrently with a hand controller. The SPEAR system, developed by Think-A-Move, provides speech control of UGVs. The system picks up user speech in the ear canal with an in-ear microphone. This property allows it to work efficiently in high-noise environments, where traditional speech systems employing external microphones fail. It has been integrated with an iRobot PackBot 510 with EOD kit. The integrated system allows the hand controller to be supplemented with speech for concurrent control. At Ft. Benning, the integrated system was tested by soldiers from the Officer Candidate School. The experiment had a dual focus: 1) quantitative measurement of the time taken to complete each station and the cognitive load on users; 2) qualitative evaluation of ease-of-use and ergonomics through soldier feedback. Also of significant benefit to Think-A-Move was soldier feedback on the speech-command vocabulary employed: what spoken commands are intuitive, and how the commands should be executed, e.g., limited-motion vs. unlimited-motion commands. Overall results from the experiment are reported in the paper.

  5. Space environments and their effects on space automation and robotics

    NASA Technical Reports Server (NTRS)

    Garrett, Henry B.

    1990-01-01

    Automated and robotic systems will be exposed to a variety of environmental anomalies as a result of adverse interactions with the space environment. As an example, the coupling of electrical transients into control systems, due to EMI from plasma interactions and solar array arcing, may cause spurious commands that could be difficult to detect and correct in time to prevent damage during critical operations. Spacecraft glow and space debris could introduce false imaging information into optical sensor systems. The presentation provides a brief overview of the primary environments (plasma, neutral atmosphere, magnetic and electric fields, and solid particulates) that cause such adverse interactions. The descriptions, while brief, are intended to provide a basis for the other papers presented at this conference which detail the key interactions with automated and robotic systems. Given the growing complexity and sensitivity of automated and robotic space systems, an understanding of adverse space environments will be crucial to mitigating their effects.

  6. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    PubMed Central

    Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng

    2017-01-01

    A brain-machine interface (BMI) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we propose a method of augmented reality (AR) guiding assistance that provides enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments on object manipulation tasks with obstacle avoidance in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123

  7. Biometrically modulated collaborative control for an assistive wheelchair.

    PubMed

    Urdiales, Cristina; Fernandez-Espejo, Blanca; Annicchiaricco, Roberta; Sandoval, Francisco; Caltagirone, Carlo

    2010-08-01

    To operate a wheelchair, people with severe physical disabilities may require assistance, which can be provided by robotization. However, medical experts report that an excess of assistance may lead to loss of residual skills, so it is important to provide just the right amount of assistance. This work proposes a collaborative control system based on weighting the robot's and the user's commands by their respective efficiencies to reactively obtain an emergent controller. Thus, the better the person operates, the more control he/she gains. Tests with volunteers have shown, though, that some users may require extra assistance when they become stressed. Hence, we propose a controller that can change the amount of support by taking supplementary biometric data into account. In this work, we use an off-the-shelf wearable pulse oximeter. Experiments have demonstrated that volunteers could use our wheelchair more efficiently due to the proposed biometrically modulated collaborative control.
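
    The efficiency-weighted blend with a biometric modifier can be sketched in Python as below. The linear way stress shifts weight toward the robot is an assumption; the paper's actual weighting rule is not reproduced.

        import numpy as np

        def collaborative_command(user_cmd, robot_cmd, user_eff, robot_eff, stress=0.0):
            """Weight the user's and the robot's motion commands by their local
            efficiencies; biometric stress (e.g., from a pulse oximeter) shifts
            extra weight to the robot."""
            w_user = user_eff * (1.0 - stress)
            w_robot = robot_eff + user_eff * stress
            blend = w_user * np.asarray(user_cmd) + w_robot * np.asarray(robot_cmd)
            return blend / (w_user + w_robot)

        # A calm, efficient user keeps most of the control authority...
        print(collaborative_command([1.0, 0.0], [0.6, 0.3], user_eff=0.8, robot_eff=0.4))
        # ...while a stressed user cedes authority to the robot.
        print(collaborative_command([1.0, 0.0], [0.6, 0.3], user_eff=0.8, robot_eff=0.4,
                                    stress=0.7))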

  8. Illusory movement perception improves motor control for prosthetic hands

    PubMed Central

    Marasco, Paul D.; Hebert, Jacqueline S.; Sensinger, Jon W.; Shell, Courtney E.; Schofield, Jonathon S.; Thumser, Zachary C.; Nataraj, Raviraj; Beckler, Dylan T.; Dawson, Michael R.; Blustein, Dan H.; Gill, Satinder; Mensh, Brett D.; Granja-Vazquez, Rafael; Newcomb, Madeline D.; Carey, Jason P.; Orzell, Beth M.

    2018-01-01

    To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement’s progress. This largely non-conscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. Here we report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. PMID:29540617

  9. SWARMs Ontology: A Common Information Model for the Cooperation of Underwater Robots

    PubMed Central

    Li, Xin; Bilbao, Sonia; Martín-Wanton, Tamara; Bastos, Joaquim; Rodriguez, Jonathan

    2017-01-01

    In order to facilitate cooperation between underwater robots, it is a must for robots to exchange information with unambiguous meaning. However, heterogeneity, existing in information pertaining to different robots, is a major obstruction. Therefore, this paper presents a networked ontology, named the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) ontology, to address information heterogeneity and enable robots to have the same understanding of exchanged information. The SWARMs ontology uses a core ontology to interrelate a set of domain-specific ontologies, including the mission and planning, the robotic vehicle, the communication and networking, and the environment recognition and sensing ontology. In addition, the SWARMs ontology utilizes ontology constructs defined in the PR-OWL ontology to annotate context uncertainty based on the Multi-Entity Bayesian Network (MEBN) theory. Thus, the SWARMs ontology can provide both a formal specification for information that is necessarily exchanged between robots and a command and control entity, and also support for uncertainty reasoning. A scenario on chemical pollution monitoring is described and used to showcase how the SWARMs ontology can be instantiated, be extended, represent context uncertainty, and support uncertainty reasoning. PMID:28287468

  10. Designing speech-based interfaces for telepresence robots for people with disabilities.

    PubMed

    Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David

    2013-06-01

    People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.

  11. Sandia National Laboratories proof-of-concept robotic security vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, J.J.; Jones, D.P.; Klarer, P.R.

    1989-01-01

    Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.

  12. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible-joint arm and a Unimation PUMA 560 arm; these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT-based MRAC algorithms had several problems. The original algorithm guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments of the algorithm expanded the number of plants that could be controlled; however, a steady-state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.
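
    To illustrate the model-reference idea in miniature, the Python sketch below adapts two gains online with the gradient (MIT-rule) law for a first-order plant. This is far simpler than the CGT-based algorithms of the report and only shows the basic mechanism of forcing a plant with unknown parameters to track a reference model; all numbers are assumptions.

        # Plant dx/dt = a*x + b*u, reference model dxm/dt = -am*xm + am*r.
        a, b = -1.0, 2.0           # plant parameters, unknown to the controller
        am, gamma, dt = 4.0, 0.5, 0.001
        x = xm = 0.0
        theta1 = theta2 = 0.0      # adaptive feedforward and feedback gains
        for step in range(40000):  # 40 s of simulated time
            r = 1.0 if (step * dt) % 4 < 2 else -1.0   # square-wave reference
            u = theta1 * r - theta2 * x
            e = x - xm                                  # model-following error
            x += (a * x + b * u) * dt
            xm += (-am * xm + am * r) * dt
            theta1 += -gamma * e * xm * dt              # MIT-rule gain updates
            theta2 += gamma * e * x * dt
        print("final gains:", theta1, theta2, "final error:", e)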

  13. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is Brigham Young University students' second-year entry in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system, which allowed the robot to perform better on a variety of terrains and resolved issues with last year's design. To control the wheelchair while retaining its robust motor controls, the wheelchair joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
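
    A joystick-emulating board driven over a serial port might be commanded as in the Python sketch below (using the pyserial package). The port name and the 3-byte packet layout are purely hypothetical; the paper only states that the board emulates the joystick and accepts commands over a serial connection.

        import struct
        import serial  # pyserial

        def to_byte(v):
            """Map an axis value in [-1, 1] to an unsigned byte centered at 127."""
            return max(0, min(255, int(round(127 + 127 * v))))

        def send_drive_command(port, x_axis, y_axis):
            # Hypothetical packet: header byte, x-axis byte, y-axis byte.
            port.write(struct.pack("BBB", 0xAA, to_byte(x_axis), to_byte(y_axis)))

        if __name__ == "__main__":
            with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
                send_drive_command(port, 0.0, 0.5)   # half-speed forward
                send_drive_command(port, 0.0, 0.0)   # stop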

  14. Mobile app for human-interaction with sitter robots

    NASA Astrophysics Data System (ADS)

    Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.

    2017-05-01

    Human environments are often unstructured and unpredictable, making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot to semi-structured hospital environments is an easier problem to tackle, with results that could benefit the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation of robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera, and other native interfaces, while providing failure-mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that enables a patient-sitter HMI, and we include experimental results with a small number of users demonstrating that the concept is sound and scalable.

  15. A Survey of Robotic Technology.

    DTIC Science & Technology

    1983-07-01

    developed the following definition of a robot: A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized...subroutines commands to specific actuators, computations based on sensor data, etc. For instance, the job might be to assemble an automobile...the set-up developed at Draper Labs to enable a robot to assemble an automobile alternator. The assembly operation is impressive to watch. The number

  16. The AST3 controlling and operating software suite for automatic sky survey

    NASA Astrophysics Data System (ADS)

    Hu, Yi; Shang, Zhaohui; Ma, Bin; Hu, Keliang

    2016-07-01

    We have developed a specialized software package, called ast3suite, to achieve remote control and automatic sky surveying for AST3 (Antarctic Survey Telescope) from scratch. It includes several daemon servers and many basic commands. Each program performs only a single task, and the programs work together to make AST3 a robotic telescope. A survey script calls the basic commands to carry out an automatic sky survey. Ast3suite was carefully tested in Mohe, China in 2013 and was used at Dome A, Antarctica in 2015 and 2016 with the real hardware for practical sky surveys. Both the test results and practical use showed that ast3suite worked very well without any manual intervention, as expected.
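
    In the spirit of the one-task-per-program design described above, a survey script might compose basic commands like this; the command names here are invented placeholders, not the actual ast3suite binaries.

      import subprocess

      def run(cmd):
          # Invoke one single-task command; fail loudly if it reports an error.
          subprocess.run(cmd, check=True)

      # Placeholder command names; the real ast3suite binaries may differ.
      targets = [("field_001", 120), ("field_002", 120)]
      for name, exposure_s in targets:
          run(["pointtelescope", name])        # slew to the survey field
          run(["takeimage", str(exposure_s)])  # expose the camera
          run(["archivedata", name])           # hand the frame to the archive daemon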

  17. Posture Control-Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses.

    PubMed

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspiration from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints, in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF); targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements; our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demand. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human-likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements but may not be aware of the additionally required postural control, which then needs to be implemented into the robot.
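
    A minimal sketch of the suggested benchmarking criterion, assuming sway is recorded as a COM excursion time series; the function name and normalization form are our reading of the text, not the authors' code.

      def normalized_sway(com_positions_m, robot_weight_n, com_height_m):
          # Evoked sway magnitude (peak-to-peak COM excursion), normalized to
          # robot weight and COM height as the benchmarking criterion suggests.
          sway = max(com_positions_m) - min(com_positions_m)
          return sway / (robot_weight_n * com_height_m)

      # Example: 4 cm peak-to-peak sway for a 600 N robot with COM at 0.8 m.
      print(normalized_sway([0.01, 0.05, 0.02], 600.0, 0.8))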

  18. Soldier experiments and assessments using SPEAR speech control system for UGVs

    NASA Astrophysics Data System (ADS)

    Brown, Jonathan; Blanco, Chris; Czerniak, Jeffrey; Hoffman, Brian; Hoffman, Orin; Juneja, Amit; Ngia, Lester; Pruthi, Tarun; Liu, Dongqing

    2010-04-01

    This paper reports on a Soldier Experiment performed by the Army Research Lab's Human Research and Engineering Directorate (HRED) Field Element located at the Maneuver Center of Excellence, Ft. Benning, and a Limited Use Assessment conducted by the Marine Corps Forces Pacific Command Experimentation Center (MEC) at Camp Pendleton, evaluating the effectiveness of using speech commands to control an Unmanned Ground Vehicle (UGV). SPEAR, developed by Think-A-Move, Ltd., provides speech control of UGVs. SPEAR detects user speech in the ear canal with an earpiece containing an in-ear microphone. The system design provides up to 30 dB of passive noise reduction, enabling it to work well in high-noise environments where traditional speech systems, using external microphones, fail; it also utilizes a proprietary speech recognition engine. SPEAR has been integrated with iRobot's PackBot 510 with FasTac Kit, and with the Multi-Robot Operator Control Unit (MOCU), developed by SPAWAR Systems Center Pacific. These integrated systems allow speech to supplement the hand-controller for multi-modal control of different UGV functions simultaneously. HRED's experiment measured the impact of SPEAR on reducing the cognitive load placed on UGV operators and the time to complete specific tasks. Army NCOs and Officer School Candidates participated in this experiment, which found that speech control was faster than manual control for completing tasks requiring menu navigation, and that it reduced the cognitive load on UGV operators. The MEC assessment examined speech commands used for two different missions: Route Clearance, and Cordon and Search; participants included Explosive Ordnance Disposal Technicians and Combat Engineers. The majority of the Marines thought it was easier to complete the mission scenarios with SPEAR than with manual controls alone, and that using SPEAR improved their situational awareness. Overall results of these assessments are reported in the paper, along with possible applications to autonomous mine detection systems.

  19. University of Maryland walking robot: A design project for undergraduate students

    NASA Technical Reports Server (NTRS)

    Olsen, Bob; Bielec, Jim; Hartsig, Dave; Oliva, Mani; Grotheer, Phil; Hekmat, Morad; Russell, David; Tavakoli, Hossein; Young, Gary; Nave, Tom

    1990-01-01

    The design and construction required that the walking robot machine be capable of completing a number of tasks, including walking in a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gearbox and crank-arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled either through a user-operated remote tether or by the on-board computer for the execution of control commands. Absolute encoders are attached to all motors (leg, main drive, and Bigfoot) to provide the control computer with information regarding the status of the motors (up-down motion, forward or reverse rotation). Long- and short-range infrared sensors provide the computer with feedback information regarding the machine's position relative to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.

  20. Utilizing Glove-Based Gestures and a Tactile Vest Display for Covert Communications and Robot Control

    DTIC Science & Technology

    2014-06-01

    transmitted from a controller mechanism that contains inertial measurement unit (IMU) sensors to sense rotation and acceleration of movement. Earlier...assets, and standard hand signal commands can be presented to human team members via a variety of modalities. IMU sensor technologies placed on the body...obstacle event (e.g., climbing, crawling, combat roll, running) and between obstacles (i.e., walking). The following analyses are for each task

  1. Design and validation of an intelligent wheelchair towards a clinically-functional outcome.

    PubMed

    Boucher, Patrice; Atrash, Amin; Kelouwani, Sousso; Honoré, Wormser; Nguyen, Hai; Villemure, Julien; Routhier, François; Cohen, Paul; Demers, Louise; Forget, Robert; Pineau, Joelle

    2013-06-17

    Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared with other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (neither statistically significant), compared with the scores obtained in the conventional driving mode. This analysis was performed with regular users who had over 6 years of wheelchair driving experience, compared with approximately one half-hour of training in the autonomous mode. The platform tested in these experiments is among the most extensively validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode.

  2. Motor-Skill Learning in an Insect Inspired Neuro-Computational Control System

    PubMed Central

    Arena, Eleonora; Arena, Paolo; Strauss, Roland; Patané, Luca

    2017-01-01

    In nature, insects show impressive adaptation and learning capabilities. The proposed computational model takes inspiration from specific structures of the insect brain: after proposing key hypotheses on the direct involvement of the mushroom bodies (MBs) and on their neural organization, we developed a new architecture for motor learning to be applied in insect-like walking robots. The proposed model is a nonlinear control system based on spiking neurons. MBs are modeled as a nonlinear recurrent spiking neural network (SNN) with novel characteristics, able to memorize time evolutions of key parameters of the neural motor controller, so that existing motor primitives can be improved. The adopted control scheme enables the structure to cope efficiently with goal-oriented behavioral motor tasks. Here, a six-legged structure, showing a steady-state exponentially stable locomotion pattern, is exposed to the need to learn new motor skills: moving through the environment, the structure is able to modulate motor commands and implements an obstacle climbing procedure. Experimental results on a simulated hexapod robot are reported; they were obtained in a dynamic simulation environment, and the robot mimics the structures of Drosophila melanogaster. PMID:28337138
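
    As background for readers unfamiliar with spiking controllers, the sketch below steps a leaky integrate-and-fire neuron, a common SNN building block (much simpler than the recurrent MB model used in the paper); all parameters are illustrative.

      def lif_step(v, current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
          # One Euler step of a leaky integrate-and-fire membrane.
          v += dt * (-v / tau + current)
          if v >= v_thresh:
              return v_reset, True              # spike, then reset
          return v, False

      v, spikes = 0.0, 0
      for _ in range(1000):                     # 1 s of simulated time
          v, spiked = lif_step(v, current=60.0) # constant driving current
          spikes += spiked
      print("spikes in 1 s:", spikes)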

  3. Combining a hybrid robotic system with a brain-machine interface for the rehabilitation of reaching movements: A case study with a stroke patient.

    PubMed

    Resquin, F; Ibañez, J; Gonzalez-Vargas, J; Brunetti, F; Dimbwadyo, I; Alves, S; Carrasco, L; Torres, L; Pons, Jose Luis

    2016-08-01

    Reaching and grasping are two of the functions most affected after stroke. Hybrid rehabilitation systems combining Functional Electrical Stimulation (FES) with robotic devices have been proposed in the literature to improve rehabilitation outcomes. In this work, we present the combined use of a hybrid robotic system with an EEG-based Brain-Machine Interface (BMI) that detects the user's movement intentions to trigger the assistance. The platform was tested in a single session with a stroke patient. The results show that the patient could successfully interact with the BMI and command the assistance of the hybrid system with low latencies. In addition, the Feedback Error Learning controller implemented in this system was able to adjust the FES intensity required to perform the task.

  4. Robotic assembly and maintenance of future space stations based on the ISS mission operations experience

    NASA Astrophysics Data System (ADS)

    Rembala, Richard; Ower, Cameron

    2009-10-01

    MDA has provided 25 years of real-time engineering support to Shuttle (Canadarm) and ISS (Canadarm2) robotic operations, beginning with the second shuttle flight, STS-2, in 1981. In this capacity, our engineering support teams have become familiar with the evolution of mission planning and flight support practices for robotic assembly and support operations at mission control. This paper presents observations on existing practices and ideas for reducing the operational overhead of present programs. It also identifies areas where robotic assembly and maintenance of future space stations and space-based facilities could be accomplished more effectively and efficiently. Specifically, our experience shows that past and current space Shuttle and ISS assembly and maintenance operations have relied on extensive preflight mission planning and training to prepare the flight crews for the entire mission. This has been driven by the communication latency between the Earth and the remote location of the space station/vehicle, as well as by the lack of consistent robotic and interface standards. While the early Shuttle and ISS architectures included robotics, their eventual benefits to overall assembly and maintenance operations could have been greater had robotics been incorporated as a major design driver from the beginning of the system design. Lessons learned from the ISS highlight the potential benefits of real-time health monitoring systems, consistent standards for robotic interfaces and procedures, and automated script-driven ground control in future space station assembly and logistics architectures. In addition, advances in computer vision systems and in remote-operation, supervised autonomous command and control systems offer the potential to adjust the balance between assembly and maintenance tasks performed using extravehicular activity (EVA), extravehicular robotics (EVR), and EVR controlled from the ground, relieving the EVA astronaut, and even the robotic operator on orbit, of some of the more routine tasks. Overall, these proposed approaches, when used effectively, offer the potential to drive down operations overhead and allow more efficient and productive robotic operations.

  5. Analyzing Cyber-Physical Threats on Robotic Platforms.

    PubMed

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is susceptible to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks, and an impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for the physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks allowed sensitive information on the robot to be hijacked. To mitigate these security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.
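
    A standard mitigation for the command-integrity attacks described above is message authentication, for example an HMAC over each command; the sketch below is our generic illustration, not a mechanism from the paper.

      import hashlib, hmac

      SECRET = b"pre-shared key provisioned offline"   # hypothetical key

      def sign(command: bytes) -> bytes:
          return hmac.new(SECRET, command, hashlib.sha256).digest()

      def verify(command: bytes, tag: bytes) -> bool:
          # Constant-time comparison resists timing side channels.
          return hmac.compare_digest(sign(command), tag)

      msg = b"setVel 300 300"
      tag = sign(msg)
      print(verify(msg, tag))                   # True: authentic command
      print(verify(b"setVel 900 900", tag))     # False: tampered command rejected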

  6. Analyzing Cyber-Physical Threats on Robotic Platforms †

    PubMed Central

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is susceptible to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks, and an impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for the physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks allowed sensitive information on the robot to be hijacked. To mitigate these security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403

  7. Application of model reference adaptive control to a flexible remote manipulator arm

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.; Balas, M. J.

    1986-01-01

    An exact modal state-space representation is derived in detail for a single-link, flexible remote manipulator with a noncollocated sensor and actuator. A direct model following adaptive controller is designed to control the torque at the pinned end of the arm so as to command the free end to track a prescribed sinusoidal motion. Conditions that must be satisfied in order for the controller to work are stated. Simulation results to date are discussed along with the potential of the model following adaptive control scheme in robotics and space environments.

  8. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    PubMed Central

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspiration from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints, in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF); targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements; our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demand. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human-likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements but may not be aware of the additionally required postural control, which then needs to be implemented into the robot. PMID:29867428

  9. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012573 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  10. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012571 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  11. Extraction of user's navigation commands from upper body force interaction in walker assisted gait.

    PubMed

    Frizera Neto, Anselmo; Gallego, Juan A; Rocon, Eduardo; Pons, José L; Ceres, Ramón

    2010-08-05

    Advances in technology make possible the incorporation of sensors and actuators in rollators, building safer robots and extending the use of walkers to a more diverse population. This paper presents a new method for the extraction of navigation-related components from upper-body force interaction data in walker-assisted gait. A filtering architecture is designed to cancel: (i) the high-frequency noise caused by vibrations of the walker's structure due to irregularities in the terrain or the walker's wheels, and (ii) the cadence-related force components caused by the user's trunk oscillations during gait. As a result, a third component, related to the user's navigation commands, is distinguished. For the cancelation of high-frequency noise, a Benedict-Bordner g-h filter was designed, presenting very low values of kinematic tracking error ((2.035 ± 0.358) × 10^-2 kgf) and delay ((1.897 ± 0.3697) × 10^1 ms). A Fourier Linear Combiner filtering architecture was implemented for the adaptive attenuation of about 80% of the energy of the cadence-related components from the force data. This was done without compromising the information contained in the frequencies close to such notch filters. The presented methodology offers an effective cancelation of the undesired components from the force data, allowing the system to extract the user's voluntary navigation commands in real time. Based on this real-time identification of the user's voluntary commands, a classical approach to the control architecture of the robotic walker is being developed, in order to obtain stable and safe user-assisted locomotion.
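
    A Benedict-Bordner g-h filter of the kind used here for high-frequency noise can be sketched in a few lines; the gain and sample rate below are illustrative, with h tied to g by the Benedict-Bordner relation h = g^2/(2 - g).

      def make_gh_filter(g, dt):
          # Benedict-Bordner design: h is derived from g as h = g^2 / (2 - g).
          h = g * g / (2.0 - g)
          x, dx = 0.0, 0.0                # smoothed value and its rate
          def step(z):
              nonlocal x, dx
              x_pred = x + dt * dx        # predict one sample ahead
              r = z - x_pred              # innovation (measurement residual)
              x = x_pred + g * r
              dx = dx + h * r / dt
              return x
          return step

      f = make_gh_filter(g=0.2, dt=0.01)            # e.g., 100 Hz force samples
      for z in (0.0, 0.1, 0.3, 0.2, 0.25, 0.3):     # noisy force readings (kgf)
          print(round(f(z), 4))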

  12. Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot

    PubMed Central

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2016-01-01

    Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot’s body was confirmed when operators controlled the robot either by performing the desired motion with their own body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands into robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown a critical weakening effect on body ownership illusions; however, the delay-robustness of BOT during BCI-control raised a question about the interaction between proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions: motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI’s potential for inducing stronger agency-driven illusions by building a direct communication between the brain and the controlled body, and therefore removing awareness from the subject’s own body. PMID:27654174

  13. A multimodal interface for real-time soldier-robot teaming

    NASA Astrophysics Data System (ADS)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools and toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for the successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be used successfully in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  14. Evolving technologies for Space Station Freedom computer-based workstations

    NASA Technical Reports Server (NTRS)

    Jensen, Dean G.; Rudisill, Marianne

    1990-01-01

    Viewgraphs on evolving technologies for Space Station Freedom computer-based workstations are presented. The human-computer interaction software environment modules are described. The following topics are addressed: command and control workstation concept; cupola workstation concept; Japanese experiment module RMS workstation concept; remote devices controlled from workstations; orbital maneuvering vehicle free flyer; remote manipulator system; Japanese experiment module exposed facility; Japanese experiment module small fine arm; flight telerobotic servicer; human-computer interaction; and workstation/robotics related activities.

  15. The Evolution of Three Dimensional Visualization for Commanding the Mars Rovers

    NASA Technical Reports Server (NTRS)

    Hartman, Frank R.; Wright, John; Cooper, Brian

    2014-01-01

    NASA's Jet Propulsion Laboratory has built and operated four rovers on the surface of Mars. Two- and three-dimensional visualization has been extensively employed to command both the mobility and robotic arm operations of these rovers. Stereo visualization has been an important component in this set of visualization techniques. This paper discusses the progression of the implementation and use of visualization techniques for in-situ operations of these robotic missions. Illustrative examples are drawn from the results of using these techniques over more than ten years of surface operations on Mars.

  16. Towards an SEMG-based tele-operated robot for masticatory rehabilitation.

    PubMed

    Kalani, Hadi; Moghimi, Sahar; Akbarzadeh, Alireza

    2016-08-01

    This paper proposes real-time trajectory generation for a masticatory rehabilitation robot based on surface electromyography (SEMG) signals. We used two Gough-Stewart robots: the first was used as the rehabilitation robot, while the second was developed to model the human jaw system. The legs of the rehabilitation robot were controlled by the SEMG signals of a tele-operator to reproduce the masticatory motion of the human jaw, supposedly mounted on the moving platform, through prediction of the location of a reference point. Actual jaw motions and the SEMG signals from the masticatory muscles were recorded and used as output and input, respectively. Three different methods, namely time-delayed neural networks, time-delayed fast orthogonal search, and the time-delayed Laguerre expansion technique, were employed and compared to predict the kinematic parameters. The optimal model structures, as well as the input delays, were obtained for each model and each subject through a genetic algorithm. Equations of motion were obtained by the virtual work method. A fuzzy method was employed to develop a fuzzy impedance controller. Moreover, a jaw model was developed to demonstrate the time-varying behavior of the muscle lengths during the rehabilitation process. The three modeling methods were capable of providing reasonably accurate estimations of the kinematic parameters, although the accuracy and training/validation speed of time-delayed fast orthogonal search were higher than those of the other two methods. Also, during a simulation study, the fuzzy impedance scheme proved successful in controlling the moving platform for accurate navigation of the reference point along the desired trajectory. SEMG has been widely used as a control command source for prostheses and exoskeleton robots; however, in the current study, by employing the proposed rehabilitation robot, the complete continuous profile of the clenching motion was reproduced in the sagittal plane. Copyright © 2016. Published by Elsevier Ltd.

  17. DEVS representation of dynamical systems - Event-based intelligent control. [Discrete Event System Specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.

    1989-01-01

    It is shown how systems can be advantageously represented as discrete-event models by using DEVS (discrete-event system specification), a set-theoretic formalism. Such DEVS models provide a basis for the design of event-based logic control. In this control paradigm, the controller expects to receive confirming sensor responses to its control commands within definite time windows determined by its DEVS model of the system under control. The event-based control paradigm is applied to advanced robotics and intelligent automation, showing how classical process control can readily be interfaced with rule-based symbolic reasoning systems.
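
    The time-window idea can be illustrated with a minimal check (our sketch, not the paper's formalism): a confirming sensor event must arrive inside the window predicted by the DEVS model, otherwise the controller flags a fault.

      def check_response(command_time, response_time, window=(0.05, 0.30)):
          # The confirming sensor event must fall inside the time window that
          # the DEVS model of the controlled system predicts for this command.
          elapsed = response_time - command_time
          if elapsed < window[0]:
              return "too early: unexpected event, flag for diagnosis"
          if elapsed > window[1]:
              return "timeout: command presumed to have failed"
          return "confirmed"

      print(check_response(0.0, 0.12))   # confirmed
      print(check_response(0.0, 0.50))   # timeout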

  18. Automated constraint checking of spacecraft command sequences

    NASA Astrophysics Data System (ADS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Spitale, Joseph M.; Le, Dang

    1995-01-01

    Robotic spacecraft are controlled by onboard sets of commands called "sequences." Determining that sequences will have the desired effect on the spacecraft can be expensive in terms of both labor and computer coding time, with different particular costs for different types of spacecraft. Specification languages and appropriate user interfaces to those languages can be used to make the most effective use of engineering validation time. This paper describes one specification and verification environment ("SAVE") designed for validating that command sequences have not violated any flight rules. The SAVE system was subsequently adapted for flight use on the TOPEX/Poseidon spacecraft. The relationship of this work to rule-based artificial intelligence and to other specification techniques is discussed, as are the issues that arise in the transfer of technology from a research prototype to a full flight system.
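
    In spirit, flight-rule checking reduces to evaluating predicates against a modeled spacecraft state as each command takes effect; the toy sketch below uses invented rules and commands, not the SAVE specification language.

      # Invented flight rules: each maps a modeled state snapshot to pass/fail.
      RULES = {
          "no heater while imaging": lambda s: not (s["camera_on"] and s["heater_on"]),
          "power budget <= 100 W": lambda s: s["power_w"] <= 100.0,
      }

      def check_sequence(commands):
          state = {"camera_on": False, "heater_on": False, "power_w": 20.0}
          violations = []
          for t, name, effect in commands:     # effect mutates the modeled state
              effect(state)
              for rule, ok in RULES.items():
                  if not ok(state):
                      violations.append((t, name, rule))
          return violations

      seq = [
          (0, "HEATER_ON", lambda s: s.update(heater_on=True, power_w=s["power_w"] + 40)),
          (5, "CAMERA_ON", lambda s: s.update(camera_on=True, power_w=s["power_w"] + 50)),
      ]
      print(check_sequence(seq))               # both rules flagged at t = 5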

  19. Model reference adaptive control of flexible robots in the presence of sudden load changes

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory

    1991-01-01

    Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive-real condition, such MRAC procedures are designed so that a feedforward-augmented output follows the reference model output, resulting in an ultimately bounded, rather than zero, output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The resulting simulations show a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
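
    For orientation, the classic MIT-rule gain update, a much simpler scheme than the command generator tracker used here, illustrates the core MRAC idea of adapting a controller gain to drive the plant output toward a reference model output; all values are illustrative.

      # MIT-rule MRAC for a first-order plant with unknown gain k:
      #   plant:  dy/dt  = -a*y  + k*u,   controller u = theta*r
      #   model:  dym/dt = -a*ym + k0*r   (desired closed-loop behavior)
      dt, gamma = 0.001, 2.0        # integration step and adaptation gain
      a, k, k0 = 1.0, 2.0, 1.0      # k is "unknown" to the controller
      y = ym = theta = 0.0
      for step in range(20000):     # 20 s of simulated time
          r = 1.0 if (step // 5000) % 2 == 0 else -1.0  # square-wave command
          u = theta * r
          y += dt * (-a * y + k * u)
          ym += dt * (-a * ym + k0 * r)
          e = y - ym
          theta += dt * (-gamma * e * ym)   # MIT rule: dtheta/dt = -gamma*e*ym
      print(round(theta, 3))        # settles near k0/k = 0.5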

  20. Vulnerability Analysis of the Player Command and Control Protocol

    DTIC Science & Technology

    2012-06-14

    client plug-ins currently only exist for C++, Java, and Python. Player is designed as a client-server architecture in which robots running Player...[garbled packet-format listing; the recoverable fields include a double-typed angular velocity and a uint8_t motor state]

  1. Stretchable, Flexible, Scalable Smart Skin Sensors for Robotic Position and Force Estimation.

    PubMed

    O'Neill, John; Lu, Jason; Dockter, Rodney; Kowalewski, Timothy

    2018-03-23

    The design and validation of a continuously stretchable and flexible skin sensor for collaborative robotic applications is outlined. The skin consists of PDMS doped with carbon nanotubes, with the addition of conductive fabric, connected by only five wires to a simple microcontroller. Its accuracy is characterized in position as well as force, and the skin is also tested under uniaxial stretch. Two examples of practical implementations in collaborative robotic applications are also given. The stationary position estimate has an RMSE of 7.02 mm, and the sensor error stays within 2.5 ± 1.5 mm even under stretch. The skin consistently provides an emergency stop command at only 0.5 N of force and is shown to maintain a collaboration force of 10 N in a collaborative control experiment.

  2. Force reflecting hand controller for manipulator teleoperation

    NASA Technical Reports Server (NTRS)

    Bryfogle, Mark D.

    1991-01-01

    A force reflecting hand controller based upon a six degree of freedom fully parallel mechanism, often termed a Stewart Platform, has been designed, constructed, and tested as an integrated system with a slave robot manipulator test bed. A force reflecting hand controller comprises a kinesthetic device capable of transmitting position and orientation commands to a slave robot manipulator while simultaneously representing the environmental interaction forces of the slave manipulator back to the operator through actuators driving the hand controller mechanism. The Stewart Platform was chosen as a novel approach to improve force reflecting teleoperation because of its inherently high ratio of load generation capability to system mass content and the correspondingly high dynamic bandwidth. An additional novelty of the program was to implement closed loop force and torque control about the hand controller mechanism by equipping the handgrip with a six degree of freedom force and torque measuring cell. The mechanical, electrical, computer, and control systems are discussed and system tests are presented.
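
    The inverse kinematics that make the Stewart Platform attractive for this use are closed-form: each actuator length is the distance from its base attachment point to the platform attachment point under the commanded pose. The sketch below uses an invented hexagonal geometry and, for brevity, only a yaw rotation.

      import math

      def rot_z(p, yaw):
          # Rotate a 3-vector about z; pitch/roll omitted to keep the sketch short.
          c, s = math.cos(yaw), math.sin(yaw)
          return (c * p[0] - s * p[1], s * p[0] + c * p[1], p[2])

      def leg_lengths(base_pts, plat_pts, position, yaw):
          # Stewart Platform inverse kinematics: each actuator length is the
          # distance from its base joint to the pose-transformed platform joint.
          lengths = []
          for b, p in zip(base_pts, plat_pts):
              q = rot_z(p, yaw)
              d = [q[i] + position[i] - b[i] for i in range(3)]
              lengths.append(math.sqrt(sum(x * x for x in d)))
          return lengths

      # Invented hexagonal joint layout (meters), purely for illustration.
      base = [(math.cos(math.radians(60 * i + 30)),
               math.sin(math.radians(60 * i + 30)), 0.0) for i in range(6)]
      plat = [(0.5 * math.cos(math.radians(60 * i)),
               0.5 * math.sin(math.radians(60 * i)), 0.0) for i in range(6)]
      print(leg_lengths(base, plat, position=(0.0, 0.0, 1.0), yaw=0.1))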

  3. Mission simulation as an approach to develop requirements for automation in Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Eckelkamp, R. E.; Barta, D. J.; Dragg, J.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    This paper examines mission simulation as an approach to develop requirements for automation and robotics for Advanced Life Support Systems (ALSS). The focus is on requirements and applications for command and control, control and monitoring, situation assessment and response, diagnosis and recovery, adaptive planning and scheduling, and other automation applications in addition to mechanized equipment and robotics applications to reduce the excessive human labor requirements to operate and maintain an ALSS. Based on principles of systems engineering, an approach is proposed to assess requirements for automation and robotics using mission simulation tools. First, the story of a simulated mission is defined in terms of processes with attendant types of resources needed, including options for use of automation and robotic systems. Next, systems dynamics models are used in simulation to reveal the implications for selected resource allocation schemes in terms of resources required to complete operational tasks. The simulations not only help establish ALSS design criteria, but also may offer guidance to ALSS research efforts by identifying gaps in knowledge about procedures and/or biophysical processes. Simulations of a planned one-year mission with 4 crewmembers in a Human Rated Test Facility are presented as an approach to evaluation of mission feasibility and definition of automation and robotics requirements.

  4. Combined analysis of cortical (EEG) and nerve stump signals improves robotic hand control.

    PubMed

    Tombini, Mario; Rigosa, Jacopo; Zappasodi, Filippo; Porcaro, Camillo; Citi, Luca; Carpaneto, Jacopo; Rossini, Paolo Maria; Micera, Silvestro

    2012-01-01

    Interfacing an amputee's upper-extremity stump nerves to control a robotic hand requires training of the individual and algorithms to process interactions between cortical and peripheral signals. The objective was to evaluate, for the first time, whether EEG-driven analysis of peripheral neural signals recorded as an amputee practices could improve the classification of motor commands. Four thin-film longitudinal intrafascicular electrodes (tf-LIFE-4) were implanted in the median and ulnar nerves of the stump in the distal upper arm for 4 weeks. Artificial intelligence classifiers were implemented to analyze LIFE signals recorded while the participant tried to perform 3 different hand and finger movements as pictures representing these tasks were randomly presented on a screen. In the final week, the participant was trained to perform the same movements with a robotic hand prosthesis through modulation of tf-LIFE-4 signals. To improve the classification performance, an event-related desynchronization/synchronization (ERD/ERS) procedure was applied to the EEG data to identify the exact timing of each motor command. Real-time control of neural (motor) output was achieved by the participant. By focusing electroneurographic (ENG) signal analysis on an EEG-driven time window, movement classification performance improved. After training, the participant regained normal modulation of background rhythms for movement preparation (α/β band desynchronization) in the sensorimotor area contralateral to the missing limb. Moreover, coherence analysis found a restored α band synchronization of the Rolandic area with frontal and parietal ipsilateral regions, similar to that observed in the opposite hemisphere for movement of the intact hand. Of note, phantom limb pain (PLP) resolved for several months. Combining information from both cortical (EEG) and stump nerve (ENG) signals improved the classification performance compared with tf-LIFE signal processing alone; training led to cortical reorganization and mitigation of PLP.

  5. Preliminary results on noncollocated torque control of space robot actuators

    NASA Technical Reports Server (NTRS)

    Tilley, Scott W.; Francis, Colin M.; Emerick, Ken; Hollars, Michael G.

    1989-01-01

    In the Space Station era, more operations will be performed robotically in space in areas such as servicing, assembly, and experiment tending. These robots may have various sets of requirements for accuracy, speed, and force generation, but there will be design constraints such as size, mass, and power dissipation limits. For actuation, a leading motor candidate is the dc brushless type, and there are numerous potential drive trains, each with its own advantages and disadvantages. This experiment uses a harmonic drive and addresses some of its inherent limitations, namely its backdriveability and low-frequency structural resonances. These effects are controlled and diminished by instrumenting the actuator system with a torque transducer on the output shaft. This noncollocated loop is closed to ensure that the commanded torque is accurately delivered to the manipulator link. The actuator system is modelled and its essential parameters identified. The nonlinear model for simulations includes inertias, gearing, stiction, flexibility, and the effects of output load variations. A linear model is extracted and used for designing the noncollocated torque and position feedback loops. These loops are simulated with the structural frequency encountered in the testbed system. Simulation results are given for various position commands. The use of torque feedback is demonstrated to yield superior performance in settling time and positioning accuracy. The experimental setup, which is nearing completion, consists of a bench-mounted motor and harmonic drive actuator system. A torque transducer and two position encoders, each with sufficient resolution and bandwidth, will provide sensory information. Parameters of the physical system are being identified and matched to analytical predictions. Initial feedback control laws will be incorporated in the bench test equipment, and various experiments will be run to validate the designs. The status of these experiments is given.

  6. Deployment and early experience with remote-presence patient care in a community hospital.

    PubMed

    Petelin, J B; Nelson, M E; Goodman, J

    2007-01-01

    The introduction of the RP6 (InTouch Health, Santa Barbara, CA, USA) remote-presence "robot" appears to offer a useful telemedicine device. The authors describe the deployment and early experience with the RP6 in a community hospital and provided a live demonstration of the system on April 16, 2005 during the Emerging Technologies Session of the 2005 SAGES Meeting in Fort Lauderdale, Florida. The RP6 is a 5-ft 4-in. tall, 215-pound robot that can be remotely controlled from an appropriately configured computer located anywhere on the Internet (i.e., on this planet). The system is composed of a control station (a computer at the central station), a mechanical robot, a wireless network (at the remote facility: the hospital), and a high-speed Internet connection at both the remote (hospital) and central locations. The robot itself houses a rechargeable power supply. Its hardware and software allows communication over the Internet with the central station, interpretation of commands from the central station, and conversion of the commands into mechanical and nonmechanical actions at the remote location, which are communicated back to the central station over the Internet. The RP6 system allows the central party (e.g., physician) to control the movements of the robot itself, see and hear at the remote location (hospital), and be seen and heard at the remote location (hospital) while not physically there. Deployment of the RP6 system at the hospital was accomplished in less than a day. The wireless network at the institution was already in place. The control station setup time ranged from 1 to 4 h and was dependent primarily on the quality of the Internet connection (bandwidth) at the remote locations. Patients who visited with the RP6 on their discharge day could be discharged more than 4 h earlier than with conventional visits, thereby freeing up hospital beds on a busy med-surg floor. Patient visits during "off hours" (nights and weekends) were three times more efficient than conventional visits during these times (20 min per visit vs 40-min round trip travel + 20-min visit). Patients and nursing personnel both expressed tremendous satisfaction with the remote-presence interaction. The authors' early experience suggests a significant benefit to patients, hospitals, and physicians with the use of RP6. The implications for future development are enormous.

  7. The AGINAO Self-Programming Engine

    NASA Astrophysics Data System (ADS)

    Skaba, Wojciech

    2013-01-01

    AGINAO is a project to create a human-level artificial general intelligence system (HL AGI) embodied in the Aldebaran Robotics NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is represented by an embedded and multi-threaded control program that is self-crafted rather than hand-crafted, and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges as a result of placing the robot in a natural preschool-like environment and running a core start-up system that executes self-programming of the cognitive layer on top of the core layer. The data from the robot's sensory devices supply the training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and getting feedback. The individual self-created subroutines are supposed to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world's dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.

  8. Design of a Teleoperated Needle Steering System for MRI-guided Prostate Interventions

    PubMed Central

    Seifabadi, Reza; Iordachita, Iulian; Fichtinger, Gabor

    2013-01-01

    Accurate needle placement plays a key role in the success of prostate biopsy and brachytherapy. During percutaneous interventions, the prostate gland rotates and deforms, which may cause significant target displacement. In these cases, a straight needle trajectory is not sufficient for precise targeting. Although needle spinning and fast insertion may be helpful, they do not entirely resolve the issue. We propose robot-assisted bevel-tip needle steering under MRI guidance as a potential solution to compensate for target displacement. MRI is chosen for its superior soft-tissue contrast in prostate imaging. Due to the confined workspace of the MRI scanner and the requirement for the clinician to be present inside the MRI room during the procedure, we designed an MRI-compatible 2-DOF haptic device to command the needle steering slave robot, which operates inside the scanner. The needle steering slave robot was designed to be integrated with a previously developed pneumatically actuated transperineal robot for MRI-guided prostate needle placement. We describe the design challenges and present the conceptual design of the master and slave robots and the associated controller. PMID:24649480

  9. Tasking and control of a squad of robotic vehicles

    NASA Astrophysics Data System (ADS)

    Lewis, Christopher L.; Feddema, John T.; Klarer, Paul

    2001-09-01

    Sandia National Laboratories has developed a squad of robotic vehicles as a test-bed for investigating cooperative control strategies. The squad consists of eight RATLER vehicles and a command station. The RATLERs are medium-sized all-electric vehicles containing a PC104 stack for computation, control, and sensing. Three separate RF channels are used for communications: one for video, one for command and control, and one for differential GPS (DGPS) corrections. Using DGPS and IR proximity sensors, the vehicles are capable of autonomously traversing fairly rough terrain. The control station is a PC running Windows NT. A GUI has been developed that allows a single operator to task and monitor all eight vehicles. To date, the following mission capabilities have been demonstrated: 1. Way-Point Navigation, 2. Formation Following, 3. Perimeter Surveillance, 4. Surround and Diversion, and 5. DGPS Leap Frog. This paper describes the system and briefly outlines each mission capability. The DGPS Leap Frog capability is discussed in more detail; it is unique in that it demonstrates how cooperation allows the vehicles to navigate accurately beyond the RF communication range. One vehicle stops and uses its corrected GPS position to re-initialize its receiver, becoming the DGPS correction station for the other vehicles. Position error accumulates each time a new vehicle takes over the DGPS duties; this accumulation of error is accurately modeled as a random-walk phenomenon. This paper demonstrates how useful accuracy can be maintained beyond the vehicles' original range.
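
    The random-walk error model mentioned above implies that the position error standard deviation grows roughly with the square root of the number of handoffs; a quick Monte Carlo sketch (the per-handoff error magnitude is an assumed value):

      import random
      import statistics

      def leapfrog_error_stddev(n_handoffs, sigma=0.05, trials=10000):
          # Each handoff adds an independent zero-mean position error, so the
          # final error is a random walk whose stddev grows like sqrt(n).
          finals = [sum(random.gauss(0.0, sigma) for _ in range(n_handoffs))
                    for _ in range(trials)]
          return statistics.stdev(finals)

      for n in (1, 4, 16):
          print(n, round(leapfrog_error_stddev(n), 3))  # ~0.05, ~0.10, ~0.20 m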

  10. Reflexive obstacle avoidance for kinematically-redundant manipulators

    NASA Technical Reports Server (NTRS)

    Karlen, James P.; Thompson, Jack M., Jr.; Farrell, James D.; Vold, Havard I.

    1989-01-01

    Dexterous telerobots incorporating 17 or more degrees of freedom operating under coordinated, sensor-driven computer control will play important roles in future space operations. They will also be used on Earth in assignments like fire fighting, construction and battlefield support. A real time, reflexive obstacle avoidance system, seen as a functional requirement for such massively redundant manipulators, was developed using arm-mounted proximity sensors to control manipulator pose. The project involved a review and analysis of alternative proximity sensor technologies for space applications, the development of a general-purpose algorithm for synthesizing sensor inputs, and the implementation of a prototypical system for demonstration and testing. A 7 degree of freedom Robotics Research K-2107HR manipulator was outfitted with ultrasonic proximity sensors as a testbed, and Robotics Research's standard redundant motion control algorithm was modified such that an object detected by sensor arrays located at the elbow effectively applies a force to the manipulator elbow, normal to the axis. The arm is repelled by objects detected by the sensors, causing the robot to steer around objects in the workspace automatically while continuing to move its tool along the commanded path without interruption. The mathematical approach formulated for synthesizing sensor inputs can be employed for redundant robots of any kinematic configuration.

  11. Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANCINI,THOMAS R.

    2001-04-01

    Several years ago Sandia National Laboratories developed a prototype interior robot [1] that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities.

  12. Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano; Vaz, Luís

    2010-01-01

    This paper presents a shared-control approach for Assistive Mobile Robots (AMRs) that depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). The system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain-Computer Interfaces (BCIs), characterized by providing a small set of sparsely issued commands, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) used to train users' ability to steer a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed; it is able to combine the information provided by the local motion-planning level with the commands issued sparsely by the user. Simulation results for the proposed shared-control method are presented.
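
    One common way to realize such a shared controller is to blend the sparse user command with the local planner's command, weighted by a characterization of the user; the sketch below is a generic illustration, not the paper's controller.

      def shared_control(user_cmd, planner_cmd, user_skill):
          # Blend sparse user input with the local motion planner's command.
          # user_skill in [0, 1]: higher values give the user more authority.
          # Commands are (linear_velocity, angular_velocity) tuples.
          w = max(0.0, min(1.0, user_skill))
          return tuple(w * u + (1.0 - w) * p for u, p in zip(user_cmd, planner_cmd))

      # The user asks for a sharp left; the planner prefers a gentler turn.
      print(shared_control((0.4, 0.8), (0.2, 0.3), user_skill=0.6))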

  13. A natural-language interface to a mobile robot

    NASA Technical Reports Server (NTRS)

    Michalowski, S.; Crangle, C.; Liang, L.

    1987-01-01

    The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.

  14. Illusory movement perception improves motor control for prosthetic hands.

    PubMed

    Marasco, Paul D; Hebert, Jacqueline S; Sensinger, Jon W; Shell, Courtney E; Schofield, Jonathon S; Thumser, Zachary C; Nataraj, Raviraj; Beckler, Dylan T; Dawson, Michael R; Blustein, Dan H; Gill, Satinder; Mensh, Brett D; Granja-Vazquez, Rafael; Newcomb, Madeline D; Carey, Jason P; Orzell, Beth M

    2018-03-14

    To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement's progress. This largely nonconscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. We report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  15. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy level above 0.83 on the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
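
    The abstract does not detail the serial auditory interface logic; a minimal sketch of how a two-class detector could drive a four-command scan — with hypothetical names such as `detect_imagery` standing in for the EEG classifier — is:

```python
import itertools
import time

COMMANDS = ["forward", "left", "right", "back"]      # hypothetical command set

def serial_selector(detect_imagery, announce=print, n_cycles=3, dwell_s=2.0):
    """Present the navigation commands one at a time (auditory cues in the
    paper). A detected right-hand motor-imagery event selects the command
    currently announced; the idle state lets the scan advance.
    `detect_imagery` stands in for the two-class EEG classifier output."""
    for cmd in itertools.islice(itertools.cycle(COMMANDS),
                                n_cycles * len(COMMANDS)):
        announce(cmd)            # auditory cue in the real interface
        time.sleep(dwell_s)      # classification window
        if detect_imagery():     # True = motor imagery detected
            return cmd
    return None                  # scan timed out with no selection

# Example with a dummy classifier that selects the second cue
hits = iter([False, True])
print(serial_selector(lambda: next(hits), dwell_s=0.0))   # -> "left"
```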

  16. Control of robotic assistance using poststroke residual voluntary effort.

    PubMed

    Makowski, Nathaniel S; Knutson, Jayme S; Chae, John; Crago, Patrick E

    2015-03-01

    Poststroke hemiparesis limits the ability to reach, in part due to involuntary muscle co-activation (synergies). Robotic approaches are being developed for both therapeutic benefit and continuous assistance during activities of daily living. Robotic assistance may enable participants to exert less effort, thereby reducing expression of the abnormal co-activation patterns, which could allow participants to reach further. This study evaluated how well participants could perform a reaching task with robotic assistance that was either provided independent of effort in the vertical direction or in the sagittal plane in proportion to voluntary effort estimated from electromyograms (EMG) on the affected side. Participants who could not reach targets without assistance were enabled to reach further with assistance. Constant anti-gravity force assistance that was independent of voluntary effort did not reduce the quality of reach and enabled participants to exert less effort while maintaining different target locations. Force assistance that was proportional to voluntary effort on the affected side enabled participants to exert less effort and could be controlled to successfully reach targets, but participants had increased difficulty maintaining a stable position. These results suggest that residual effort on the affected side can produce an effective command signal for poststroke assistive devices.
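
    As a rough illustration of effort-proportional assistance (not the authors' controller), the command could be a rectified-and-smoothed EMG envelope scaled by a fitted gain and saturated for safety; all constants below are placeholders:

```python
import numpy as np

def emg_envelope(emg, fs=1000.0, tau=0.1):
    """Rectify and low-pass filter raw EMG (first-order filter with
    time constant tau) to estimate voluntary effort."""
    alpha = 1.0 / (1.0 + tau * fs)       # discrete smoothing factor
    env = np.abs(emg)
    out, y = np.empty_like(env), 0.0
    for i, x in enumerate(env):
        y += alpha * (x - y)
        out[i] = y
    return out

def assist_force(envelope, gain=20.0, f_max=40.0):
    """Proportional assistance: effort scaled by a fitted gain and
    saturated for safety (gain and cap are illustrative values)."""
    return np.clip(gain * envelope, 0.0, f_max)

emg = 0.5 * np.random.randn(2000)        # stand-in for recorded EMG
print(assist_force(emg_envelope(emg))[-5:])
```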

  17. Deictic primitives for general purpose navigation

    NASA Technical Reports Server (NTRS)

    Crismann, Jill D.

    1994-01-01

    A visually-based deictic primitive used as an elementary command set for general-purpose navigation was investigated. It was shown that a simple 'follow your eyes' scenario is sufficient for tracking a moving target. Limitations on velocity and acceleration, and modeling of the response of the mechanical systems, were enforced. Realistic robot paths were produced during the simulation. Scientists could remotely command a planetary rover to go to a particular rock formation that may be interesting. Similarly, an expert at plant maintenance could obtain diagnostic information remotely by using deictic primitives on a mobile robot. Since only visual cues are used in the deictic primitives, we could imagine that the exact same control software could be used for all of these applications.

  18. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    ERIC Educational Resources Information Center

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  19. The Ethics of Robotic, Autonomous, and Unmanned Systems Technologies in Life Saving Roles

    DTIC Science & Technology

    2017-06-12

    every day. The healthcare community is governed by esteemed practices such as moral self-governance, and other philosophical principles that define...1 movement of a casualty. These assets are under the purview of ground commanders, not the medical evacuation community.4 Thus, having... communications with ground personnel and will operate under supervisory control of any operator, requiring no specialized training.18 Also, ONR is investigating

  20. Autonomous Robot Control via Autonomy Levels (ARCAL)

    DTIC Science & Technology

    2015-08-21

    same simulated objects. VRF includes a detailed graphical user interface (GUI) front end that subscribes to objects over HLA and renders them, along...forces.html 8. Gao, H., Li, Z., and Zhao, X., "The User-defined and Function-strengthened for CGF of VR-Forces [J]." Computer Simulation, vol. 6...info Scout vehicle commands Scout vehicle Sensor measurements Mission vehicle Mission goals Operator interface Scout belief update Logistics

  1. Autonomous Robot Control via Autonomy Levels (ARCAL)

    DTIC Science & Technology

    2015-06-25

    simulated objects. VRF includes a detailed graphical user interface (GUI) front end that subscribes to objects over HLA and renders them, along...forces.html 8. Gao, H., Li, Z., and Zhao, X., "The User-defined and Function-strengthened for CGF of VR-Forces [J]." Computer Simulation, vol. 6, 2007...info Scout vehicle commands Scout vehicle Sensor measurements Mission vehicle Mission goals Operator interface Scout belief update Logistics executive

  2. A simple approach to a vision-guided unmanned vehicle

    NASA Astrophysics Data System (ADS)

    Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye

    2005-10-01

    This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
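
    A compact sketch of the described grid reduction and turn selection, with an illustrative position weighting (the paper states only that each filled square influences the turn magnitude by its position), could be:

```python
import numpy as np

def turn_command(white_mask, orange_mask, grid=10):
    """Reduce two binary segmentation masks to a 10x10 occupancy grid and
    derive a turn magnitude: filled cells on the right push the robot left
    and vice versa, with near (lower) rows weighted more heavily.
    The weighting law here is illustrative, not the BYU team's."""
    h, w = white_mask.shape
    turn = 0.0
    for mask in (white_mask, orange_mask):
        cells = mask.reshape(grid, h // grid, grid, w // grid).any(axis=(1, 3))
        for r in range(grid):
            for c in range(grid):
                if cells[r, c]:
                    lateral = (c - (grid - 1) / 2) / (grid / 2)  # -1 left .. +1 right
                    nearness = (r + 1) / grid                    # bottom rows count more
                    turn -= lateral * nearness
    return np.clip(turn, -1.0, 1.0)      # -1 = hard left, +1 = hard right

# 100x100 masks with an obstacle blob on the lower right -> steer left
white = np.zeros((100, 100), bool)
orange = np.zeros((100, 100), bool)
orange[60:90, 70:95] = True
print(turn_command(white, orange))       # negative value = turn left
```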

  3. Robonaut 2 and Watson: Cognitive Dexterity for Future Exploration

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Strawser, Philip; Farrell, Logan; Goza, S. Michael; Claunch, Charles A.; Chancey, Raphael; Potapinski, Russell

    2018-01-01

    Future exploration missions will dictate a level of autonomy never before experienced in human spaceflight. Mission plans involving the uncrewed phases of complex human spacecraft in deep space will require a coordinated autonomous capability to maintain the spacecraft when ground control is not available. One promising direction involves embedding intelligence into the system design, both through the employment of state-of-the-art systems engineering principles and through the creation of a cognitive network between a smart spacecraft or habitat and embodiments of cognitive agents. The work described here details efforts to integrate IBM's Watson and other cognitive computing services into NASA Johnson Space Center (JSC)'s Robonaut 2 (R2) anthropomorphic robot. This paper also discusses future directions this work will take. The end goal is a cognitive spacecraft management system that is able to seamlessly collect data from subsystems, determine corrective actions, and provide commands to enable those actions. These commands could go to embedded spacecraft systems or to a set of robotic assets that are tied into the cognitive system. An exciting collaboration with Woodside provides a promising Earth-bound testing analog, since controlling and maintaining normally unmanned offshore platforms poses constraints similar to those of the space missions described.

  4. Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI.

    PubMed

    Stawicki, Piotr; Gembler, Felix; Volosyak, Ivan

    2016-01-01

    Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.
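
    The ITR figure above is presumably computed with the standard Wolpaw formula for an N-class interface with accuracy P and selection time T seconds:

    $$\mathrm{ITR} = \left[\log_2 N + P\log_2 P + (1-P)\log_2\!\frac{1-P}{N-1}\right]\cdot\frac{60}{T}\ \text{bits/min}$$

    With N = 4 and P = 0.93 this gives about 1.52 bits per selection, so the reported 14.07 bits/min corresponds to roughly nine selections per minute.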

  5. Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI

    PubMed Central

    2016-01-01

    Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system. PMID:27528864

  6. An A-Mazing Logo Experiment.

    ERIC Educational Resources Information Center

    Harris, Ross J.

    1983-01-01

    Discusses what can be done with a LOGO turtle robot, how it is different from doing LOGO with the computer-screen turtle, and the educational value of the device. Sample programs are provided, including one in which the robot turtle can be commanded to react to meeting an obstacle. (JN)

  7. SpaceX_CRS14_Release_2018_125_1300_649273

    NASA Image and Video Library

    2018-05-07

    U.S. COMMERCIAL CARGO SHIP DEPARTS THE INTERNATIONAL SPACE STATION The unpiloted SpaceX Dragon cargo craft departed the International Space Station May 5 after a four-week delivery run in which thousands of pounds of supplies and science experiments arrived at the orbiting laboratory. Robotic ground controllers sent commands to release Dragon from the grasp of the Canadarm2 robotic arm, after which several firings of the Dragon’s engine sent the vehicle to a safe distance from the station. Later in the day, SpaceX flight controllers conducted a deorbit burn for Dragon, enabling it to return to Earth for a splashdown in the Pacific some 400 miles southwest of Long Beach, California. Dragon returned some two tons of vital science experiments for researchers and other critical components from the station for refurbishment.

  8. Robot geometry calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Tso, Kam; Roston, Gerald

    1988-01-01

    Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial link manipulator with arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which is usually no longer a so-called simple robot. Experimental results, obtained using a PUMA 560 with simple measurement hardware, are presented. As a result of this calibration, a precision move command was designed, integrated into a robot language, RCCL, and used in the NASA Telerobot Testbed.
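
    The paper's formulation is not reproduced in the abstract; the identification idea can be illustrated by a generic Gauss-Newton fit of geometric parameters to measured end-effector positions (toy two-link example with a numerical Jacobian, not the authors' method):

```python
import numpy as np

def calibrate(params0, fk, measured, joint_sets, iters=10):
    """Refine the geometric parameter vector so the forward kinematics
    fk(params, q) matches measured end-effector positions, via
    Gauss-Newton with a numerically differenced Jacobian."""
    p = np.array(params0, float)
    for _ in range(iters):
        residuals, rows = [], []
        for q, x_meas in zip(joint_sets, measured):
            r = x_meas - fk(p, q)
            residuals.append(r)
            J = np.empty((r.size, p.size))
            for j in range(p.size):                 # numerical Jacobian
                dp = np.zeros_like(p); dp[j] = 1e-6
                J[:, j] = (fk(p + dp, q) - fk(p, q)) / 1e-6
            rows.append(J)
        r, J = np.concatenate(residuals), np.vstack(rows)
        p += np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step
    return p

# Toy 2-link planar arm: identify the two link lengths from measured points
def fk(p, q):
    l1, l2 = p
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0]+q[1])])

qs = [np.array([0.1*i, 0.2*i]) for i in range(1, 8)]
meas = [fk(np.array([0.42, 0.31]), q) for q in qs]
print(calibrate([0.4, 0.3], fk, meas, qs))          # -> approx [0.42, 0.31]
```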

  9. Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary

    Purpose: To develop a control system to correct both translational and rotational head motion deviations in real time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled from the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.
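
    The gating logic amounts to a simple safety predicate on the residual 6DOF error; a sketch with placeholder tolerances (not the authors' settings):

```python
import math

def beam_permitted(pos_err_mm, pitch_err_deg,
                   trans_tol_mm=0.5, angle_tol_deg=0.25):
    """Safety gating: hold the Linac beam whenever the residual target
    displacement exceeds the translational or angular tolerance
    (tolerance values here are illustrative placeholders)."""
    trans_err = math.sqrt(sum(e * e for e in pos_err_mm))
    return trans_err <= trans_tol_mm and abs(pitch_err_deg) <= angle_tol_deg

print(beam_permitted((0.2, 0.1, 0.25), 0.1))   # True: within tolerance
print(beam_permitted((0.4, 0.4, 0.3), 0.1))    # False: beam gated off
```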

  10. Design and validation of an intelligent wheelchair towards a clinically-functional outcome

    PubMed Central

    2013-01-01

    Background: Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. Methods: The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. Results: User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained with the conventional driving mode. This analysis was performed with regular users who had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. Conclusions: The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode. PMID:23773851

  11. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
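
    The gaze vector method itself is not specified in the abstract; one common way to turn two eye rays into a 3-D fixation estimate is the midpoint of the shortest segment between the rays, sketched here purely as a stand-in:

```python
import numpy as np

def gaze_point_3d(o_l, d_l, o_r, d_r):
    """Estimate the 3-D fixation point as the midpoint of the shortest
    segment between the left and right gaze rays (origins o, unit
    directions d). A least-squares stand-in for the paper's gaze
    vector method, which the abstract does not detail."""
    o_l, d_l, o_r, d_r = map(np.asarray, (o_l, d_l, o_r, d_r))
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b                # ~0 when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))

# Two eyes 6 cm apart fixating a point half a metre ahead
target = np.array([0.1, 0.0, 0.5])
o_l, o_r = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
d_l = (target - o_l) / np.linalg.norm(target - o_l)
d_r = (target - o_r) / np.linalg.norm(target - o_r)
print(gaze_point_3d(o_l, d_l, o_r, d_r))   # ~ [0.1, 0.0, 0.5]
```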

  12. Adjustably Autonomous Multi-agent Plan Execution with an Internal Spacecraft Free-Flying Robot Prototype

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Nicewarner, Keith

    2006-01-01

    We present a multi-agent model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user with situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as control these agents at various levels of autonomy. This interface also permits the agents to query the user or request the user to perform tasks to help achieve the commanded goals. We conclude by describing a scenario where these two agents and a human interact to cooperatively detect, diagnose and recover from a simulated spacecraft fault.

  13. Walking robot: A design project for undergraduate students

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The design and construction of the University of Maryland walking machine was completed during the 1989 to 1990 academic year. It was required that the machine be capable of completing a number of tasks, including walking a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear box and crank arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user-operated remote tether or the onboard computer for the execution of control commands. Absolute encoders are attached to all motors to provide the control computer with information regarding the status of the motors. Long and short range infrared sensors provide the computer with feedback information regarding the machine's position relative to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.

  14. Synchronized computational architecture for generalized bilateral control of robot arms

    NASA Technical Reports Server (NTRS)

    Szakaly, Zoltan F. (Inventor)

    1991-01-01

    A master six-degree-of-freedom Force Reflecting Hand Controller (FRHC) is available at a master site where a received image displays, in essentially real time, a remote robotic manipulator which is being controlled in the corresponding six degrees of freedom by command signals which are transmitted to the remote site in accordance with the movement of the FRHC at the master site. Software is user-initiated at the master site in order to establish the basic system conditions, and then a physical movement of the FRHC in Cartesian space is reflected at the master site by six absolute numbers that are sensed, translated and computed as a difference signal relative to the earlier position. The change in position is then transmitted in that differential signal form over a high-speed synchronized bilateral communication channel which simultaneously returns robot-sensed response information to the master site as forces applied to the FRHC, so that the FRHC reflects the feel of what is taking place at the remote site. A system-wide clock rate is selected sufficiently high that the operator at the master site experiences the force-reflecting operation in real time.
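
    Conceptually, each clock tick of such a loop forwards only a position increment and returns sensed forces; the sketch below uses dummy stand-ins for the remote-site hardware:

```python
class DummySlave:
    """Stand-in for the remote manipulator interface."""
    def __init__(self): self.pos = [0.0, 0.0, 0.0]
    def move_by(self, delta):
        self.pos = [p + d for p, d in zip(self.pos, delta)]

class DummyForceSensor:
    """Stand-in for the robot's wrist force sensing."""
    def read(self): return [0.0, 0.0, -1.5]     # contact force, N

def bilateral_step(master_pos, prev_master_pos, slave, sensor):
    """One tick of the synchronized bilateral loop sketched above: the
    master transmits only the incremental position (difference signal),
    the slave applies it, and robot-sensed forces return in the same
    cycle to be rendered on the hand controller."""
    delta = [m - p for m, p in zip(master_pos, prev_master_pos)]
    slave.move_by(delta)              # forward channel: position increment
    return sensor.read()              # return channel: force feedback

slave, sensor = DummySlave(), DummyForceSensor()
f = bilateral_step([0.01, 0.0, 0.0], [0.0, 0.0, 0.0], slave, sensor)
print(slave.pos, f)
```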

  15. Telemetry distribution and processing for the second German Spacelab Mission D-2

    NASA Technical Reports Server (NTRS)

    Rabenau, E.; Kruse, W.

    1994-01-01

    For the second German Spacelab Mission D-2 all activities related to operating, monitoring and controlling the experiments on board the Spacelab were conducted from the German Space Operations Control Center (GSOC) operated by the Deutsche Forschungsanstalt fur Luft- und Raumfahrt (DLR) in Oberpfaffenhofen, Germany. The operational requirements imposed new concepts on the transfer of data between Germany and the NASA centers and the processing of data at the GSOC itself. Highlights were the upgrade of the Spacelab Data Processing Facility (SLDPF) to real time data processing, the introduction of packet telemetry and the development of the high-rate data handling front end, data processing and display systems at GSOC. For the first time, a robot on board the Spacelab was to be controlled from the ground in a closed loop environment. A dedicated forward channel was implemented to transfer the robot manipulation commands originating from the robotics experiment ground station to the Spacelab via the Orbiter's text and graphics system interface. The capability to perform telescience from an external user center was implemented. All interfaces proved successful during the course of the D-2 mission and are described in detail in this paper.

  16. Various views of the STS-103 crew on the flight deck

    NASA Image and Video Library

    2000-01-26

    STS103-334-002 (19-27 December 1999) --- Astronauts Jean-Francois Clervoy (left) and Curtis L. Brown, Jr. communicate with ground controllers on Discovery's flight deck. Brown is mission commander for NASA's third servicing mission to the Hubble Space Telescope (HST) and Clervoy is a mission specialist representing the European Space Agency (ESA). Clervoy was the prime operator of the remote manipulator system (RMS), the robotic arm on the Space Shuttle.

  17. Evaluation of head orientation and neck muscle EMG signals as three-dimensional command sources.

    PubMed

    Williams, Matthew R; Kirsch, Robert F

    2015-03-05

    High cervical spinal cord injuries result in significant functional impairments and affect both the injured individual and their family and caregivers. To help restore function to these individuals, multiple user interfaces are available to enable command and control of external devices. However, little work has been performed to assess the 3D performance of these interfaces. We investigated the performance of eight human subjects in using three user interfaces (head orientation, EMG from muscles of the head and neck, and a three-axis joystick) to command the endpoint position of a multi-axis robotic arm within a 3D workspace to perform a novel out-to-center 3D Fitts' Law style task. Two of these interfaces (head orientation and EMG from muscles of the head and neck) could realistically be used by individuals with high tetraplegia, while the joystick was evaluated as a standard of high performance. Performance metrics were developed to assess the aspects of command source performance. Data were analyzed using a mixed-model design ANOVA. Fixed effects were investigated between sources, as were interactions between index of difficulty, command source, and the five performance measures used. A 5% threshold for statistical significance was used in the analysis. The performances of the three command interfaces were rather similar, though significant differences between command sources were observed. The apparent similarity is due in large part to the sequential command strategy (i.e., one dimension of movement at a time) typically adopted by the subjects. EMG-based commands were particularly pulsatile in nature. The use of sequential commands had a significant impact on each command source's performance for movements in two or three dimensions. While the sequential nature of the commands produced by the user did not fit with Fitts' Law, the other performance measures used were able to illustrate the properties of each command source. Though pulsatile, the EMG interface performed similarly overall to head orientation; since EMG could also readily be included in a future implanted neuroprosthesis, its use as a command source for controlling an arm in 3D space is an attractive choice.
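
    Fitts' Law style tasks conventionally quantify difficulty with an index of difficulty; in the common Shannon formulation, for target distance D and target width W,

    $$ID = \log_2\!\left(\frac{D}{W} + 1\right),\qquad \text{throughput} = \frac{ID}{MT},$$

    with MT the movement time. The abstract does not state which formulation the authors adopted, and their finding that sequential commanding did not fit Fitts' Law concerns precisely this model's fit.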

  18. Robotic Exploration: The Role of Science Autonomy

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; DeVincenzi, D. (Technical Monitor)

    2002-01-01

    Historical mission operations have involved: (1) commands transmitted to the craft; (2) execution of commands; (3) return of scientific data; (4) evaluation of these data by scientists; and (5) recommendations for future mission activity by scientists. This cycle is repeated throughout the mission with command opportunities once or twice per day. For a rover, this historical cycle is not amenable to rapid long range traverses or rapid response to any novel or unexpected situations.

  19. Decoupling Identification for Serial Two-Link Two-Inertia System

    NASA Astrophysics Data System (ADS)

    Oaki, Junji; Adachi, Shuichi

    The purpose of our study is to develop a precise model by applying the technique of system identification for the model-based control of a nonlinear robot arm, taking joint elasticity into consideration. We previously proposed a systematic identification method, called “decoupling identification,” for a “SCARA-type” planar two-link robot arm with elastic joints caused by the Harmonic-drive® reduction gears. The proposed method serves as an extension of the conventional rigid-joint-model-based identification. The robot arm is treated as a serial two-link two-inertia system with nonlinearity. The decoupling identification method using link-accelerometer signals enables the serial two-link two-inertia system to be divided into two linear one-link two-inertia systems. MATLAB®'s commands for state-space model estimation are utilized in the proposed method. Physical parameters such as motor inertias, link inertias, joint-friction coefficients, and joint-spring coefficients are estimated through the identified one-link two-inertia systems using a gray-box approach. This paper describes accuracy evaluations of the decoupling identification method using the two-link arm, with closed-loop-controlled elements introduced and the amplitude setup of the identification input varied. Experimental results show that the identification method also works with closed-loop-controlled elements. Therefore, the identification method is applicable to a “PUMA-type” vertical robot arm under gravity.
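
    The paper relies on MATLAB's state-space estimation commands; as a language-neutral illustration of fitting an input-output model of appropriate order to joint data, a least-squares ARX fit (a rough stand-in, not the authors' method) is:

```python
import numpy as np

def fit_arx(u, y, na=4, nb=4):
    """Least-squares ARX fit: y[k] = sum a_i y[k-i] + sum b_j u[k-j].
    A two-inertia joint is roughly 4th order, hence the defaults."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]        # AR and input coefficients

# Identify a simulated 2nd-order plant from noisy data (toy example)
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 0.5*u[k-1] + 0.1*rng.standard_normal()
a, b = fit_arx(u, y, na=2, nb=2)
print(a)    # ~ [1.5, -0.7]
```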

  20. Thermal tracking in mobile robots for leak inspection activities.

    PubMed

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-10-09

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow constant and regular monitoring of the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to compensate for the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.
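
    A textbook particle filter cycle of the kind described — predict with a motion model, weight by the thermal measurement, resample, and report a weighted-mean estimate for the camera arm — can be sketched as follows (generic, not MAINBOT's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, measurement,
                         motion_std=5.0, meas_std=10.0):
    """One predict/update/resample cycle of a particle filter tracking
    the (x, y) image position of a hot spot in the thermal image."""
    # predict: a random-walk motion model absorbs terrain-induced jitter
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: weight by likelihood of the thermal-camera detection
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate   # estimate drives the camera arm

n = 500
particles = rng.uniform(0, 640, (n, 2))
weights = np.full(n, 1.0 / n)
for z in ([320, 240], [324, 238], [330, 236]):   # simulated detections
    particles, weights, est = particle_filter_step(
        particles, weights, np.array(z, float))
print(est)    # converges near the hot-spot position
```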

  1. Thermal Tracking in Mobile Robots for Leak Inspection Activities

    PubMed Central

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-01-01

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow constant and regular monitoring of the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to compensate for the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system. PMID:24113684

  2. EVA Roadmap: New Space Suit for the 21st Century

    NASA Technical Reports Server (NTRS)

    Yowell, Robert

    1998-01-01

    New spacesuit design considerations for the extravehicular activity (EVA) of a manned Martian exploration mission are discussed. Design considerations include: (1) regenerable CO2 removal, (2) a portable life support system (PLSS) which would include cryogenic oxygen produced from in-situ manufacture, (3) a power supply for the EVA, (4) the thermal control systems, (5) systems engineering, (6) space suit systems (materials and mobility), (7) human considerations, such as improved biomedical sensors and astronaut comfort, and (8) displays and controls, and robotic interfaces, such as rovers and telerobotic commands.

  3. Remote image analysis for Mars Exploration Rover mobility and manipulation operations

    NASA Technical Reports Server (NTRS)

    Leger, Chris; Deen, Robert G.; Bonitz, Robert G.

    2005-01-01

    NASA's Mars Exploration Rovers are two six-wheeled, 175-kg robotic vehicles which have operated on Mars for over a year as of March 2005. The rovers are controlled by teams who must understand the rover's surroundings and develop command sequences on a daily basis. The tight tactical planning timeline and ever-changing environment call for tools that allow quick assessment of potential manipulator targets and traverse goals, since command sequences must be developed in a matter of hours after receipt of new data from the rovers. Reachability maps give a visual indication of which targets are reachable by each rover's manipulator, while slope and solar energy maps show the rover operator which terrain areas are safe and unsafe from different standpoints.

  4. Robotic wheelchair commanded by SSVEP, motor imagery and word generation.

    PubMed

    Bastos, Teodiano F; Muller, Sandra M T; Benevides, Alessandro B; Sarcinelli-Filho, Mario

    2011-01-01

    This work presents a robotic wheelchair that can be commanded by a Brain-Computer Interface (BCI) through Steady-State Visual Evoked Potentials (SSVEP), Motor Imagery and Word Generation. When using SSVEP, a statistical test is used to extract the evoked response and a decision tree is used to discriminate the stimulus frequency, allowing volunteers to operate the BCI online, with hit rates varying from 60% to 100%, and guide a robotic wheelchair through an indoor environment. When using motor imagery and word generation, three mental tasks are used: imagination of left-hand movement, imagination of right-hand movement, and imagined generation of words starting with the same random letter. Linear Discriminant Analysis is used to recognize the mental tasks, and the feature extraction uses Power Spectral Density. The choice of EEG channel and frequency uses the symmetric Kullback-Leibler divergence, and a reclassification model is proposed to stabilize the classifier.
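
    An illustrative single-channel version of the PSD-plus-LDA pipeline, using scipy and scikit-learn on synthetic data (and omitting the Kullback-Leibler channel/frequency selection and the reclassification model), might look like:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def psd_features(trials, fs=250, band=(8, 30)):
    """Welch PSD in the mu/beta band for each EEG trial
    (trials: n_trials x n_samples, a single channel for brevity)."""
    f, p = welch(trials, fs=fs, nperseg=fs)
    sel = (f >= band[0]) & (f <= band[1])
    return np.log(p[:, sel])            # log-power features

# Toy data standing in for the three mental tasks (labels 0/1/2)
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((90, 500))
y = np.repeat([0, 1, 2], 30)
X_raw[y == 1] += np.sin(2*np.pi*10*np.arange(500)/250)   # fake mu rhythm

clf = LinearDiscriminantAnalysis().fit(psd_features(X_raw), y)
print(clf.score(psd_features(X_raw), y))
```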

  5. Motor-commands decoding using peripheral nerve signals: a review

    NASA Astrophysics Data System (ADS)

    Hong, Keum-Shik; Aziz, Nida; Ghafoor, Usman

    2018-06-01

    During the last few decades, substantial scientific and technological efforts have been focused on the development of neuroprostheses. The major emphasis has been on techniques for connecting the human nervous system with a robotic prosthesis via natural-feeling interfaces. The peripheral nerves provide access to highly processed and segregated neural command signals from the brain that can in principle be used to determine user intent and control muscles. If these signals could be used, they might allow near-natural and intuitive control of prosthetic limbs with multiple degrees of freedom. This review summarizes the history of neuroprosthetic interfaces and their ability to record from and stimulate peripheral nerves. We also discuss the types of interfaces available and their applications, the kinds of peripheral nerve signals that are used, and the algorithms used to decode them. Finally, we explore the prospects for future development in this area.

  6. Robotic end-effector for rewaterproofing shuttle tiles

    NASA Astrophysics Data System (ADS)

    Manouchehri, Davoud; Hansen, Joseph M.; Wu, Cheng M.; Yamamoto, Brian S.; Graham, Todd

    1992-11-01

    This paper summarizes work by Rockwell International's Space Systems Division's Robotics Group at Downey, California. The work is part of a NASA-led team effort to automate Space Shuttle rewaterproofing in the Orbiter Processing Facility at the Kennedy Space Center and the ferry facility at the Ames-Dryden Flight Research Facility. Rockwell's effort focuses on the rewaterproofing end-effector, whose function is to inject hazardous dimethylethoxysilane into thousands of ceramic tiles on the underside of the orbiter after each flight. The paper has five sections. First, it presents background on the present manual process. Second, end-effector requirements are presented, including safety and interface control. Third, a design is presented for the five end-effector systems: positioning, delivery, containment, data management, and command and control. Fourth, end-effector testing and integration into the total system are described. Lastly, future applications for this technology are discussed.

  7. Currie at RMS controls on the aft flight deck

    NASA Image and Video Library

    1998-12-05

    S88-E-5010 (12-05-98) --- Operating at a control panel on Endeavour's aft flight deck, astronaut Nancy J. Currie works with the robot arm prior to mating the 12.8-ton Unity connecting module to Endeavour's docking system. The mating took place in the late afternoon of Dec. 5. A nearby monitor provides a view of the remote manipulator system's (RMS) movements in the cargo bay. The feat marked an important step in assembling the new International Space Station. Manipulating the shuttle's 50-foot-long robot arm, Currie placed Unity just inches above the extended outer ring on Endeavour's docking mechanism, enabling Robert D. Cabana, mission commander, to fire downward maneuvering jets, locking the shuttle's docking system to one of two Pressurized Mating Adapters (PMA) attached to Unity. The mating occurred at 5:45 p.m. Central time, as Endeavour sailed over eastern China.

  8. Adaptive control of dual-arm robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    Three strategies for adaptive control of cooperative dual-arm robots are described. In the position-position control strategy, the adaptive controllers ensure that the end-effector positions of both arms track desired trajectories in Cartesian space despite unknown time-varying interaction forces exerted through the load. In the position-hybrid control strategy, the adaptive controller of one arm controls end-effector motions in the free directions and applied forces in the constraint directions, while the adaptive controller of the other arm ensures that the end-effector tracks desired position trajectories. In the hybrid-hybrid control strategy, the adaptive controllers ensure that both end-effectors track reference position trajectories while simultaneously applying desired forces on the load. In all three control strategies, the cross-coupling effects between the arms are treated as disturbances which are rejected by the adaptive controllers while following desired commands in a common frame of reference. The adaptive controllers do not require the complex mathematical model of the arm dynamics or any knowledge of the arm dynamic parameters or the load parameters such as mass and stiffness. The controllers have simple structures and are computationally fast for on-line implementation with high sampling rates.

  9. Types of verbal interaction with instructable robots

    NASA Technical Reports Server (NTRS)

    Crangle, C.; Suppes, P.; Michalowski, S.

    1987-01-01

    An instructable robot is one that accepts instruction in some natural language such as English and uses that instruction to extend its basic repertoire of actions. Such robots are quite different in conception from autonomously intelligent robots, which provide the impetus for much of the research on inference and planning in artificial intelligence. Examined here are the significant problem areas in the design of robots that learn from verbal instruction. Examples are drawn primarily from our earlier work on instructable robots and recent work on the Robotic Aid for the physically disabled. Natural-language understanding by machines is discussed, as well as the possibilities and limits of verbal instruction. The core problem of verbal instruction, namely, how to achieve specific concrete action in the robot in response to commands that express general intentions, is considered, as are two major challenges to instructability: achieving appropriate real-time behavior in the robot, and extending the robot's language capabilities.

  10. An adaptive controller for enhancing operator performance during teleoperation

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.; Tarrant, Janice M.; Mosier, Gary E.

    1989-01-01

    An adaptive controller is developed for adjusting robot arm parameters while manipulating payloads of unknown mass and inertia. The controller is tested experimentally in a master/slave configuration where the adaptive slave arm is commanded via human operator inputs from a master. Kinematically similar six-joint master and slave arms are used with the last three joints locked for simplification. After a brief initial adaptation period for the unloaded arm, the slave arm retrieves different size payloads and maneuvers them about the workspace. Comparisons are then drawn with similar tasks where the adaptation is turned off. Several simplifications of the controller dynamics are also addressed and experimentally verified.

  11. Anticipatory detection of turning in humans for intuitive control of robotic mobility assistance.

    PubMed

    Farkhatdinov, Ildar; Roehri, Nicolas; Burdet, Etienne

    2017-09-26

    Many wearable lower-limb robots for walking assistance have been developed in recent years. However, it remains unclear how they can be commanded in an intuitive and efficient way by their user. In particular, providing robotic assistance to neurologically impaired individuals in turning remains a significant challenge. The control should be safe to the users and their environment, yet yield sufficient performance and enable natural human-machine interaction. Here, we propose using the head and trunk anticipatory behaviour in order to detect the intention to turn in a natural, non-intrusive way, and use it for triggering turning movement in a robot for walking assistance. We therefore study head and trunk orientation during locomotion of healthy adults, and investigate upper-body anticipatory behaviour during turning. The collected walking and turning kinematics data are clustered using the k-means algorithm, and cross-validation tests with the k-nearest-neighbours method are used to evaluate the performance of turning detection during locomotion. Tests with seven subjects exhibited accurate turning detection. The head anticipated turning by more than 400-500 ms on average across all subjects. Overall, the proposed method detected turning 300 ms after its initiation and 1230 ms before the turning movement was completed. Using head anticipatory behaviour enabled turning to be detected about 100 ms faster, compared to turning detection using only pelvis orientation measurements. Finally, it was demonstrated that the proposed turning detection can improve the quality of human-robot interaction by improving the control accuracy and transparency.
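
    The processing chain — unsupervised clustering of kinematic features followed by cross-validated k-NN detection — can be sketched on synthetic head/trunk/pelvis yaw features (illustrative only, not the authors' exact feature set):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Feature vectors: [head yaw, trunk yaw, pelvis yaw] in degrees, sampled
# during walking; the head leads the body before a turn, per the paper.
rng = np.random.default_rng(2)
straight = rng.normal(0, 2, (200, 3))
turning = rng.normal([25, 10, 3], 4, (200, 3))   # head anticipates most
X = np.vstack([straight, turning])

# Unsupervised grouping of the kinematic data (as in the paper) ...
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# ... then k-NN detection evaluated with cross-validation
scores = cross_val_score(KNeighborsClassifier(5), X, labels, cv=5)
print(scores.mean())
```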

  12. Whole-body Motion Planning with Simple Dynamics and Full Kinematics

    DTIC Science & Technology

    2014-08-01

    optimizations can take an excessively long time to run, and may also suffer from local minima. Thus, this approach can become intractable for complex robots...motions like jumping and climbing. Additionally, the point-mass model suggests that the centroidal angular momentum is zero, which is not valid for motions...use in the DARPA Robotics Challenge. A. Jumping Our first example is to command the robot to jump off the ground, as illustrated in Fig.4. We assign

  13. Fast and Efficient Radiological Interventions via a Graphical User Interface Commanded Magnetic Resonance Compatible Robotic Device

    PubMed Central

    Özcan, Alpay; Christoforou, Eftychios; Brown, Daniel; Tsekos, Nikolaos

    2011-01-01

    The graphical user interface for an MR-compatible robotic device can display oblique MR slices in 2D and in a 3D virtual environment, along with a representation of the robotic arm, so that the intervention can be completed swiftly. By exploiting the advantages of the MR modality, the device saves time and effort, is safer for the medical staff and is more comfortable for the patient. PMID:17946067

  14. New luster for space robots and automation

    NASA Technical Reports Server (NTRS)

    Heer, E.

    1978-01-01

    Consideration is given to the potential role of robotics and automation in space transportation systems. Automation development requirements are defined for projects in space exploration, global services, space utilization, and space transport. In each category the potential automation of ground operations, on-board spacecraft operations, and in-space handling is noted. The major developments of space robot technology are noted for the 1967-1978 period. Economic aspects of ground-operation, ground command, and mission operations are noted.

  15. A bio-inspired kinematic controller for obstacle avoidance during reaching tasks with real robots.

    PubMed

    Srinivasa, Narayan; Bhattacharyya, Rajan; Sundareswara, Rashmi; Lee, Craig; Grossberg, Stephen

    2012-11-01

    This paper describes a redundant robot arm that is capable of learning to reach for targets in space in a self-organized fashion while avoiding obstacles. Self-generated movement commands that activate correlated visual, spatial and motor information are used to learn forward and inverse kinematic control models while moving in obstacle-free space using the Direction-to-Rotation Transform (DIRECT). Unlike prior DIRECT models, the learning process in this work was realized using an online Fuzzy ARTMAP learning algorithm. The DIRECT-based kinematic controller is fault tolerant and can handle a wide range of perturbations such as joint locking and the use of tools despite not having experienced them during learning. The DIRECT model was extended based on a novel reactive obstacle avoidance direction (DIRECT-ROAD) model to enable redundant robots to avoid obstacles in environments with simple obstacle configurations. However, certain configurations of obstacles in the environment prevented the robot from reaching the target with purely reactive obstacle avoidance. To address this complexity, a self-organized process of mental rehearsals of movements was modeled, inspired by human and animal experiments on reaching, to generate plans for movement execution using DIRECT-ROAD in complex environments. These mental rehearsals or plans are self-generated by using the Fuzzy ARTMAP algorithm to retrieve multiple solutions for reaching each target while accounting for all the obstacles in its environment. The key aspects of the proposed novel controller were illustrated first using simple examples. Experiments were then performed on real robot platforms to demonstrate successful obstacle avoidance during reaching tasks in real-world environments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Decoding static and dynamic arm and hand gestures from the JPL BioSleeve

    NASA Astrophysics Data System (ADS)

    Wolf, M. T.; Assad, C.; Stoica, A.; You, Kisung; Jethani, H.; Vernacchia, M. T.; Fromm, J.; Iwashita, Y.

    This paper presents methods for inferring arm and hand gestures from forearm surface electromyography (EMG) sensors and an inertial measurement unit (IMU). These sensors, together with their electronics, are packaged in an easily donned device, termed the BioSleeve, worn on the forearm. The gestures decoded from BioSleeve signals can provide natural user interface commands to computers and robots, without encumbering the user's hands and without the problems that hinder camera-based systems. Potential aerospace applications for this technology include gesture-based crew-autonomy interfaces, high-degree-of-freedom robot teleoperation, and astronauts' control of power-assisted gloves during extra-vehicular activity (EVA). We have developed techniques to interpret both static (stationary) and dynamic (time-varying) gestures from the BioSleeve signals, enabling a diverse and adaptable command library. For static gestures, we achieved over 96% accuracy on 17 gestures and nearly 100% accuracy on 11 gestures, based solely on EMG signals. Nine dynamic gestures were decoded with an accuracy of 99%. This combination of wearable EMG and IMU hardware and accurate algorithms for decoding both static and dynamic gestures thus shows promise for natural user interface applications.

  17. Stability effects of singularities in force-controlled robotic assist devices

    NASA Astrophysics Data System (ADS)

    Luecke, Greg R.

    2002-02-01

    Force feedback is being used as an interface between humans and material handling equipment to provide an intuitive method to control large and bulky payloads. Powered actuation in the lift assist device compensates for the inertial characteristics of the manipulator and the payload to provide effortless control and handling of manufacturing parts, components, and assemblies. The use of these Intelligent Assist Devices (IAD) is being explored to prevent worker injury, enhance material handling performance, and increase productivity in the workplace. The IAD also provides the capability to shape and control motion in the workspace during routine operations. Virtual barriers can be developed to protect fixed objects in the workspace, and regions can be programmed that attract the work piece to a certain position and orientation. However, the robot is still under complete control of the human operator, with the trajectory being determined and commanded using the judgment of the operator to complete a given task. In many cases, the IAD is built in a configuration that may have singular points inside the workspace. These singularities can cause problems when the unstructured trajectory commands from the human cause interaction between the IAD and the virtual walls and fixtures at positions close to these singularities. The research presented here explores the stability effects of the interactions between the powered manipulator and the virtual surfaces when controlled by the operator. Because of the flexible nature of the human decisions determining the real time work piece paths, manipulator singularities that occur in conjunction with the virtual surfaces raise stability issues in the performance around these singularities. We examine these stability issues in the context of a particular IAD configuration, and present analytic results for the performance and stability of these systems in response to the real-time trajectory modification of the human operator.

  18. Conversion and control of an all-terrain vehicle for use as an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Jacob, John S.; Gunderson, Robert W.; Fullmer, R. R.

    1998-08-01

    A systematic approach to ground vehicle automation is presented, combining low-level controls, trajectory generation and closed-loop path correction in an integrated system. Development of cooperative robotics for precision agriculture at Utah State University required the automation of a full-scale motorized vehicle. The Triton Predator 8-wheeled skid-steering all-terrain vehicle was selected for the project based on its ability to maneuver precisely and the simplicity of controlling the hydrostatic drivetrain. Low-level control was achieved by fitting an actuator on the engine throttle, actuators for the left and right drive controls, encoders on the left and right drive shafts to measure wheel speeds, and a signal pick-off on the alternator for measuring engine speed. Closed-loop control maintains a desired engine speed and tracks left and right wheel speed commands. A trajectory generator produces the wheel speed commands needed to steer the vehicle through a predetermined set of map coordinates. A planar trajectory through the points is computed by fitting a 2D cubic spline over each path segment while enforcing initial and final orientation constraints at segment endpoints. Acceleration and velocity profiles are computed for each trajectory segment, with the velocity over each segment dependent on turning radius. Left and right wheel speed setpoints are obtained by combining velocity and path curvature for each low-level timestep. The path correction algorithm uses GPS position and compass orientation information to adjust the wheel speed setpoints according to the 'crosstrack' and 'downtrack' errors and heading error. Nonlinear models of the engine and the skid-steering vehicle/ground interaction were developed for testing the integrated system in simulation. These tests led to several key design improvements which assisted final implementation on the vehicle.
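
    The last step described above, turning a segment's commanded velocity and path curvature into left and right wheel speed setpoints, reduces to a one-line kinematic relation; the track width and sign convention in the sketch below are assumptions not given in the abstract.

        # Wheel speed setpoints for a skid-steered vehicle from commanded forward
        # speed and path curvature. Track width and sign convention are assumptions.
        def wheel_speed_setpoints(v, kappa, track_width=1.2):
            """v: forward speed (m/s); kappa: path curvature (1/m), +kappa = left turn."""
            half = 0.5 * track_width
            return v * (1.0 - kappa * half), v * (1.0 + kappa * half)  # (left, right)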

  19. Towards the development of a spring-based continuum robot for neurosurgery

    NASA Astrophysics Data System (ADS)

    Kim, Yeongjin; Cheng, Shing Shin; Desai, Jaydev P.

    2015-03-01

    Brain tumors are usually life-threatening due to the uncontrolled growth of abnormal cells native to the brain or the spread of tumor cells from outside the central nervous system to the brain. The risks involved in carrying out surgery within such a complex organ can cause severe anxiety in cancer patients. However, neurosurgery, which remains one of the more effective ways of treating brain tumors focused in a confined volume, can have a tremendously increased success rate if the appropriate imaging modality is used for complete tumor removal. Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast and is the imaging modality of choice for brain tumor imaging. MRI combined with continuum soft robotics has immense potential to be a revolutionary treatment technique in the field of brain cancer. It eliminates the concern of hand tremor and enables a more precise procedure. One of the prototypes of the Minimally Invasive Neurosurgical Intracranial Robot (MINIR-II), which can be classified as a continuum soft robot, consists of a snake-like body made of three segments of rapid-prototyped plastic springs. It provides improved dexterity with higher degrees of freedom and independent joint control. It is MRI-compatible, allowing surgeons to track and determine the real-time location of the robot relative to the brain tumor target. The robot was manufactured in a single piece using rapid prototyping technology at a low cost, allowing it to be disposed of after each use. MINIR-II has two DOFs at each segment, with both joints controlled by two pairs of MRI-compatible SMA spring actuators. Preliminary motion tests have been carried out using a vision-tracking method, and the robot was able to move to different positions based on user commands.

  20. Firing Room Remote Application Software Development & Swamp Works Laboratory Robot Software Development

    NASA Technical Reports Server (NTRS)

    Garcia, Janette

    2016-01-01

    The National Aeronautics and Space Administration (NASA) is creating a way to send humans beyond low Earth orbit, and later to Mars. Kennedy Space Center (KSC) is working to make this possible by developing the Spaceport Command and Control System (SCCS), which will allow the launch of the Space Launch System (SLS). This paper focuses on the work performed by the author during the first and second parts of her internship as a remote application software developer. During the first part of the internship, the author worked on the SCCS's software application layer, assisting multiple ground subsystems teams, including Launch Accessories (LACC) and Environmental Control System (ECS), with the design, development, integration, and testing of remote control software applications. During the second part, the author worked on robot software development at the Swamp Works Laboratory, a research and technology development group that focuses on inventing new technology to help future In-Situ Resource Utilization (ISRU) missions.

  1. A study on a robot arm driven by three-dimensional trajectories predicted from non-invasive neural signals.

    PubMed

    Kim, Yoon Jae; Park, Sung Woo; Yeom, Hong Gi; Bang, Moon Suk; Kim, June Sic; Chung, Chun Kee; Kim, Sungwan

    2015-08-20

    A brain-machine interface (BMI) should be able to help people with disabilities by replacing their lost motor functions. To replace lost functions, robot arms have been developed that are controlled by invasive neural signals. Although invasive neural signals have a high spatial resolution, non-invasive neural signals are valuable because they provide an interface without surgery. Thus, various researchers have developed robot arms driven by non-invasive neural signals. However, robot arm control based on the imagined trajectory of a human hand can be more intuitive for patients. In this study, therefore, an integrated robot arm-gripper system (IRAGS) that is driven by three-dimensional (3D) hand trajectories predicted from non-invasive neural signals was developed and verified. The IRAGS was developed by integrating a six-degree-of-freedom robot arm and an adaptive robot gripper. The system was used to perform reaching and grasping motions for verification. The non-invasive neural signals, magnetoencephalography (MEG) and electroencephalography (EEG), were obtained to control the system. The 3D trajectories were predicted by multiple linear regressions. A target sphere was placed at the terminal point of the real trajectories, and the system was commanded to grasp the target at the terminal point of the predicted trajectories. The average correlation coefficient between the predicted and real trajectories in the MEG case was [Formula: see text] ([Formula: see text]). In the EEG case, it was [Formula: see text] ([Formula: see text]). The success rates in grasping the target plastic sphere were 18.75% and 7.50% with MEG and EEG, respectively. The success rates of touching the target were 52.50% and 58.75%, respectively. A robot arm driven by 3D trajectories predicted from non-invasive neural signals was implemented, and reaching and grasping motions were performed. In most cases, the robot closely approached the target, but the success rate was not very high because the non-invasive neural signals are less accurate. However, the success rate could be sufficiently improved for practical applications by using additional sensors. Robot arm control based on hand trajectories predicted from EEG would allow for portability, and the performance with EEG was comparable to that with MEG.
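
    The decoding step, multiple linear regression from neural features to 3D hand position, can be sketched as an ordinary least-squares fit. Only the linear-regression mapping is taken from the abstract; how the feature matrix is built (channels, bands, lags) is an assumption.

        # Least-squares fit of a linear decoder from neural features to 3D position.
        import numpy as np

        def fit_linear_decoder(F, P):
            """F: (n_samples, n_features) features; P: (n_samples, 3) hand positions."""
            F1 = np.hstack([F, np.ones((F.shape[0], 1))])   # append a bias column
            W, *_ = np.linalg.lstsq(F1, P, rcond=None)      # (n_features+1, 3) weights
            return W

        def predict_trajectory(F, W):
            F1 = np.hstack([F, np.ones((F.shape[0], 1))])
            return F1 @ W                                    # predicted (x, y, z) per sample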

  2. Force reflection with compliance control

    NASA Technical Reports Server (NTRS)

    Kim, Won S. (Inventor)

    1993-01-01

    Two types of force-reflecting control systems that enable high force-reflection gain are presented: position-error-based force reflection and low-pass-filtered force reflection. Both systems are combined with shared compliance control. In the position-error-based class, the position error between the commanded and the actual position of a compliantly controlled robot is used to provide force reflection. In the low-pass-filtered class, the low-pass-filtered output of the compliance control is used to provide force reflection. The increase in force-reflection gain can be more than 10-fold as compared to a conventional high-bandwidth pure force reflection system when high compliance values are used for the compliance control.
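
    Both reflection schemes can be sketched in a few lines; the gains and filter time constant below are illustrative, not values from the patent.

        # (a) Position-error-based reflection: gain times the slave's position error.
        def position_error_reflection(x_cmd, x_act, k_f=1.0):
            return k_f * (x_cmd - x_act)        # error grows as the slave meets resistance

        # (b) Low-pass-filtered reflection of the compliance-control output.
        class LowPassReflection:
            def __init__(self, tau=0.1, dt=0.001):
                self.alpha = dt / (tau + dt)    # first-order low-pass coefficient
                self.y = 0.0
            def update(self, compliance_output):
                self.y += self.alpha * (compliance_output - self.y)
                return self.y                   # filtered signal sent to the master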

  3. Towards Commanding Unmanned Ground Vehicle Movement in Unfamiliar Environments Using Unconstrained English: Initial Research Results

    DTIC Science & Technology

    2007-06-01

    constrained list of command words could be valuable in many systems, as would the ability of driverless vehicles to navigate through a route...Sensemaking in UGVs • Future Combat Systems UGV roles – Driverless trucks – Robotic mules (soldier, squad aid) – Intelligent munitions – And more! • Some

  4. Autonomous mobile robot teams

    NASA Technical Reports Server (NTRS)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  5. Spider World: A Robot Language for Learning to Program. Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL).

    ERIC Educational Resources Information Center

    Dalbey, John; Linn, Marcia

    Spider World is an interactive program designed to help individuals with no previous computer experience to learn the fundamentals of programming. The program emphasizes cognitive tasks which are central to programming and provides significant problem-solving opportunities. In Spider World, the user commands a hypothetical robot (called the…

  6. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135163 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  7. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135148 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  8. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135140 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  9. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135185 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  10. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135187 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  11. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135135 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  12. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135157 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  13. Social Studies in Motion: Learning with the Whole Person

    ERIC Educational Resources Information Center

    Schulte, Paige L.

    2005-01-01

    Total Physical Response (TPR), developed by James Asher, is defined as a teaching technique whereby a learner responds to language input with body motions. Performing a chant or the game "Robot" is an example of a TPR activity, where the teacher commands her robots to do some task in the classroom. Acting out stories and giving imperative commands…

  14. Phillips at Robotics Workstation (RWS) in US Laboratory Destiny

    NASA Image and Video Library

    2009-03-20

    S119-E-006748 (20 March 2009) --- Astronauts Lee Archambault (foreground), STS-119 commander, John Phillips and Sandra Magnus, both mission specialists, are pictured at the robotic workstation in Destiny, the U.S. laboratory. Magnus is winding down a lengthy tour in space aboard the orbiting outpost, and she will return to Earth with the Discovery crew.

  15. A Robot Hand Testbed Designed for Enhancing Embodiment and Functional Neurorehabilitation of Body Schema in Subjects with Upper Limb Impairment or Loss

    PubMed Central

    Hellman, Randall B.; Chang, Eric; Tanner, Justin; Helms Tillery, Stephen I.; Santos, Veronica J.

    2015-01-01

    Many upper limb amputees experience an incessant, post-amputation “phantom limb pain” and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech “rubber hand” illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the “BairClaw” presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger–object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden. PMID:25745391

  16. A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss.

    PubMed

    Hellman, Randall B; Chang, Eric; Tanner, Justin; Helms Tillery, Stephen I; Santos, Veronica J

    2015-01-01

    Many upper limb amputees experience an incessant, post-amputation "phantom limb pain" and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech "rubber hand" illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the "BairClaw" presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger-object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden.

  17. (abstract) Telecommunications for Mars Rovers and Robotic Missions

    NASA Technical Reports Server (NTRS)

    Cesarone, Robert J.; Hastrup, Rolf C.; Horne, William; McOmber, Robert

    1997-01-01

    Telecommunications plays a key role in all rover and robotic missions to Mars both as a conduit for command information to the mission and for scientific data from the mission. Telecommunications to the Earth may be accomplished using direct-to-Earth links via the Deep Space Network (DSN) or by relay links supported by other missions at Mars. This paper reviews current plans for missions to Mars through the 2005 launch opportunity and their capabilities in support of rover and robotic telecommunications.

  18. Mid-sized omnidirectional robot with hydraulic drive and steering

    NASA Astrophysics Data System (ADS)

    Wood, Carl G.; Perry, Trent; Cook, Douglas; Maxfield, Russell; Davidson, Morgan E.

    2003-09-01

    Through funding from the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program, Utah State University's (USU) Center for Self-Organizing and Intelligent Systems (CSOIS) has developed the T-series of omni-directional robots based on the USU omni-directional vehicle (ODV) technology. The ODV provides independent computer control of steering and drive in a single wheel assembly. By putting multiple omni-directional (OD) wheels on a chassis, a vehicle is capable of uncoupled translational and rotational motion. Previous robots in the series, the T1, T2, T3, ODIS, ODIS-T, and ODIS-S, have all used OD wheels based on electric motors. The T4 weighs approximately 1400 lbs and features a four-wheel drive configuration. Each wheel assembly consists of a hydraulic drive motor and a hydraulic steering motor. A gasoline engine is used to power both the hydraulic and electrical systems. The paper presents an overview of the mechanical design of the vehicle as well as potential uses of this technology in fielded systems.

  19. Coordination of dual robot arms using kinematic redundancy

    NASA Technical Reports Server (NTRS)

    Suh, Il Hong; Shin, Kang G.

    1988-01-01

    A method is developed to coordinate the motion of dual robot arms carrying a solid object, where the first robot (leader) grasps one end of the object rigidly and the second robot (follower) is allowed to change its grasping position at the other end of the object along the object surface while supporting the object. It is shown that this flexible grasping is equivalent to the addition of one more degree of freedom (dof), giving the follower more maneuvering capabilities. In particular, motion commands for the follower are generated by using kinematic redundancy. To show the utility and power of the method, an example system with two PUMA 560 robots carrying a beam is analyzed.

  20. Remote secure observing for the Faulkes Telescopes

    NASA Astrophysics Data System (ADS)

    Smith, Robert J.; Steele, Iain A.; Marchant, Jonathan M.; Fraser, Stephen N.; Mucke-Herzberg, Dorothea

    2004-09-01

    Since the Faulkes Telescopes are to be used by a wide variety of audiences, both a powerful engineering-level interface and simple graphical interfaces exist, giving complete remote and robotic control of the telescope over the internet. Security is extremely important to protect the health of both humans and equipment. Data integrity must also be carefully guarded for images being delivered directly into the classroom. The adopted network architecture is described along with the variety of security and intrusion detection software. We use a combination of SSL, proxies, IPSec, and both Linux iptables and Cisco IOS firewalls to ensure only authenticated and safe commands are sent to the telescopes. With an eye to a possible future global network of robotic telescopes, the system implemented is capable of scaling linearly to any moderate (of order ten) number of telescopes.

  1. Sliding Mode Control of a Slewing Flexible Beam

    NASA Technical Reports Server (NTRS)

    Wilson, David G.; Parker, Gordon G.; Starr, Gregory P.; Robinett, Rush D., III

    1997-01-01

    An output feedback sliding mode controller (SMC) is proposed to minimize the effects of vibrations of slewing flexible manipulators. A spline trajectory is used to generate ideal position and velocity commands. Constrained nonlinear optimization techniques are used to both calibrate nonlinear models and determine optimized gains to produce a rest-to-rest, residual vibration-free maneuver. Vibration-free maneuvers are important for current and future NASA space missions. This study required the development of the nonlinear dynamic system equations of motion; robust control law design; numerical implementation; system identification; and verification using the Sandia National Laboratories flexible robot testbed. Results are shown for a slewing flexible beam.
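
    A rest-to-rest command profile of the kind described can be sketched with a single smooth polynomial; the quintic blend below is an illustrative stand-in for the paper's spline trajectory.

        # Quintic rest-to-rest profile: zero velocity and acceleration at both ends,
        # so the commanded motion itself injects little vibratory excitation.
        def rest_to_rest(theta0, thetaf, T, t):
            s = min(max(t / T, 0.0), 1.0)                 # normalized time in [0, 1]
            blend = 10*s**3 - 15*s**4 + 6*s**5            # position blend
            dblend = (30*s**2 - 60*s**3 + 30*s**4) / T    # its time derivative
            return theta0 + (thetaf - theta0) * blend, (thetaf - theta0) * dblend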

  2. Integrating autonomous distributed control into a human-centric C4ISR environment

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-05-01

    This paper considers incorporating autonomy into human-centric Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) environments. Specifically, it focuses on identifying ways that current autonomy technologies can augment human control and the challenges presented by additive autonomy. Three approaches to this challenge are considered, stemming from prior work in two converging areas. The first approaches the problem as augmenting what humans currently do with automation. The second treats humans as actors within a cyber-physical system-of-systems (stemming from robotic distributed computing). A third approach combines elements of both.

  3. Direct model reference adaptive control with application to flexible robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.

    1992-01-01

    A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.
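
    For readers unfamiliar with MRAC, the sketch below shows the simplest gradient (MIT-rule) adaptation of a feedforward gain on a scalar plant; it illustrates the general idea only and is not the paper's command-generator-tracker algorithm.

        # MIT-rule adaptation of a feedforward gain on a scalar plant: a generic
        # MRAC illustration. The gain gamma and step dt are illustrative.
        def mrac_step(y_p, y_m, r, theta, gamma=0.5, dt=0.001):
            """y_p: plant output; y_m: reference-model output; r: command signal."""
            e = y_p - y_m                   # model-following error
            theta -= gamma * e * y_m * dt   # MIT rule: d(theta)/dt = -gamma * e * y_m
            return theta * r, theta         # control input, updated gain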

  4. Study of Command and Control (C&C) Structures on Integrating Unmanned Autonomous Systems (UAS) into Manned Environments

    DTIC Science & Technology

    2012-09-01

    and traveled all the way around Lake Tahoe. The self-driving cars have logged over 140,000 miles since October 9, 2010 (Google 2010) pictured here...UNDERWATER VEHICLES (AUV) STARFISH is the name given to a small team of autonomous robotic fish - a project carried out by the Acoustic Research...www.scribd.com/doc/42245301/Manual-Mine-Clearance-Book1. Accessed July 23, 2012. Google. The Self-Driving Car Logs more Miles on New Wheels. August 7

  5. STS-111 Flight Day 5 Highlights

    NASA Astrophysics Data System (ADS)

    2002-06-01

    On Flight Day 5 of STS-111, the crew of Endeavour (Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist) and the Expedition 5 crew (Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer) and Expedition 4 crew (Yury Onufrienko, Commander; Daniel Bursch, Flight Engineer; Carl Walz, Flight Engineer) are aboard the docked Endeavour and International Space Station (ISS). The ISS cameras show the station in orbit above the North African coast and the Mediterranean Sea, as Chang-Diaz and Perrin prepare for an EVA (extravehicular activity). The Canadarm 2 robotic arm is shown in motion in a wide-angle shot. The Quest Airlock is shown as it opens to allow the astronauts to exit the station. As orbital sunrise approaches, the astronauts are shown already engaged in their EVA activities. Chang-Diaz is shown removing the PDGF (Power and Data Grapple Fixture) from Endeavour's payload bay as Perrin prepares its installation position in the ISS's P6 truss structure; The MPLM is also visible. Following the successful detachment of the PDGF, Chang-Diaz carries it to the installation site as he is transported there by the robotic arm. The astronauts are then shown installing the PDGF, with video provided by helmet-mounted cameras. Following this task, the astronauts are shown preparing the MBS (Mobile Base System) for grappling by the robotic arm. It will be mounted to the Mobile Transporter (MT), which will traverse a railroad-like system along the truss structures of the ISS, and support astronaut activities as well as provide an eventual mobile base for the robotic arm.

  6. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, Aed M.; Ward, Clyde R.; Jones, Joel D.; Mallet, William R.; Harpring, Larry J.; Collins, Montenius X.; Anderson, Erin K.

    1999-01-01

    A mobile robotic system that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high-performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot according to pre-programmed paths. The operators may revise the preselected movements of the robotic system by ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds from 1/2 inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console.

  7. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, A.M.; Ward, C.R.; Jones, J.D.; Mallet, W.R.; Harpring, L.J.; Collins, M.X.; Anderson, E.K.

    1999-08-10

    A mobile robotic system is described that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high-performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot according to pre-programmed paths. The operators may revise the preselected movements of the robotic system by ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds from 1/2 inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console. 4 figs.

  8. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to strict recognition constraints. PMID:27579033
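
    The core feature extraction can be sketched as correlating each single-trial epoch against a P300 template over a range of time shifts; the template construction, lag range, and circular shift below are simplifying assumptions.

        # Correlation of a single-trial epoch against a P300 template at several
        # lags. The circular shift (np.roll) and lag range are simplifications.
        import numpy as np

        def time_shift_correlations(epoch, template, max_shift=20):
            """epoch, template: 1-D arrays of equal length; one correlation per lag."""
            feats = []
            for lag in range(-max_shift, max_shift + 1):
                feats.append(np.corrcoef(epoch, np.roll(template, lag))[0, 1])
            return np.array(feats)   # the input nodes of the ANN classifier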

  9. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to strict recognition constraints.

  10. Next Generation Robots for STEM Education andResearch at Huston Tillotson University

    DTIC Science & Technology

    2017-11-10

    dynamics through the following command: roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion: After...understood the system's natural dynamics. roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion...is created using the following command: roslaunch mtb_lab6_feedback_linearization gravity_inversion.launch Gravity inversion is just one

  11. Can Robots Help the Learning of Skilled Actions?

    PubMed Central

    Reinkensmeyer, David J.; Patton, James L.

    2010-01-01

    Learning to move skillfully requires that the motor system adjusts muscle commands based on ongoing performance errors, a process influenced by the dynamics of the task being practiced. Recent experiments from our laboratories show how robotic devices can temporarily alter task dynamics in ways that contribute to the motor learning experience, suggesting possible applications in rehabilitation and sports training. PMID:19098524

  12. A shared position/force control methodology for teleoperation

    NASA Technical Reports Server (NTRS)

    Lee, Jin S.

    1987-01-01

    A flexible and computationally efficient shared position/force control concept and its implementation in the Robot Control C Library (RCCL) are presented from the point of view of teleoperation. This methodology enables certain degrees of freedom to be position-controlled through real-time manual inputs and the remaining degrees of freedom to be force-controlled by computer. Functionally, it is a hybrid control scheme in that certain degrees of freedom are designated to be under position control and the remaining degrees of freedom under force control. However, the methodology is also a shared control scheme because some degrees of freedom can be put under manual control while the others are put under computer control. Unlike other hybrid control schemes, which process position and force commands independently, this scheme provides a force control loop built on top of a position control inner loop. This feature minimizes the computational burden and increases disturbance rejection. A simple implementation is achieved partly because the joint control servos that are part of most robots can be used to provide the position control inner loop. Along with this control scheme, several menus were implemented for the convenience of the user. The implemented control scheme was successfully demonstrated for the tasks of hinged-panel opening and peg-in-hole insertion.
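
    The layered structure, a force loop built on top of the position inner loop, can be sketched per control tick as below; the axis-selection vector and force gain are illustrative.

        # One control tick of the shared scheme. S marks each axis: 1 = manual/
        # position axis, 0 = force axis. The force loop integrates force error
        # into a position setpoint that the existing joint position servos (the
        # inner loop) then track.
        import numpy as np

        def shared_command(x_manual, x_force_prev, f_des, f_meas, S, k_f=1e-4):
            x_force = x_force_prev + k_f * (f_des - f_meas)  # force error -> position nudge
            return np.where(S == 1, x_manual, x_force)       # setpoint for the inner loop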

  13. An Intention-Driven Semi-autonomous Intelligent Robotic System for Drinking.

    PubMed

    Zhang, Zhijun; Huang, Yongqian; Chen, Siyuan; Qu, Jun; Pan, Xin; Yu, Tianyou; Li, Yuanqing

    2017-01-01

    In this study, an intention-driven semi-autonomous intelligent robotic (ID-SIR) system is designed and developed to assist severely disabled patients in living independently. The system mainly consists of a non-invasive brain-machine interface (BMI) subsystem, a robot manipulator, and a visual detection and localization subsystem. Unlike most existing systems, which are remotely controlled by joystick, head tracking, or eye tracking, the proposed ID-SIR system directly acquires the intention from the user's brain. Compared with state-of-the-art systems that work only for a specific object in a fixed place, the designed ID-SIR system can grasp any desired object in a random place chosen by a user and deliver it to his/her mouth automatically. As one of the main advantages of the ID-SIR system, the patient is only required to send one intention command for one drinking task, and the autonomous robot finishes the remaining control tasks, which greatly eases the burden on patients. Eight healthy subjects attended our experiment, which contained 10 tasks for each subject. In each task, the proposed ID-SIR system delivered the desired beverage container to the mouth of the subject and then put it back to its original position. The mean accuracy of the eight subjects was 97.5%, which demonstrated the effectiveness of the ID-SIR system.

  14. Recent testing of a micro autonomous positioning system for multi-object instrumentation

    NASA Astrophysics Data System (ADS)

    Cochrane, W. A.; Atkinson, D. C.; Bailie, T. E. C.; Dickson, C.; Lim, T.; Luo, X.; Montgomery, D. M.; Schnetler, H.; Taylor, W. D.; Wilson, B.

    2012-09-01

    A multiple pick-off mirror positioning sub-system has been developed as a solution for the deployment of mirrors within multi-object instrumentation such as the EAGLE instrument in the European Extremely Large Telescope (E-ELT). The positioning sub-system is a two-wheeled, differentially steered friction-drive robot with a footprint of approximately 20 x 20 mm. Controlled by RF communications, the robot exists in two versions: one powered by a single-cell lithium-ion battery and the other utilising a power-floor system. The robots use two brushless DC motors with 125:1 planetary gear heads for positioning in the coarse drive stages. A unique power floor allows the robots to be positioned at any location in any orientation on the focal plane. The design, linear repeatability tests, metrology and power continuity of the robot will be evaluated and presented in this paper. To gather photons from the objects of interest it is important to position the pick-off mirrors (POMs) within a sphere of confusion of less than 10 μm, with an angular alignment better than 1 mrad. The robot's potential to meet these requirements will be described through the open-loop repeatability tests conducted with a Faro laser beam tracker. Tests have involved sending the robot step commands and automatically taking continuous measurements every three seconds. Currently the robot is capable of repeatedly travelling 233 mm within 0.307 mm at 5 mm/s. An analysis of the power floor's reliability through the continuous monitoring of the voltage across the tracks with a Pico logger will also be presented.
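
    For a two-wheeled, differentially steered robot of this kind, open-loop step commands imply a dead-reckoned pose given by the standard differential-drive update; the wheel base in the sketch below is an illustrative guess, not the instrument's value.

        # Standard differential-drive dead-reckoning update over one step command.
        import math

        def pose_update(x, y, th, d_left, d_right, wheel_base=0.018):
            """d_left, d_right: wheel travel (m) during the step; wheel_base assumed."""
            d = 0.5 * (d_left + d_right)           # mean forward travel
            dth = (d_right - d_left) / wheel_base  # heading change (rad)
            x += d * math.cos(th + 0.5 * dth)      # midpoint-heading approximation
            y += d * math.sin(th + 0.5 * dth)
            return x, y, th + dth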

  15. Human machine interaction via the transfer of power and information signals

    NASA Technical Reports Server (NTRS)

    Kazerooni, H.; Foslien, W. K.; Anderson, B. J.; Hessburg, T. M.

    1989-01-01

    Robot manipulators are designed to perform tasks which would otherwise be executed by a human operator. No manipulator can even approach the speed and accuracy with which humans execute these tasks. But manipulators have the capability to exceed human ability in one particular area: strength. By any reasonable observation and experience, the human's ability to perform a variety of physical tasks is limited not by his intelligence, but by his physical strength. If, in the appropriate environment, we can more closely integrate the mechanical power of a machine with the intellectually driven human hand under the supervisory control of the human's intellect, we will have a system which is superior to the loosely integrated combination of a human and a fully automated robot found in present-day robotic systems. We must therefore develop a fundamental approach to the problem of extending human mechanical power in certain environments. Extenders will be a class of robots worn by humans to increase human mechanical ability, while the wearer's intellect remains the central intelligent control system for manipulating the extender. The human body, in physical contact with the extender, exchanges information signals and power with the extender. Commands are transferred to the extender via the contact forces between the wearer and the extender, as opposed to the joysticks (master arms), push-buttons, or keyboards used to execute such commands in previous man-amplifiers. Instead, the operator becomes an integral part of the extender while executing the task. In this unique configuration, mechanical power transfer between the human and extender occurs in addition to information signal transfer. When the wearer uses the extender to touch and manipulate an object, the extender transfers to the wearer's hand, in feedback fashion, a scaled-down value of the actual external load which the extender is manipulating. This natural feedback force on the wearer's hand allows him to feel a scaled-down value of the external forces involved in the manipulation. Extenders can be utilized to maneuver very heavy loads in factories, shipyards, airports, and construction sites. In some instances, for example, extenders can replace forklifts. The experimental results for a prototype extender are discussed.
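
    The load-sharing idea, amplify the wearer's effort while reflecting back a scaled-down external load, can be sketched in a few lines; the amplification ratio is illustrative.

        # Force balance of an idealized extender, with an illustrative amplification.
        def extender_forces(f_human, f_external, amplification=10.0):
            """f_human: force the wearer applies; f_external: load on the extender."""
            f_actuator = amplification * f_human   # the machine supplies most of the effort
            f_felt = f_external / amplification    # the wearer feels a scaled-down load
            return f_actuator, f_felt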

  16. Sensor-based fine telemanipulation for space robotics

    NASA Technical Reports Server (NTRS)

    Andrenucci, M.; Bergamasco, M.; Dario, P.

    1989-01-01

    The control of a multifingered hand slave in order to accurately exert arbitrary forces and impart small movements to a grasped object is, at present, a knotty problem in teleoperation. Although a number of articulated robotic hands have been proposed in the recent past for dexterous manipulation in autonomous robots, the possible use of such hands as slaves in teleoperated manipulation is hindered by the present lack of sensors in those hands, and (even if those sensors were available) by the inherent difficulty of transmitting to the master operator the complex sensations elicited by such sensors at the slave level. An analysis of different problems related to sensor-based telemanipulation is presented. The general sensory systems requirements for dexterous slave manipulators are pointed out and the description of a practical sensory system set-up for the developed robotic system is presented. The problem of feeding back to the human master operator stimuli that can be interpreted by his central nervous system as originated during real dexterous manipulation is then considered. Finally, some preliminary work aimed at developing an instrumented glove designed purposely for commanding the master operation and incorporating Kevlar tendons and tension sensors, is discussed.

  17. Robotics On-Board Trainer (ROBoT)

    NASA Technical Reports Server (NTRS)

    Johnson, Genevieve; Alexander, Greg

    2013-01-01

    ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time, hardware-in-the-loop (HIL) dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software uses DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.

  18. Speed-constrained three-axes attitude control using kinematic steering

    NASA Astrophysics Data System (ADS)

    Schaub, Hanspeter; Piggott, Scott

    2018-06-01

    Spacecraft attitude control solutions typically are torque-level algorithms that simultaneously control both the attitude and angular velocity tracking errors. In contrast, robotic control solutions are kinematic steering commands where rates are treated as the control variable, and a servo-tracking control subsystem is present to achieve the desired control rates. In this paper kinematic attitude steering controls are developed where an outer control loop establishes a desired angular response history to a tracking error, and an inner control loop tracks the commanded body angular rates. The overall stability relies on the separation principle of the inner and outer control loops which must have sufficiently different response time scales. The benefit is that the outer steering law response can be readily shaped to a desired behavior, such as limiting the approach angular velocity when a large tracking error is corrected. A Modified Rodrigues Parameters implementation is presented that smoothly saturates the speed response. A robust nonlinear body rate servo loop is developed which includes integral feedback. This approach provides a convenient modular framework that makes it simple to interchange outer and inner control loops to readily setup new control implementations. Numerical simulations illustrate the expected performance for an aggressive reorientation maneuver subject to an unknown external torque.
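
    The outer-loop idea, mapping attitude error smoothly into a rate command that never exceeds a speed limit, can be sketched per MRP error component as below. The gains are illustrative; the arctan shaping follows the general form the abstract describes.

        # Smoothly saturated rate command per MRP attitude-error component.
        import math

        def steering_rate(sigma_i, K1=0.2, omega_max=0.05):
            """sigma_i: one MRP error component; returns a body-rate command (rad/s)."""
            return -(2.0 * omega_max / math.pi) * math.atan(
                (math.pi / (2.0 * omega_max)) * K1 * sigma_i)

    For small errors this reduces to the proportional law -K1*sigma_i, while for large errors the commanded rate approaches the omega_max limit, which is the speed-constrained approach behavior described above.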

  19. 2017 Global Explosive Ordnance Disposal (EOD) Symposium and Exhibition. Held in North Bethesda, MD on 8-9 August 2017.

    DTIC Science & Technology

    2017-08-09

    Commander, Israeli National Police Bomb Squad, Senior CIED Analyst & Author, Mobius Reports 9:00 AM - 6:30 PM Exhibit Hall Open Salons A-E 9:30 AM...Operation Inherent Resolve • COL Frank Davis, USA, Commander, 71st EOD Group 9:00 AM - 9:45 AM Belgium Bombing of 22 March 2016 Briefing • Commander...SYNEXXUS 201 United States Bomb Technician Association 202 55th Ordnance Company (EOD) 203 RE2 Robotics 204 W.S. Darley & Company 207 Roboteam Inc. 210

  20. A simple behaviour provides accuracy and flexibility in odour plume tracking--the robotic control of sensory-motor coupling in silkmoths.

    PubMed

    Ando, Noriyasu; Kanzaki, Ryohei

    2015-12-01

    Odour plume tracking is an essential behaviour for animal survival. A fundamental strategy for this is to move upstream and then across-stream. Male silkmoths, Bombyx mori, display this strategy as a pre-programmed sequential behaviour. They walk forward (surge) in response to the female sex pheromone and perform a zigzagging 'mating dance'. Though pre-programmed, the surge direction is modulated by bilateral olfactory input and optic flow. However, the nature of the interaction between these two sensory modalities and contribution of the resultant motor command to localizing an odour source are still unknown. We evaluated the ability of the silkmoth to localize an odour source under conditions of disturbed sensory-motor coupling, using a silkmoth-driven mobile robot. The significance of the bilateral olfaction of the moth was confirmed by inverting the olfactory input to the antennae, or its motor output. Inversion of the motor output induced consecutive circling, which was inhibited by covering the visual field of the moth. This suggests that the corollary discharge from the motor command and the reafference of self-generated optic flow generate compensatory signals to guide the surge accurately. Additionally, after inverting the olfactory input, the robot successfully tracked the odour plume by using a combination of behaviours. These results indicate that accurate guidance of the reflexive surge by integrating bilateral olfactory and visual information with innate pre-programmed behaviours increases the flexibility to track an odour plume even under disturbed circumstances. © 2015. Published by The Company of Biologists Ltd.
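
    The pre-programmed sequence, surge on a pheromone hit, then zigzag, then loop, can be sketched as a small state machine; the timing thresholds below are illustrative, not measured silkmoth values.

        # Surge/zigzag/loop sequencing as a minimal state machine; a fresh pheromone
        # hit resets the program to a surge, as in the moth's reflexive behaviour.
        def next_state(state, t_since_hit, hit):
            if hit:
                return "surge", 0.0              # fresh odour: surge upstream
            if state == "surge" and t_since_hit > 0.5:
                return "zigzag", t_since_hit     # plume lost: cast side to side
            if state == "zigzag" and t_since_hit > 3.0:
                return "loop", t_since_hit       # still lost: circling search
            return state, t_since_hit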

  1. ISS Expedition 18 Sandra Magnus at Robotics Work Station (RWS)

    NASA Image and Video Library

    2008-12-05

    ISS018-E-010555 (5 Dec. 2008) --- Astronaut Sandra Magnus, Expedition 18 flight engineer, operates the Canadarm2 from the robotics work station in the Destiny laboratory of the International Space Station. Using the station's robotic arm, Magnus and astronaut Michael Fincke (out of frame), commander, relocated the ESP-3 from the Mobile Base System back to the Cargo Carrier Attachment System on the P3 truss. The ESP-3 spare parts platform was temporarily parked on the MBS to clear the path for the spacewalks during STS-126.

  2. ISS Expedition 18 Robotics Work Station (RWS) in the US Laboratory

    NASA Image and Video Library

    2008-12-05

    ISS018-E-010564 (5 Dec. 2008) --- Astronaut Michael Fincke, Expedition 18 commander, uses a computer at the robotics work station in the Destiny laboratory of the International Space Station. Using the station's robotic arm, Fincke and astronaut Sandra Magnus (out of frame), flight engineer, relocated the ESP-3 from the Mobile Base System back to the Cargo Carrier Attachment System on the P3 truss. The ESP-3 spare parts platform was temporarily parked on the MBS to clear the path for the spacewalks during STS-126.

  3. Regenerative patterning in Swarm Robots: mutual benefits of research in robotics and stem cell biology.

    PubMed

    Rubenstein, Michael; Sai, Ying; Chuong, Cheng-Ming; Shen, Wei-Min

    2009-01-01

    This paper presents a novel perspective of Robotic Stem Cells (RSCs), defined as the basic non-biological elements with stem cell like properties that can self-reorganize to repair damage to their swarming organization. Self here means that the elements can autonomously decide and execute their actions without requiring any preset triggers, commands, or help from external sources. We develop this concept for two purposes. One is to develop a new theory for self-organization and self-assembly of multi-robots systems that can detect and recover from unforeseen errors or attacks. This self-healing and self-regeneration is used to minimize the compromise of overall function for the robot team. The other is to decipher the basic algorithms of regenerative behaviors in multi-cellular animal models, so that we can understand the fundamental principles used in the regeneration of biological systems. RSCs are envisioned to be basic building elements for future systems that are capable of self-organization, self-assembly, self-healing and self-regeneration. We first discuss the essential features of biological stem cells for such a purpose, and then propose the functional requirements of robotic stem cells with properties equivalent to gene controller, program selector and executor. We show that RSCs are a novel robotic model for scalable self-organization and self-healing in computer simulations and physical implementation. As our understanding of stem cells advances, we expect that future robots will be more versatile, resilient and complex, and such new robotic systems may also demand and inspire new knowledge from stem cell biology and related fields, such as artificial intelligence and tissue engineering.

  4. Regenerative patterning in Swarm Robots: mutual benefits of research in robotics and stem cell biology

    PubMed Central

    RUBENSTEIN, MICHAEL; SAI, YING; CHUONG, CHENG-MING; SHEN, WEI-MIN

    2010-01-01

    This paper presents a novel perspective of Robotic Stem Cells (RSCs), defined as the basic non-biological elements with stem cell like properties that can self-reorganize to repair damage to their swarming organization. “Self” here means that the elements can autonomously decide and execute their actions without requiring any preset triggers, commands, or help from external sources. We develop this concept for two purposes. One is to develop a new theory for self-organization and self-assembly of multi-robots systems that can detect and recover from unforeseen errors or attacks. This self-healing and self-regeneration is used to minimize the compromise of overall function for the robot team. The other is to decipher the basic algorithms of regenerative behaviors in multi-cellular animal models, so that we can understand the fundamental principles used in the regeneration of biological systems. RSCs are envisioned to be basic building elements for future systems that are capable of self-organization, self-assembly, self-healing and self-regeneration. We first discuss the essential features of biological stem cells for such a purpose, and then propose the functional requirements of robotic stem cells with properties equivalent to gene controller, program selector and executor. We show that RSCs are a novel robotic model for scalable self-organization and self-healing in computer simulations and physical implementation. As our understanding of stem cells advances, we expect that future robots will be more versatile, resilient and complex, and such new robotic systems may also demand and inspire new knowledge from stem cell biology and related fields, such as artificial intelligence and tissue engineering. PMID:19557691

  5. Robotic Laser Coating Removal System

    DTIC Science & Technology

    2008-07-01

    Materiel Command IRR Internal Rate of Return JTP Joint Test Protocol JTR Joint Test Report LARPS Large Area Robotic Paint Stripping LASER Light...use of laser paint stripping systems is applicable to depainting activities on large off-aircraft components and weapons systems for the Air Force...

  6. How the type of input function affects the dynamic response of conducting polymer actuators

    NASA Astrophysics Data System (ADS)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-10-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators’ command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X Xiang, R Mutlu, G Alici, and W Li, 2014, “Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization”, Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators.
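
    To make the smooth-versus-sharp input distinction above concrete, here is a minimal Python sketch (assuming NumPy) contrasting a step command with a minimum-jerk ramp; the function names, the rise time, and the use of the input's rate of change as a crude proxy for electrical effort are illustrative assumptions, not details from the paper.

        import numpy as np

        def step_input(t, amplitude=1.0):
            """Step command: discontinuous higher-order derivatives at t = 0."""
            return np.where(t >= 0.0, amplitude, 0.0)

        def min_jerk_input(t, amplitude=1.0, rise_time=2.0):
            """Minimum-jerk ramp: smooth position, velocity, and acceleration."""
            s = np.clip(t / rise_time, 0.0, 1.0)        # normalized time
            return amplitude * (10 * s**3 - 15 * s**4 + 6 * s**5)

        t = np.linspace(0.0, 4.0, 401)
        step, smooth = step_input(t), min_jerk_input(t)
        # Crude proxy for electrical effort: peak rate of change of the command.
        print("peak |d(step)/dt|  :", np.max(np.abs(np.gradient(step, t))))
        print("peak |d(smooth)/dt|:", np.max(np.abs(np.gradient(smooth, t))))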

  7. A GPU-accelerated cortical neural network model for visually guided robot navigation.

    PubMed

    Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L

    2015-12-01

    Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry that is concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate-based motion energy model for area V1, and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls.
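
    The following sketch illustrates only the general flavor of flow-based steering, not the authors' V1/MT network: a hypothetical rule that turns away from the image half with stronger optic flow (nearer obstacles) while biasing toward a goal bearing. All names, gains, and sign conventions here are assumptions.

        import numpy as np

        def steer_from_flow(flow_x, goal_bearing, k_obs=1.0, k_goal=0.5):
            """Combine an obstacle signal from optic flow with a goal bearing.
            flow_x: 2-D array of horizontal image motion (px/frame).
            goal_bearing: goal direction in radians (sign convention assumed)."""
            h, w = flow_x.shape
            left = np.abs(flow_x[:, : w // 2]).mean()
            right = np.abs(flow_x[:, w // 2 :]).mean()
            # Stronger flow on one side = nearer obstacles there: turn away
            # from that side while steering toward the goal.
            return k_obs * (left - right) + k_goal * goal_bearing

        rng = np.random.default_rng(0)
        flow = rng.normal(0.0, 1.0, (48, 64))
        flow[:, :32] *= 3.0          # strong flow on the left: obstacle close
        print("turn command:", steer_from_flow(flow, goal_bearing=0.0))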

  8. Microsurgical robotic system for the deep surgical field: development of a prototype and feasibility studies in animal and cadaveric models.

    PubMed

    Morita, Akio; Sora, Shigeo; Mitsuishi, Mamoru; Warisawa, Shinichi; Suruman, Katopo; Asai, Daisuke; Arata, Junpei; Baba, Shoichi; Takahashi, Hidechika; Mochizuki, Ryo; Kirino, Takaaki

    2005-08-01

    To enhance the surgeon's dexterity and maneuverability in the deep surgical field, the authors developed a master-slave microsurgical robotic system. This concept and the results of preliminary experiments are reported in this paper. The system has a master control unit, which conveys motion commands in six degrees of freedom (X, Y, and Z directions; rotation; tip flexion; and grasping) to two arms. The slave manipulator has a hanging base with an additional six degrees of freedom; it holds a motorized operating unit with two manipulators (5 mm in diameter, 18 cm in length). The accuracy of the prototype in both shallow and deep surgical fields was compared with routine freehand microsurgery. Closure of a partial arteriotomy and complete end-to-end anastomosis of the carotid artery (CA) in the deep operative field were performed in 20 Wistar rats. Three routine surgical procedures were also performed in cadavers. The accuracy of pointing with the nondominant hand in the deep surgical field was significantly improved through the use of robotics. The authors successfully closed the partial arteriotomy and completely anastomosed the rat CAs in the deep surgical field. The time needed for stitching was significantly shortened over the course of the first 10 rat experiments. The robotic instruments also moved satisfactorily in cadavers, but the manipulators still need to be smaller to fit into the narrow intracranial space. Computer-controlled surgical manipulation will be an important tool for neurosurgery, and preliminary experiments involving this robotic system demonstrate its promising maneuverability.

  9. Telerobotic Excavator Designed to Compete in NASA's Lunabotics Mining Competition

    NASA Technical Reports Server (NTRS)

    Nash, Rodney; Santin, Cara; Yousef, Ahmed; Nguyen, Thien; Helferty, John; Pillapakkam, Shriram

    2011-01-01

    The second annual NASA Lunabotics Mining competition is to be held May 23-28, 2011. The goal of the competition is for teams of university-level students to design, build, test and compete with a fully integrated lunar excavator on a simulated lunar surface. Our team, named Lunar Solutions I, will be representing Temple University's College of Engineering in the competition. The team's main goal was to build a robot able to compete with other teams and ultimately win the competition. The main challenge of the competition was to build a wireless robot that can excavate and collect a minimum of 10 kilograms of the regolith material within 15 minutes. The robot must also be designed to operate in conditions similar to those found on the lunar surface. The design of the lunar excavator is constrained by a set of requirements determined by NASA and detailed in the competition's rulebook. The excavator must have the ability to communicate with the "main base" wirelessly over a Wi-Fi network. Human operators are located at a remote site approximately 60 meters away from the simulated lunar surface on which the robot must excavate the regolith. During the competition, the robot will operate in an area separate from the control room, referred to as the "Lunarena." From the control room, the operators will have to control the robot using visual feedback from cameras placed both within the arena and on the robot. Using this visual feedback, the human operators control the robot's movement using both keyboard and joystick commands. In order to place in the competition, a minimum of 10 kg of regolith material has to be excavated, collected, and dumped into a specific location. For that reason, the robot must be provided with an effective and powerful excavation system. Our excavator uses tracks for the drive system. After performing extensive research and trade studies, we concluded that tracks would be the most effective method for transporting the excavator. When designing the excavation system, we analyzed several design options from the previous year's competition. We decided to use a front loader to collect the material, rather than a conveyer belt system or auger. Many of the designs from last year's competition used a conveyer belt mechanism to mine regolith and dump it into a temporary storage bin placed on the robot. Using the front end loader approach allowed us to combine the scooping system and storage unit, which meant that the excavation system required less space.

  10. Integrating laboratory robots with analytical instruments--must it really be so difficult?

    PubMed

    Kramer, G W

    1990-09-01

    Creating a reliable system from discrete laboratory instruments is often a task fraught with difficulties. While many modern analytical instruments are marvels of detection and data handling, attempts to create automated analytical systems incorporating such instruments are often frustrated by their human-oriented control structures and their egocentricity. The laboratory robot, while fully susceptible to these problems, extends such compatibility issues to the physical dimensions involving sample interchange, manipulation, and event timing. The workcell concept was conceived to describe the procedure and equipment necessary to carry out a single task during sample preparation. This notion can be extended to organize all operations in an automated system. Each workcell, no matter how complex its local repertoire of functions, must be minimally capable of accepting information (commands, data), returning information on demand (status, results), and being started, stopped, and reset by a higher level device. Even the system controller should have a mode where it can be directed by instructions from a higher level.
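
    A minimal Python sketch of the workcell contract described above: every workcell, whatever its internal complexity, exposes the same small interface (accept commands, report status on demand, and be started, stopped, and reset by a higher-level device). The class and method names are illustrative, not from the paper.

        class Workcell:
            """Minimal workcell contract: accept information (commands),
            return information on demand (status, results), and be started,
            stopped, and reset by a higher-level device."""

            def __init__(self, name):
                self.name = name
                self.state = "idle"
                self.result = None

            def command(self, instruction, **params):
                """Accept a command and its parameters."""
                self.state = "busy"
                # ... dispatch to device-specific routines here ...
                self.result = f"{instruction} done"
                self.state = "idle"

            def status(self):
                """Return status and results on demand."""
                return {"name": self.name, "state": self.state, "result": self.result}

            def start(self):  self.state = "idle"
            def stop(self):   self.state = "stopped"
            def reset(self):  self.__init__(self.name)

        # A higher-level controller drives every workcell through one interface.
        cell = Workcell("weighing-station")
        cell.command("tare")
        print(cell.status())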

  11. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  12. A supplementary system for a brain-machine interface based on jaw artifacts for the bidimensional control of a robotic arm.

    PubMed

    Costa, Álvaro; Hortal, Enrique; Iáñez, Eduardo; Azorín, José M

    2014-01-01

    Non-invasive Brain-Machine Interfaces (BMIs) are being used more and more these days to design systems focused on helping people with motor disabilities. Spontaneous BMIs translate user's brain signals into commands to control devices. On these systems, by and large, 2 different mental tasks can be detected with enough accuracy. However, a large training time is required and the system needs to be adjusted on each session. This paper presents a supplementary system that employs BMI sensors, allowing the use of 2 systems (the BMI system and the supplementary system) with the same data acquisition device. This supplementary system is designed to control a robotic arm in two dimensions using electromyographical (EMG) signals extracted from the electroencephalographical (EEG) recordings. These signals are voluntarily produced by users clenching their jaws. EEG signals (with EMG contributions) were registered and analyzed to obtain the electrodes and the range of frequencies which provide the best classification results for 5 different clenching tasks. A training stage, based on the 2-dimensional control of a cursor, was designed and used by the volunteers to get used to this control. Afterwards, the control was extrapolated to a robotic arm in a 2-dimensional workspace. Although the training performed by volunteers requires 70 minutes, the final results suggest that in a shorter period of time (45 min), users should be able to control the robotic arm in 2 dimensions with their jaws. The designed system is compared with a similar 2-dimensional system based on spontaneous BMIs, and our system shows faster and more accurate performance. This is due to the nature of the control signals. Brain potentials are much more difficult to control than the electromyographical signals produced by jaw clenches. Additionally, the presented system also shows an improvement in the results compared with an electrooculographic system in a similar environment.
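
    As a rough illustration of how jaw-clench artifacts can be detected in EEG recordings, the sketch below thresholds spectral power in an assumed EMG-dominated band. The sampling rate, band edges, and threshold are placeholders that would be calibrated per user and per electrode, as the study's training stage implies.

        import numpy as np

        def band_power(x, fs, lo, hi):
            """Mean normalized spectral power of 1-D signal x in [lo, hi] Hz."""
            spec = (np.abs(np.fft.rfft(x)) / x.size) ** 2
            freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
            mask = (freqs >= lo) & (freqs <= hi)
            return spec[mask].mean()

        def detect_clench(window, fs=256.0, lo=20.0, hi=45.0, threshold=0.05):
            """Flag a jaw clench when EMG-band power exceeds a per-user threshold."""
            return band_power(window, fs, lo, hi) > threshold

        rng = np.random.default_rng(1)
        t = np.arange(256) / 256.0
        rest = rng.normal(0.0, 1.0, 256)                     # EEG-like background
        clench = rest + 4.0 * np.sin(2 * np.pi * 30.0 * t)   # added 30 Hz EMG burst
        print("rest:", detect_clench(rest), " clench:", detect_clench(clench))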

  13. A Supplementary System for a Brain-Machine Interface Based on Jaw Artifacts for the Bidimensional Control of a Robotic Arm

    PubMed Central

    Costa, Álvaro; Hortal, Enrique; Iáñez, Eduardo; Azorín, José M.

    2014-01-01

    Non-invasive Brain-Machine Interfaces (BMIs) are being used more and more these days to design systems focused on helping people with motor disabilities. Spontaneous BMIs translate user's brain signals into commands to control devices. On these systems, by and large, 2 different mental tasks can be detected with enough accuracy. However, a large training time is required and the system needs to be adjusted on each session. This paper presents a supplementary system that employs BMI sensors, allowing the use of 2 systems (the BMI system and the supplementary system) with the same data acquisition device. This supplementary system is designed to control a robotic arm in two dimensions using electromyographical (EMG) signals extracted from the electroencephalographical (EEG) recordings. These signals are voluntarily produced by users clenching their jaws. EEG signals (with EMG contributions) were registered and analyzed to obtain the electrodes and the range of frequencies which provide the best classification results for 5 different clenching tasks. A training stage, based on the 2-dimensional control of a cursor, was designed and used by the volunteers to get used to this control. Afterwards, the control was extrapolated to a robotic arm in a 2-dimensional workspace. Although the training performed by volunteers requires 70 minutes, the final results suggest that in a shorter period of time (45 min), users should be able to control the robotic arm in 2 dimensions with their jaws. The designed system is compared with a similar 2-dimensional system based on spontaneous BMIs, and our system shows faster and more accurate performance. This is due to the nature of the control signals. Brain potentials are much more difficult to control than the electromyographical signals produced by jaw clenches. Additionally, the presented system also shows an improvement in the results compared with an electrooculographic system in a similar environment. PMID:25390372

  14. Quadcopter Control Using Speech Recognition

    NASA Astrophysics Data System (ADS)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research reports a comparison of the success rates of speech recognition systems using two types of databases, an existing database and a newly created one, implemented as motion control for a quadcopter. The speech recognition system used the Mel frequency cepstral coefficient (MFCC) method for feature extraction and was trained using the recursive neural network (RNN) method. MFCC is one of the feature extraction methods most used for speech recognition, with a success rate of 80% - 95%. The existing database was used to measure the success rate of the RNN method. The new database was created in the Indonesian language, and its success rate was then compared with the results from the existing database. Sound input from the microphone was processed on a DSP module with the MFCC method to obtain characteristic values. These characteristic values were then classified by the trained RNN, whose output was a command. The command became a control input to the single-board computer (SBC), whose output was the movement of the quadcopter. On the SBC, we used the robot operating system (ROS) as the kernel (operating system).
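
    A compact sketch of the recognition pipeline's shape (feature frames in, command index out). Real MFCC frames would come from a library such as librosa (librosa.feature.mfcc); the tiny Elman-style network below uses untrained placeholder weights and an invented command vocabulary purely to show the data flow, not the paper's trained model.

        import numpy as np
        # Real features: librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

        class TinyRNN:
            """Minimal Elman-style recurrent classifier over MFCC frames.
            Weights are untrained placeholders; in practice they are learned."""
            def __init__(self, n_in=13, n_hid=32, n_cmd=5, seed=0):
                rng = np.random.default_rng(seed)
                self.Wx = rng.normal(0, 0.1, (n_hid, n_in))
                self.Wh = rng.normal(0, 0.1, (n_hid, n_hid))
                self.Wo = rng.normal(0, 0.1, (n_cmd, n_hid))

            def classify(self, frames):
                h = np.zeros(self.Wh.shape[0])
                for x in frames:                    # one 13-dim MFCC vector per frame
                    h = np.tanh(self.Wx @ x + self.Wh @ h)
                return int(np.argmax(self.Wo @ h))  # index of the predicted command

        COMMANDS = ["up", "down", "left", "right", "hover"]       # illustrative
        frames = np.random.default_rng(1).normal(size=(40, 13))   # stand-in MFCCs
        print("predicted command:", COMMANDS[TinyRNN().classify(frames)])
        # The winning command index would then be passed to the SBC (e.g., via ROS).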

  15. Smart Hand For Manipulators

    NASA Astrophysics Data System (ADS)

    Fiorini, Paolo

    1987-10-01

    Sensor-based, computer-controlled end effectors for mechanical arms are receiving more and more attention in the robotics industry, because commonly available grippers are only adequate for simple pick-and-place tasks. This paper describes the current status of the research at JPL on a smart hand for a Puma 560 robot arm. The hand is a self-contained, autonomous system, capable of executing high-level commands from a supervisory computer. The mechanism consists of parallel fingers, powered by a DC motor, and controlled by a microprocessor embedded in the hand housing. Special sensors are integrated in the hand for measuring the grasp force of the fingers, and for measuring forces and torques applied between the arm and the surrounding environment. Fingers can be exercised under position, velocity and force control modes. The single-chip microcomputer in the hand executes the tasks of communication, data acquisition and sensor-based motor control, with a sample cycle of 2 ms and a transmission rate of 9600 baud. The smart hand described in this paper represents a new development in the area of end effector design because of its multi-functionality and autonomy. It will also be a versatile test bed for experimenting with advanced control schemes for dexterous manipulation.
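
    A toy sketch of the multi-mode servo idea: one proportional loop whose setpoint and gain are switched among position, velocity, and force modes on a short cycle. The gains, units, and the 2 ms sleep are illustrative; the real hand closes this loop on its embedded microprocessor.

        import time

        class FingerController:
            """Position / velocity / force control modes on a ~2 ms servo cycle."""

            def __init__(self, kp_pos=5.0, kp_vel=2.0, kp_force=3.0):
                self.gains = {"position": kp_pos, "velocity": kp_vel, "force": kp_force}
                self.mode, self.setpoint = "position", 0.0

            def servo_step(self, measured):
                """One cycle: proportional drive toward the setpoint of
                whichever quantity the active mode regulates."""
                return self.gains[self.mode] * (self.setpoint - measured)

        ctrl = FingerController()
        ctrl.mode, ctrl.setpoint = "force", 2.0          # grasp with a 2 N target
        for measured_force in (0.0, 1.0, 1.8):
            print(ctrl.servo_step(measured_force))       # command shrinks as force rises
            time.sleep(0.002)                            # ~2 ms cycle, as in the paper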

  16. Evaluation of Teaching Signals for Motor Control in the Cerebellum during Real-World Robot Application.

    PubMed

    Pinzon Morales, Ruben Dario; Hirata, Yutaka

    2016-12-20

    Motor learning in the cerebellum is believed to entail plastic changes at synapses between parallel fibers and Purkinje cells, induced by the teaching signal conveyed in the climbing fiber (CF) input. Despite the abundant research on the cerebellum, the nature of this signal is still a matter of debate. Two types of movement error information have been proposed to be plausible teaching signals: sensory error (SE) and motor command error (ME); however, their plausibility has not been tested in the real world. Here, we conducted a comparison of different types of CF teaching signals in real-world engineering applications by using a realistic neuronal network model of the cerebellum. We employed a direct current motor (simple task) and a two-wheeled balancing robot (difficult task). We demonstrate that SE, ME or a linear combination of the two is sufficient to yield comparable performance in a simple task. When the task is more difficult, although SE slightly outperformed ME, these types of error information are all able to adequately control the robot. We categorize granule cells according to their inputs and the error signal, revealing that different granule cells are preferentially engaged for SE, ME or their combination. Thus, unlike previous theoretical and simulation studies that support either SE or ME, it is demonstrated for the first time in a real-world engineering application that both SE and ME are adequate as the CF teaching signal in a realistic computational cerebellar model, even when the control task is as difficult as stabilizing a two-wheeled balancing robot.
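
    The two candidate teaching signals, and their linear combination, can be written down in a few lines; the sketch below is only a notational summary of the alternatives compared in the study (in practice, obtaining the "ideal command" needed for ME is itself a nontrivial estimation problem).

        def sensory_error(desired_tilt, actual_tilt):
            """SE: error in sensory coordinates (e.g., the balancer's tilt angle)."""
            return desired_tilt - actual_tilt

        def motor_command_error(ideal_command, issued_command):
            """ME: error in motor coordinates (what the command should have been)."""
            return ideal_command - issued_command

        def climbing_fiber_signal(se, me, alpha=1.0):
            """Candidate CF teaching signals: pure SE (alpha=1), pure ME (alpha=0),
            or a linear combination, as compared in the study."""
            return alpha * se + (1.0 - alpha) * me

        print(climbing_fiber_signal(se=0.05, me=-0.02, alpha=0.5))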

  17. Evaluation of Teaching Signals for Motor Control in the Cerebellum during Real-World Robot Application

    PubMed Central

    Pinzon Morales, Ruben Dario; Hirata, Yutaka

    2016-01-01

    Motor learning in the cerebellum is believed to entail plastic changes at synapses between parallel fibers and Purkinje cells, induced by the teaching signal conveyed in the climbing fiber (CF) input. Despite the abundant research on the cerebellum, the nature of this signal is still a matter of debate. Two types of movement error information have been proposed to be plausible teaching signals: sensory error (SE) and motor command error (ME); however, their plausibility has not been tested in the real world. Here, we conducted a comparison of different types of CF teaching signals in real-world engineering applications by using a realistic neuronal network model of the cerebellum. We employed a direct current motor (simple task) and a two-wheeled balancing robot (difficult task). We demonstrate that SE, ME or a linear combination of the two is sufficient to yield comparable performance in a simple task. When the task is more difficult, although SE slightly outperformed ME, these types of error information are all able to adequately control the robot. We categorize granule cells according to their inputs and the error signal, revealing that different granule cells are preferentially engaged for SE, ME or their combination. Thus, unlike previous theoretical and simulation studies that support either SE or ME, it is demonstrated for the first time in a real-world engineering application that both SE and ME are adequate as the CF teaching signal in a realistic computational cerebellar model, even when the control task is as difficult as stabilizing a two-wheeled balancing robot. PMID:27999381

  18. Tank-automotive robotics

    NASA Astrophysics Data System (ADS)

    Lane, Gerald R.

    1999-07-01

    To provide an overview of Tank-Automotive Robotics. The briefing will contain program overviews & inter-relationships and technology challenges of TARDEC-managed unmanned and robotic ground vehicle programs. Specific emphasis will focus on technology developments/approaches to achieve semi-autonomous operation and inherent chassis mobility features. Programs to be discussed include: DemoIII Experimental Unmanned Vehicle (XUV), Tactical Mobile Robotics (TMR), Intelligent Mobility, Commanders Driver Testbed, Collision Avoidance, International Ground Robotics Competition (IGRC). Specifically, the paper will discuss unique exterior/outdoor challenges facing the IGRC competing teams and the synergy created between the IGRC and ongoing DoD semi-autonomous Unmanned Ground Vehicle and DoT Intelligent Transportation System programs. Sensor and chassis approaches to meet the IGRC challenges and obstacles will be shown and discussed. Shortfalls in performance to meet the IGRC challenges will be identified.

  19. Electrical power technology for robotic planetary rovers

    NASA Technical Reports Server (NTRS)

    Bankston, C. P.; Shirbacheh, M.; Bents, D. J.; Bozek, J. M.

    1993-01-01

    Power technologies which will enable a range of robotic rover vehicle missions by the end of the 1990s and beyond are discussed. The electrical power system is the most critical system for reliability and life, since all other on-board functions (mobility, navigation, command and data, communications, and the scientific payload instruments) require electrical power. The following are discussed: power generation, energy storage, power management and distribution, and thermal management.

  20. Cerebellar-inspired algorithm for adaptive control of nonlinear dielectric elastomer-based artificial muscle

    PubMed Central

    Assaf, Tareq; Rossiter, Jonathan M.; Porrill, John

    2016-01-01

    Electroactive polymer actuators are important for soft robotics, but can be difficult to control because of compliance, creep and nonlinearities. Because biological control mechanisms have evolved to deal with such problems, we investigated whether a control scheme based on the cerebellum would be useful for controlling a nonlinear dielectric elastomer actuator, a class of artificial muscle. The cerebellum was represented by the adaptive filter model, and acted in parallel with a brainstem, an approximate inverse plant model. The recurrent connections between the two allowed for direct use of sensory error to adjust motor commands. Accurate tracking of a displacement command in the actuator's nonlinear range was achieved by either semi-linear basis functions in the cerebellar model or semi-linear functions in the brainstem corresponding to recruitment in biological muscle. In addition, allowing transfer of training between cerebellum and brainstem as has been observed in the vestibulo-ocular reflex prevented the steady increase in cerebellar output otherwise required to deal with creep. The extensibility and relative simplicity of the cerebellar-based adaptive-inverse control scheme suggests that it is a plausible candidate for controlling this type of actuator. Moreover, its performance highlights important features of biological control, particularly nonlinear basis functions, recruitment and transfer of training. PMID:27655667
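
    A minimal numerical sketch of the adaptive-filter-plus-inverse-model arrangement: a fixed "brainstem" gain acts in parallel with an LMS-style "cerebellar" filter whose weights are adjusted directly by sensory error. The plant, gains, basis signals, and sign conventions below are toy assumptions, not the paper's actuator model.

        import numpy as np

        class CerebellarFilter:
            """Adaptive (LMS-style) filter: weighted basis signals, with the
            weights adjusted directly by sensory error."""
            def __init__(self, n_basis, rate=0.05):
                self.w = np.zeros(n_basis)
                self.rate = rate

            def output(self, basis):
                return float(self.w @ basis)

            def adapt(self, basis, sensory_error):
                self.w += self.rate * sensory_error * basis   # sign convention assumed

        def brainstem_inverse(command):
            """Fixed approximate inverse plant model (a bare gain, for illustration)."""
            return 0.8 * command

        def plant(drive):
            return 1.0 * drive            # toy actuator with true gain 1.0

        cb = CerebellarFilter(n_basis=3)
        target = 1.0
        basis = np.array([1.0, 0.5, -0.2])   # filtered copies of the command signal
        for _ in range(100):                  # recurrent loop: error tunes the filter
            drive = brainstem_inverse(target) + cb.output(basis)
            error = target - plant(drive)     # sensory error, fed back for learning
            cb.adapt(basis, error)
        print("residual tracking error:", round(error, 4))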

  1. A procedure concept for local reflex control of grasping

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Chang, Jeffrey

    1989-01-01

    An architecture is proposed for the control of robotic devices, and in particular of anthropomorphic hands, characterized by a hierarchical structure in which every level of the architecture contains data and control functions with varying degrees of abstraction. Bottom levels of the hierarchy interface directly with sensors and actuators, and process raw data and motor commands. Higher levels perform more symbolic types of tasks, such as application of boolean rules and general planning operations. The implementation of each layer has to be consistent with the type of operation and its requirements for real-time control. It is proposed to implement the rule level with a Boolean Artificial Neural Network characterized by a response time sufficient for producing reflex corrective action at the actuator level.

  2. Phoenix Telemetry Processor

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice

    2013-01-01

    Phxtelemproc is a C/C++ based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments including surface stereo imager (SSI); robotic arm camera (RAC); robotic arm (RA); microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance science of the near polar regions of Mars, and were used to prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.

  3. Explanation Capabilities for Behavior-Based Robot Control

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.

    2012-01-01

    A recent study that evaluated issues associated with remote interaction with an autonomous vehicle within the framework of grounding found that missing contextual information led to uncertainty in the interpretation of collected data, and so introduced errors into the command logic of the vehicle. As the vehicles became more autonomous through the activation of additional capabilities, more errors were made. This is an inefficient use of the platform, since the behavior of remotely located autonomous vehicles didn't coincide with the "mental models" of human operators. One of the conclusions of the study was that there should be a way for the autonomous vehicles to describe what action they choose and why. Robotic agents with enough self-awareness to dynamically adjust the information conveyed back to the Operations Center based on a detail level component analysis of requests could provide this description capability. One way to accomplish this is to map the behavior base of the robot into a formal mathematical framework called a cost-calculus. A cost-calculus uses composition operators to build up sequences of behaviors that can then be compared to what is observed using well-known inference mechanisms.

  4. International Space Station (ISS)

    NASA Image and Video Library

    2002-06-05

    Aboard the Space Shuttle Orbiter Endeavour, the STS-111 mission was launched on June 5, 2002 at 5:22 pm EDT from Kennedy's launch pad. On board were the STS-111 and Expedition Five crew members. Astronauts Kenneth D. Cockrell, commander; Paul S. Lockhart, pilot, and mission specialists Franklin R. Chang-Diaz and Philippe Perrin were the STS-111 crew members. Expedition Five crew members included Cosmonaut Valeri G. Korzun, commander, Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. Three space walks enabled the STS-111 crew to accomplish mission objectives: the delivery and installation of a new platform for the ISS robotic arm, the Mobile Base System (MBS) which is an important part of the Station's Mobile Servicing System allowing the robotic arm to travel the length of the Station; the replacement of a wrist roll joint on the Station's robotic arm; and unloading supplies and science experiments from the Leonardo Multi-Purpose Logistics Module, which made its third trip to the orbital outpost. Landing on June 19, 2002, the 14-day STS-111 mission was the 14th Shuttle mission to visit the ISS.

  5. Intelligent control and cooperation for mobile robots

    NASA Astrophysics Data System (ADS)

    Stingu, Petru Emanuel

    The topic discussed in this work addresses the current research being conducted at the Automation & Robotics Research Institute in the areas of UAV quadrotor control and heterogeneous multi-vehicle cooperation. Autonomy can be successfully achieved by a robot under the following conditions: the robot has to be able to acquire knowledge about the environment and itself, and it also has to be able to reason under uncertainty. The control system must react quickly to immediate challenges, but also has to slowly adapt and improve based on accumulated knowledge. The major contribution of this work is the transfer of the ADP algorithms from the purely theoretical environment to the complex real-world robotic platforms that work in real-time and in uncontrolled environments. Many solutions are adopted from those present in nature because they have been proven to be close to optimal in very different settings. For the control of a single platform, reinforcement learning algorithms are used to design suboptimal controllers for a class of complex systems that can be conceptually split in local loops with simpler dynamics and relatively weak coupling to the rest of the system. Optimality is enforced by having a global critic but the curse of dimensionality is avoided by using local actors and intelligent pre-processing of the information used for learning the optimal controllers. The system model is used for constructing the structure of the control system, but on top of that the adaptive neural networks that form the actors use the knowledge acquired during normal operation to get closer to optimal control. In real-world experiments, efficient learning is a strong requirement for success. This is accomplished by using an approximation of the system model to focus the learning for equivalent configurations of the state space. Due to the availability of only local data for training, neural networks with local activation functions are implemented. For the control of a formation of robots subjected to dynamic communication constraints, game theory is used in addition to reinforcement learning. The nodes maintain an extra set of state variables about all the other nodes that they can communicate with. The most important are trust and predictability. They are a way to incorporate knowledge acquired in the past into the control decisions taken by each node. The trust variable provides a simple mechanism for the implementation of reinforcement learning. For robot formations, potential field based control algorithms are used to generate the control commands. The formation structure changes due to the environment and due to the decisions of the nodes. The problem is one of building a graph and coalitions through distributed decisions while still reaching globally optimal behavior.

  6. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster than walking speed outside but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways: first, a standalone head stabilizer has been implemented and second, the estimates have been used to influence the search algorithm of the stereo tracking algorithm. Studies of the image motion of a tracked object indicate that the image motion of objects is suppressed while the robot crosses rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm gesture commands from the geologist.
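
    A sketch of the sensor-fusion step under simplifying assumptions: a one-axis Kalman filter with state [angle, gyro bias] that integrates the rate gyro and corrects with the accelerometer's gravity-derived angle, whose output could then drive the pan-tilt compensation. The noise parameters and the sign of the correction are placeholders, not values from the project.

        import numpy as np

        class PitchKalman:
            """1-D Kalman filter: integrate the rate gyro, correct with the
            accelerometer's gravity-derived angle. State = [angle, gyro bias]."""

            def __init__(self, q_angle=1e-4, q_bias=1e-6, r_acc=1e-2):
                self.x = np.zeros(2)              # [angle (rad), bias (rad/s)]
                self.P = np.eye(2)
                self.Q = np.diag([q_angle, q_bias])
                self.R = r_acc

            def step(self, gyro_rate, acc_angle, dt):
                # Predict: angle grows by the bias-corrected gyro rate.
                F = np.array([[1.0, -dt], [0.0, 1.0]])
                self.x = np.array([self.x[0] + dt * (gyro_rate - self.x[1]), self.x[1]])
                self.P = F @ self.P @ F.T + self.Q
                # Update with the accelerometer's absolute (but noisy) angle.
                H = np.array([1.0, 0.0])
                K = self.P @ H / (H @ self.P @ H + self.R)
                self.x = self.x + K * (acc_angle - self.x[0])
                self.P = (np.eye(2) - np.outer(K, H)) @ self.P
                return self.x[0]

        kf = PitchKalman()
        pitch = kf.step(gyro_rate=0.02, acc_angle=0.01, dt=0.005)
        pan_tilt_correction = -pitch    # counter-rotate the camera head (sign assumed)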

  7. Method and associated apparatus for capturing, servicing, and de-orbiting earth satellites using robotics

    NASA Technical Reports Server (NTRS)

    Cepollina, Frank J. (Inventor); Corbo, James E. (Inventor); Burns, Richard D. (Inventor); Jedhrich, Nicholas M. (Inventor); Holz, Jill M. (Inventor)

    2009-01-01

    This invention is a method and supporting apparatus for autonomously capturing, servicing and de-orbiting a free-flying spacecraft, such as a satellite, using robotics. The capture of the spacecraft includes the steps of optically seeking and ranging the satellite using LIDAR, and matching tumble rates, rendezvousing and berthing with the satellite. Servicing of the spacecraft may be done using supervised autonomy, which is allowing a robot to execute a sequence of instructions without intervention from a remote human-occupied location. These instructions may be packaged at the remote station in a script and uplinked to the robot for execution upon remote command giving authority to proceed. Alternately, the instructions may be generated by Artificial Intelligence (AI) logic onboard the robot. In either case, the remote operator maintains the ability to abort an instruction or script at any time as well as the ability to intervene using manual override to teleoperate the robot.
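
    The supervised-autonomy pattern described above reduces, in skeleton form, to a script runner that executes an uplinked sequence without intervention but checks an operator-settable abort flag between steps. The Python sketch below is a minimal illustration of that pattern, not flight software.

        import threading

        class ScriptExecutor:
            """Supervised autonomy: run an uplinked instruction script, but
            honor an abort from the remote operator at any time."""

            def __init__(self):
                self.abort_flag = threading.Event()

            def abort(self):
                self.abort_flag.set()        # remote operator may call this anytime

            def run(self, script):
                for name, action in script:  # each step: (name, callable)
                    if self.abort_flag.is_set():
                        print("script aborted before:", name)
                        return False
                    print("executing:", name)
                    action()
                return True

        ex = ScriptExecutor()
        script = [("open latch", lambda: None),
                  ("extend arm", lambda: None),
                  ("grapple fixture", lambda: None)]
        ex.run(script)                       # begins once authority to proceed is given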

  8. Cutaneous Feedback of Fingertip Deformation and Vibration for Palpation in Robotic Surgery.

    PubMed

    Pacchierotti, Claudio; Prattichizzo, Domenico; Kuchenbecker, Katherine J

    2016-02-01

    Despite its expected clinical benefits, current teleoperated surgical robots do not provide the surgeon with haptic feedback largely because grounded forces can destabilize the system's closed-loop controller. This paper presents an alternative approach that enables the surgeon to feel fingertip contact deformations and vibrations while guaranteeing the teleoperator's stability. We implemented our cutaneous feedback solution on an Intuitive Surgical da Vinci Standard robot by mounting a SynTouch BioTac tactile sensor to the distal end of a surgical instrument and a custom cutaneous display to the corresponding master controller. As the user probes the remote environment, the contact deformations, dc pressure, and ac pressure (vibrations) sensed by the BioTac are directly mapped to input commands for the cutaneous device's motors using a model-free algorithm based on look-up tables. The cutaneous display continually moves, tilts, and vibrates a flat plate at the operator's fingertip to optimally reproduce the tactile sensations experienced by the BioTac. We tested the proposed approach by having eighteen subjects use the augmented da Vinci robot to palpate a heart model with no haptic feedback, only deformation feedback, and deformation plus vibration feedback. Fingertip deformation feedback significantly improved palpation performance by reducing the task completion time, the pressure exerted on the heart model, and the subject's absolute error in detecting the orientation of the embedded plastic stick. Vibration feedback significantly improved palpation performance only for the seven subjects who dragged the BioTac across the model, rather than pressing straight into it.
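
    A minimal sketch of the model-free look-up-table idea: sensed values are mapped to display motor commands by interpolating through a calibration table. The table values below are invented placeholders standing in for the paper's BioTac-to-display calibration.

        import numpy as np

        # Hypothetical calibration table mapping sensed fingertip pressure to a
        # normalized motor command for the cutaneous display.
        SENSED_PRESSURE = np.array([0.0, 0.2, 0.5, 1.0, 2.0])   # sensor units
        MOTOR_POSITION  = np.array([0.0, 0.1, 0.3, 0.7, 1.0])   # normalized command

        def pressure_to_motor(p):
            """Piecewise-linear interpolation through the calibration table."""
            return np.interp(p, SENSED_PRESSURE, MOTOR_POSITION)

        print(pressure_to_motor(0.35))   # command between the 0.2 and 0.5 entries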

  9. iss050e059529

    NASA Image and Video Library

    2017-03-24

    iss050e059529 (03/24/2017) --- Flight Engineer Thomas Pesquet of ESA (European Space Agency) is seen performing maintenance on the Dextre robot during a spacewalk. Pesquet and Expedition 50 Commander Shane Kimbrough of NASA conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  10. Speech and gesture interfaces for squad-level human-robot teaming

    NASA Astrophysics Data System (ADS)

    Harris, Jonathan; Barber, Daniel

    2014-06-01

    As the military increasingly adopts semi-autonomous unmanned systems for military operations, utilizing redundant and intuitive interfaces for communication between Soldiers and robots is vital to mission success. Currently, Soldiers use a common lexicon to verbally and visually communicate maneuvers between teammates. In order for robots to be seamlessly integrated within mixed-initiative teams, they must be able to understand this lexicon. Recent innovations in gaming platforms have led to advancements in speech and gesture recognition technologies, but the reliability of these technologies for enabling communication in human robot teaming is unclear. The purpose of the present study is to investigate the performance of Commercial-Off-The-Shelf (COTS) speech and gesture recognition tools in classifying a Squad Level Vocabulary (SLV) for a spatial navigation reconnaissance and surveillance task. The SLV for this study was based on findings from a survey conducted with Soldiers at Fort Benning, GA. The items of the survey focused on the communication between the Soldier and the robot, specifically in regards to verbally instructing them to execute reconnaissance and surveillance tasks. Resulting commands, identified from the survey, were then converted to equivalent arm and hand gestures, leveraging existing visual signals (e.g. U.S. Army Field Manual for Visual Signaling). A study was then run to test the ability of commercially available automated speech recognition technologies and a gesture recognition glove to classify these commands in a simulated intelligence, surveillance, and reconnaissance task. This paper presents classification accuracy of these devices for both speech and gesture modalities independently.

  11. Development and Control of the Naval Postgraduate School Planar Autonomous Docking Simulator (NPADS)

    NASA Astrophysics Data System (ADS)

    Porter, Robert D.

    2002-09-01

    The objective of this thesis was to design, construct, and develop the initial autonomous control algorithm for the NPS Planar Autonomous Docking Simulator (NPADS). The effort included hardware design, fabrication, installation, and integration; mass property determination; and the development and testing of control laws utilizing MATLAB and Simulink for modeling and LabView for NPADS control. The NPADS vehicle uses air pads and a granite table to simulate a 2-D, drag-free, zero-g space environment. It is a completely self-contained vehicle equipped with eight cold-gas, bang-bang type thrusters and a reaction wheel for motion control. A "star sensor" CCD camera locates the vehicle on the table, while a color CCD docking camera and two robotic arms will locate and dock with a target vehicle. The on-board computer system leverages PXI technology and a single source, simplifying systems integration. The vehicle is powered by two lead-acid batteries for completely autonomous operation. A graphical user interface and wireless Ethernet enable the user to command and monitor the vehicle from a remote command and data acquisition computer. Two control algorithms were developed; they allow the user to either control the thrusters and reaction wheel manually or simply specify a desired location and rotation angle.
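
    A sketch of the two actuation styles mentioned above: a deadband bang-bang law for the cold-gas thruster pairs and a smooth PD law for the reaction wheel. The gains, deadband width, and switching function are illustrative assumptions, not NPADS parameters.

        def bang_bang_torque(angle_error, rate, deadband=0.01, k_rate=0.5, thrust=1.0):
            """Bang-bang attitude control: fire a thruster pair only when the
            switching function leaves the deadband (saves cold-gas propellant)."""
            s = angle_error + k_rate * rate       # simple switching line
            if s > deadband:
                return -thrust                    # fire clockwise pair
            if s < -deadband:
                return +thrust                    # fire counter-clockwise pair
            return 0.0                            # inside the deadband: coast

        def wheel_torque(angle_error, rate, kp=2.0, kd=1.5):
            """Fine pointing could instead be delegated to the reaction wheel."""
            return -(kp * angle_error + kd * rate)    # smooth PD torque command

        print(bang_bang_torque(angle_error=0.05, rate=0.0))   # fires to reduce error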

  12. Mapping From an Instrumented Glove to a Robot Hand

    NASA Technical Reports Server (NTRS)

    Goza, Michael

    2005-01-01

    An algorithm has been developed to solve the problem of mapping from (1) a glove instrumented with joint-angle sensors to (2) an anthropomorphic robot hand. Such a mapping is needed to generate control signals to make the robot hand mimic the configuration of the hand of a human attempting to control the robot. The mapping problem is complicated by uncertainties in sensor locations caused by variations in sizes and shapes of hands and variations in the fit of the glove. The present mapping algorithm is robust in the face of these uncertainties, largely because it includes a calibration sub-algorithm that inherently adapts the mapping to the specific hand and glove, without need for measuring the hand and without regard for goodness of fit. The algorithm utilizes a forward-kinematics model of the glove derived from documentation provided by the manufacturer of the glove. In this case, forward-kinematics model signifies a mathematical model of the glove fingertip positions as functions of the sensor readings. More specifically, given the sensor readings, the forward-kinematics model calculates the glove fingertip positions in a Cartesian reference frame nominally attached to the palm. The algorithm also utilizes an inverse-kinematics model of the robot hand. In this case, inverse-kinematics model signifies a mathematical model of the robot finger-joint angles as functions of the robot fingertip positions. Again, more specifically, the inverse-kinematics model calculates the finger-joint commands needed to place the fingertips at specified positions in a Cartesian reference frame that is attached to the palm of the robot hand and that nominally corresponds to the Cartesian reference frame attached to the palm of the glove. Initially, because of the aforementioned uncertainties, the glove fingertip positions calculated by the forward-kinematics model in the glove Cartesian reference frame cannot be expected to match the robot fingertip positions in the robot-hand Cartesian reference frame. A calibration must be performed to make the glove and robot-hand fingertip positions correspond more precisely. The calibration procedure involves a few simple hand poses designed to provide well-defined fingertip positions. One of the poses is a fist. In each of the other poses, a finger touches the thumb. The calibration sub-algorithm uses the sensor readings from these poses to modify the kinematical models to make the two sets of fingertip positions agree more closely.
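
    One way to picture the calibration step is as a least-squares fit from glove-frame fingertip positions to robot-frame positions using the few calibration poses; the affine fit below is a simplified stand-in for the article's kinematic-model adaptation, with synthetic data in place of real pose measurements.

        import numpy as np

        def calibrate(glove_pts, robot_pts):
            """Least-squares affine map from glove fingertip positions (from the
            glove's forward kinematics) onto robot fingertip positions, fitted
            over the calibration poses (fist, finger-to-thumb touches)."""
            A = np.hstack([glove_pts, np.ones((len(glove_pts), 1))])   # homogeneous
            M, *_ = np.linalg.lstsq(A, robot_pts, rcond=None)
            return M                                                    # (4, 3)

        def map_fingertip(M, glove_pt):
            """Corrected robot-frame target; this would then feed the robot
            hand's inverse-kinematics model to get finger-joint commands."""
            return np.append(glove_pt, 1.0) @ M

        glove = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
        robot = glove * 1.2 + 0.01            # synthetic "true" correspondence
        M = calibrate(glove, robot)
        print(map_fingertip(M, np.array([0.05, 0.05, 0.0])))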

  13. Robust, Flexible Motion Control for the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Biesiadecki, Jeffrey

    2007-01-01

    The Mobility Flight Software, running on computers aboard the Mars Exploration Rover (MER) robotic vehicles Spirit and Opportunity, affords the robustness and flexibility of control to enable safe and effective operation of these vehicles in traversing natural terrain. It can make the vehicles perform specific maneuvers commanded from Earth, and/or can autonomously administer multiple aspects of mobility, including choice of motion, measurement of actual motion, and even selection of targets to be approached. Motion of a vehicle can be commanded by use of multiple layers of control, ranging from motor control at a low level, through direct drive operations (e.g., motion along a circular arc, motion along a straight line, or a turn in place) at an intermediate level, to goal-position driving (that is, driving to a specified location) at a high level. The software can also perform high-level assessment of terrain and selection of safe paths across the terrain: this involves processing of the digital equivalent of a local traversability map generated from images acquired by stereoscopic pairs of cameras aboard the vehicles. Other functions of the software include interacting with the rest of the MER flight software and performing safety checks.
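
    The layered command structure can be sketched as a handful of drive primitives for a differential-drive vehicle, with goal-position driving decomposed into the lower-level primitives. The track width, speeds, and the crude goto logic below are illustrative assumptions, not MER parameters or flight code.

        import math

        TRACK = 0.6   # wheelbase width in meters (illustrative value)

        def arc(speed, radius):
            """Direct-drive primitive: motion along a circular arc.
            Returns (left, right) wheel speeds for a differential drive."""
            omega = speed / radius
            return speed - omega * TRACK / 2, speed + omega * TRACK / 2

        def straight(speed):
            """Direct-drive primitive: motion along a straight line."""
            return speed, speed

        def turn_in_place(rate):
            return -rate * TRACK / 2, rate * TRACK / 2

        def goto(x, y, heading):
            """Goal-position driving: decompose into the primitives above,
            then re-assess after each short step."""
            bearing = math.atan2(y, x)
            return [("turn", turn_in_place(math.copysign(0.1, bearing - heading))),
                    ("drive", straight(0.05))]

        print(arc(0.05, 2.0))   # gentle arc: one wheel slightly faster than the other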

  14. Defining Soldier Intent in a Human-Robot Natural Language Interaction Context

    DTIC Science & Technology

    2017-10-01

    this burden on the human and expand the scope of human–robot operations, this project investigates fundamental research issues in the autonomous...attempted to devise a quantitative metric for the Shared Interpretation of Commander’s Intent (SICI). The authors’ background research indicated that...Another interesting set of results was the cases where the battalion and company commanders disagreed on the meaning of key terms, such as “delay”, which

  15. HERRO Mission to Mars Using Telerobotic Surface Exploration from Orbit

    NASA Technical Reports Server (NTRS)

    Oleson, Steven R.; Landis, Geoffrey A.; McGuire, Melissa L.; Schmidt, George R.

    2013-01-01

    This paper presents a concept for a human mission to Mars orbit that features direct robotic exploration of the planet's surface via teleoperation from orbit. This mission is a good example of Human Exploration using Real-time Robotic Operations (HERRO), an exploration strategy that refrains from sending humans to the surfaces of planets with large gravity wells. HERRO avoids the need for complex and expensive man-rated lander/ascent vehicles and surface systems. Additionally, the humans are close enough to the surface to effectively eliminate the two-way communication latency that constrains typical robotic space missions, thus allowing real-time command and control of surface operations and experiments by the crew. Through use of state-of-the-art telecommunications and robotics, HERRO provides the cognitive and decision-making advantages of having humans at the site of study for only a fraction of the cost of conventional human surface missions. It is very similar to how oceanographers and oil companies use telerobotic submersibles to work in inaccessible areas of the ocean, and represents a more expedient, near-term step prior to landing humans on Mars and other large planetary bodies. Results suggest that a single HERRO mission with six crew members could achieve the same exploratory and scientific return as three conventional crewed missions to the Mars surface.

  16. Graphical programming: A systems approach for telerobotic servicing of space assets

    NASA Technical Reports Server (NTRS)

    Pinkerton, James T.; Mcdonald, Michael J.; Palmquist, Robert D.; Patten, Richard

    1994-01-01

    Satellite servicing is in many ways analogous to subsea robotic servicing in the late 1970s. A cost effective, reliable, telerobotic capability had to be demonstrated before the oil companies invested money in deep water robot serviceable production facilities. In the same sense, aeronautic engineers will not design satellites for telerobotic servicing until such a quantifiable capability has been demonstrated. New space servicing systems will be markedly different than existing space robot systems. Past space manipulator systems, including the Space Shuttle's robot arm, have used master/slave technologies with poor fidelity, slow operating speeds and most importantly, in-orbit human operators. In contrast, new systems will be capable of precision operations, conducted at higher rates of speed, and be commanded via ground-control communication links. Challenges presented by this environment include achieving a mandated level of robustness and dependability, radiation hardening, minimum weight and power consumption, and a system which accommodates the inherent communication delay between the ground station and the satellite. There is also a need for a user interface which is easy to use, ensures collision free motions, and is capable of adjusting to an unknown workcell (for repair operations the condition of the satellite may not be known in advance). This paper describes the novel technologies required to deliver such a capability.

  17. Personal mobility and manipulation using robotics, artificial intelligence and advanced control.

    PubMed

    Cooper, Rory A; Ding, Dan; Grindle, Garrett G; Wang, Hongwu

    2007-01-01

    Recent advancements of technologies, including computation, robotics, machine learning, communication, and miniaturization technologies, bring us closer to futuristic visions of compassionate intelligent devices. The missing element is a basic understanding of how to relate human functions (physiological, physical, and cognitive) to the design of intelligent devices and systems that aid and interact with people. Our stakeholder and clinician consultants identified a number of mobility barriers that have been intransigent to traditional approaches. The most important physical obstacles are stairs, steps, curbs, doorways (doors), rough/uneven surfaces, weather hazards (snow, ice), crowded/cluttered spaces, and confined spaces. Focus group participants suggested a number of ways to make interaction simpler, including natural language interfaces such as the ability to say "I want a drink", a library of high level commands (open a door, park the wheelchair, ...), and a touchscreen interface with images so the user could point and use other gestures.

  18. Task sequence planning in a robot workcell using AND/OR nets

    NASA Technical Reports Server (NTRS)

    Cao, Tiehua; Sanderson, Arthur C.

    1991-01-01

    An approach to task sequence planning for a generalized robotic manufacturing or material handling workcell is described. Given the descriptions of the objects in this system and all feasible geometric relationships among these objects, an AND/OR net which describes the relationships of all feasible geometric states and associated feasibility criteria for net transitions is generated. This AND/OR net is mapped into a Petri net which incorporates all feasible sequences of operations. The resulting Petri net is shown to be bounded and have guaranteed properties of liveness, safeness, and reversibility. Sequences are found from the reachability tree of the Petri net. Feasibility criteria for net transitions may be used to generate an extended Petri net representation of lower level command sequences. The resulting Petri net representation may be used for on-line scheduling and control of the system of feasible sequences. A simulation example of the sequences is described.
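
    Because the resulting Petri net is safe (at most one token per place), a marking can be represented as a set of places. The sketch below builds a reachability tree for an invented two-transition workcell net by breadth-first search; each path in the tree is a feasible operation sequence of the kind the paper derives.

        from collections import deque

        # Petri net as transitions: {name: (consumed_places, produced_places)}
        NET = {"pick":  ({"part_at_feeder", "gripper_free"}, {"part_in_gripper"}),
               "place": ({"part_in_gripper"}, {"part_at_fixture", "gripper_free"})}

        def enabled(marking, t):
            return NET[t][0] <= marking                    # all input places marked

        def fire(marking, t):
            pre, post = NET[t]
            return frozenset((marking - pre) | post)

        def reachability(initial):
            """Breadth-first reachability tree; paths are feasible task sequences."""
            seen, queue = {initial: []}, deque([initial])
            while queue:
                m = queue.popleft()
                for t in NET:
                    if enabled(m, t):
                        m2 = fire(m, t)
                        if m2 not in seen:
                            seen[m2] = seen[m] + [t]
                            queue.append(m2)
            return seen

        start = frozenset({"part_at_feeder", "gripper_free"})
        for marking, seq in reachability(start).items():
            print(seq, "->", sorted(marking))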

  19. Simulation of Hazards and Poses for a Rocker-Bogie Rover

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Norris, Jeffrey; Powell, Mark; Tharp, Gregory

    2004-01-01

    Provisions for specification of hazards faced by a robotic vehicle (rover) equipped with a rocker-bogie suspension, for prediction of collisions between the vehicle and the hazards, and for simulation of poses of the vehicle at selected positions on the terrain have been incorporated into software that simulates the movements of the vehicle on planned paths across the terrain. The software in question is that of the Web Interface for Telescience (WITS), selected aspects of which have been described in a number of prior NASA Tech Briefs articles. To recapitulate: The WITS is a system of computer software that enables scientists, located at geographically dispersed computer terminals connected to the World Wide Web, to command instrumented robotic vehicles (rovers) during exploration of Mars and perhaps eventually of other planets. The WITS also has potential for adaptation to terrestrial use in telerobotics and other applications that involve computer-based remote monitoring, supervision, control, and planning.

  20. Analytic and simulation studies on the use of torque-wheel actuators for the control of flexible robotic arms

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Ghosh, Dave; Kenny, Sean

    1991-01-01

    This paper presents results of analytic and simulation studies to determine the effectiveness of torque-wheel actuators in suppressing the vibrations of two-link telerobotic arms with attached payloads. The simulations use a planar generic model of a two-link arm with a torque wheel at the free end. Parameters of the arm model are selected to be representative of a large space-based robotic arm of the same class as the Space Shuttle Remote Manipulator, whereas parameters of the torque wheel are selected to be similar to those of the Mini-Mast facility at the Langley Research Center. Results show that this class of torque wheel can suppress an oscillation of 2.5 cm peak-to-peak at the end point of the arm and that the wheel produces significantly less overshoot when the arm is issued an abrupt stop command from the telerobotic input station.

  1. Enhancing the effectiveness of human-robot teaming with a closed-loop system.

    PubMed

    Teo, Grace; Reinerman-Jones, Lauren; Matthews, Gerald; Szalma, James; Jentsch, Florian; Hancock, Peter

    2018-02-01

    With technological developments in robotics and their increasing deployment, human-robot teams are set to be a mainstay in the future. To develop robots that possess teaming capabilities, such as being able to communicate implicitly, the present study implemented a closed-loop system. This system enabled the robot to provide adaptive aid without the need for explicit commands from the human teammate, through the use of multiple physiological workload measures. Such measures of workload vary in sensitivity and there is large inter-individual variability in physiological responses to imposed taskload. Workload models enacted via closed-loop system should accommodate such individual variability. The present research investigated the effects of the adaptive robot aid vs. imposed aid on performance and workload. Results showed that adaptive robot aid driven by an individualized workload model for physiological response resulted in greater improvements in performance compared to aid that was simply imposed by the system.
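
    A minimal sketch of an individualized workload model of the kind described: each physiological measure is z-scored against the operator's own baseline, and aid is triggered when the composite score crosses a threshold. The measures, baseline data, and threshold below are invented placeholders.

        import numpy as np

        class AdaptiveAid:
            """Trigger robot aid from an individualized workload model: each
            physiological measure is z-scored against the operator's own baseline."""

            def __init__(self, baseline):            # baseline: (n_samples, n_measures)
                self.mu = baseline.mean(axis=0)
                self.sd = baseline.std(axis=0) + 1e-9

            def workload(self, sample):
                """Composite workload: mean z-score across measures."""
                return float(np.mean((sample - self.mu) / self.sd))

            def should_aid(self, sample, threshold=1.5):
                return self.workload(sample) > threshold

        rng = np.random.default_rng(2)
        # Hypothetical baseline: heart rate, breathing rate, an EEG workload index.
        baseline = rng.normal([70, 16, 0.5], [5, 2, 0.1], size=(60, 3))
        aid = AdaptiveAid(baseline)
        print(aid.should_aid(np.array([88, 22, 0.8])))   # elevated responses -> aid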

  2. Control algorithm implementation for a redundant degree of freedom manipulator

    NASA Technical Reports Server (NTRS)

    Cohan, Steve

    1991-01-01

    This project's purpose is to develop and implement control algorithms for a kinematically redundant robotic manipulator. The manipulator is being developed concurrently by Odetics Inc., under internal research and development funding. This SBIR contract supports algorithm conception, development, and simulation, as well as software implementation and integration with the manipulator hardware. The Odetics Dexterous Manipulator is a lightweight, high strength, modular manipulator being developed for space and commercial applications. It has seven fully active degrees of freedom, is electrically powered, and is fully operational in 1 G. The manipulator consists of five self-contained modules. These modules join via simple quick-disconnect couplings and self-mating connectors which allow rapid assembly/disassembly for reconfiguration, transport, or servicing. Each joint incorporates a unique drive train design which provides zero backlash operation, is insensitive to wear, and is single fault tolerant to motor or servo amplifier failure. The sensing system is also designed to be single fault tolerant. Although the initial prototype is not space qualified, the design is well-suited to meeting space qualification requirements. The control algorithm design approach is to develop a hierarchical system with well defined access and interfaces at each level. The high level endpoint/configuration control algorithm transforms manipulator endpoint position/orientation commands to joint angle commands, providing task space motion. At the same time, the kinematic redundancy is resolved by controlling the configuration (pose) of the manipulator, using several different optimizing criteria. The center level of the hierarchy servos the joints to their commanded trajectories using both linear feedback and model-based nonlinear control techniques. The lowest control level uses sensed joint torque to close torque servo loops, with the goal of improving the manipulator dynamic behavior. The control algorithms are subjected to a dynamic simulation before implementation.
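    The endpoint/configuration layer described here maps naturally onto resolved-rate control with null-space optimization. The sketch below is a generic textbook formulation, not the Odetics implementation: damped least squares supplies the endpoint motion, and a null-space projector adds posture motion driven by an optimizing criterion (here, a hypothetical stay-near-midrange criterion).

    ```python
    import numpy as np

    def dls_step(J, dx, q, posture_grad, damping=0.05, k_null=0.1):
        """One resolved-rate step for a kinematically redundant arm.

        J: (m, n) task Jacobian with m < n; dx: (m,) endpoint velocity command;
        posture_grad(q): gradient of a configuration-optimization criterion.
        Damped least squares gives the particular solution; the null-space
        projector adds posture motion that does not disturb the endpoint task.
        """
        m = J.shape[0]
        dq_task = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), dx)
        N = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J   # null-space projector
        return dq_task + k_null * (N @ posture_grad(q))

    # Hypothetical posture criterion: stay near the middle of the joint range.
    def midrange_grad(q, q_min=-np.pi, q_max=np.pi):
        return -(q - 0.5*(q_min + q_max)) / (q_max - q_min)**2

    q = np.zeros(7)
    J = np.random.default_rng(1).normal(size=(6, 7))     # stand-in Jacobian
    dq = dls_step(J, dx=np.array([0.01, 0, 0, 0, 0, 0]),
                  q=q, posture_grad=midrange_grad)
    ```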

  3. Searching Dynamic Agents with a Team of Mobile Robots

    PubMed Central

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

    This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments, showing the validity of the approach. PMID:23012519
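    The predict/update cycle of the grid-based filter is easy to illustrate. The following sketch is an assumption-laden simplification of the paper's filter: it spreads probability mass over the cells a target could have reached given its maximum speed, then discounts cells the robots observed to be empty; a robot would then be commanded toward the highest-probability region.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def predict(belief, v_max_cells):
        """Motion update: each cell's mass spreads uniformly over the cells a
        target moving at most v_max_cells per step could reach (box kernel)."""
        spread = uniform_filter(belief, size=2*v_max_cells + 1, mode="constant")
        return spread / spread.sum()

    def update(belief, observed_mask, detected_mask):
        """Sensor update: cells seen empty drop to near-zero probability; a
        detection concentrates mass on the detected cells."""
        post = belief.copy()
        post[observed_mask & ~detected_mask] *= 0.01   # miss probability
        if detected_mask.any():
            post[~detected_mask] *= 0.05
        return post / post.sum()

    belief = np.full((40, 40), 1/1600)                 # uniform prior
    belief = predict(belief, v_max_cells=1)
    seen = np.zeros((40, 40), bool); seen[:10, :10] = True
    belief = update(belief, seen, np.zeros((40, 40), bool))
    goal = np.unravel_index(belief.argmax(), belief.shape)  # send a robot here
    ```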

  4. Searching dynamic agents with a team of mobile robots.

    PubMed

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

    This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments, showing the validity of the approach.

  5. Baseline tests of an autonomous telerobotic system for assembly of space truss structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Coung

    1994-01-01

    Several proposed space missions include precision reflectors that are larger in diameter than any current or proposed launch vehicle. Most of these reflectors will require a truss structure to accurately position the reflector panels and these reflectors will likely require assembly in orbit. A research program has been conducted at the NASA Langley Research Center to develop the technology required for the robotic assembly of truss structures. The focus of this research has been on hardware concepts, computer software control systems, and operator interfaces necessary to perform supervised autonomous assembly. A special facility was developed and four assembly and disassembly tests of a 102-strut tetrahedral truss have been conducted. The test procedures were developed around traditional 'pick-and-place' robotic techniques that rely on positioning repeatability for successful operation. The data from two of the four tests were evaluated and are presented in this report. All operations in the tests were controlled by predefined sequences stored in a command file, and the operator intervened only when the system paused because of the failure of an actuator command. The tests were successful in identifying potential pitfalls in a telerobotic system, many of which would not have been readily anticipated or incurred through simulation studies. Addressing the total integrated task, instead of bench testing the component parts, forced all aspects of the task to be evaluated. Although the test results indicate that additional developments should be pursued, no problems were encountered that would preclude automated assembly in space as a viable construction method.
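    The command-file discipline described here, autonomous sequencing with operator intervention only on failure, can be sketched as a small executor. All names below are hypothetical; the real system's command set and failure handling were of course richer.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Command:
        name: str                       # e.g. "grip_strut", "move_to_node"
        execute: Callable[[], bool]     # returns True on success

    def run_sequence(sequence: List[Command], ask_operator: Callable[[str], str]):
        """Supervised-autonomy executor in the spirit of the assembly tests:
        commands run autonomously from a predefined sequence; the system
        pauses for the operator only when a command reports failure."""
        i = 0
        while i < len(sequence):
            cmd = sequence[i]
            if cmd.execute():
                i += 1
                continue
            choice = ask_operator(f"{cmd.name} failed: [r]etry, [s]kip, [a]bort? ")
            if choice == "r":
                continue                # re-issue the same command
            elif choice == "s":
                i += 1
            else:
                raise RuntimeError(f"sequence aborted at {cmd.name}")
    ```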

  6. STS-109 Crew Training

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Footage shows the crew of STS-109 (Commander Scott Altman, Pilot Duane Carey, Payload Commander John Grunsfeld, and Mission Specialists Nancy Currie, James Newman, Richard Linnehan, and Michael Massimino) during various parts of their training. Scenes show the crew's photo session, Post Landing Egress practice, training in Dome Simulator, Extravehicular Activity Training in the Neutral Buoyancy Laboratory (NBL), and using the Virtual Reality Laboratory Robotic Arm. The crew is also seen tasting food as they choose their menus for on-orbit meals.

  7. Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles

    2016-01-01

    The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths, autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human robotic interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons, namely the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.
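    As a point of reference for the planning problem described, the sketch below shows a generic joint-space RRT, the standard sampling-based approach for high-DOF, constrained systems. It is emphatically not the Robonaut planner; the collision_free predicate and all parameters are placeholders.

    ```python
    import numpy as np

    def rrt(start, goal, collision_free, n_dof=7, iters=5000, step=0.1, seed=0):
        """Minimal RRT in joint space: random joint samples pull the tree toward
        unexplored regions; collision_free(q) encapsulates all environment and
        self-collision constraints. Returns a joint-space path or None."""
        rng = np.random.default_rng(seed)
        start, goal = np.asarray(start, float), np.asarray(goal, float)
        nodes, parent = [start], {0: None}
        for _ in range(iters):
            target = goal if rng.random() < 0.1 else rng.uniform(-np.pi, np.pi, n_dof)
            near = min(range(len(nodes)),
                       key=lambda i: np.linalg.norm(nodes[i] - target))
            d = target - nodes[near]
            q_new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-12)
            if not collision_free(q_new):
                continue
            nodes.append(q_new)
            parent[len(nodes) - 1] = near
            if np.linalg.norm(q_new - goal) < step:    # goal reached
                path, i = [goal], len(nodes) - 1
                while i is not None:
                    path.append(nodes[i])
                    i = parent[i]
                return path[::-1]
        return None
    ```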

  8. 1200737

    NASA Image and Video Library

    2012-08-21

    Final demonstration of a wireless data task supported by SLS Advanced Development, used to demonstrate real-time video over wireless connections, along with data and commands, via the robotic arms. The arms and video cameras were mounted on free-floating air-bearing vehicles to simulate conditions in space. They were used to show how a chase vehicle could move up to and capture a satellite, such as the FASTSAT mockup, demonstrating how robotic technology and small spacecraft could assist with orbital debris mitigation.

  9. 1200739

    NASA Image and Video Library

    2012-08-21

    Final demonstration of a wireless data task supported by SLS Advanced Development, used to demonstrate real-time video over wireless connections, along with data and commands, via the robotic arms. The arms and video cameras were mounted on free-floating air-bearing vehicles to simulate conditions in space. They were used to show how a chase vehicle could move up to and capture a satellite, such as the FASTSAT mockup, demonstrating how robotic technology and small spacecraft could assist with orbital debris mitigation.

  10. 1200738

    NASA Image and Video Library

    2012-08-21

    Final demonstration of a wireless data task supported by SLS Advanced Development, used to demonstrate real-time video over wireless connections, along with data and commands, via the robotic arms. The arms and video cameras were mounted on free-floating air-bearing vehicles to simulate conditions in space. They were used to show how a chase vehicle could move up to and capture a satellite, such as the FASTSAT mockup, demonstrating how robotic technology and small spacecraft could assist with orbital debris mitigation.

  11. Design, Kinematic Optimization, and Evaluation of a Teleoperated System for Middle Ear Microsurgery

    PubMed Central

    Miroir, Mathieu; Nguyen, Yann; Szewczyk, Jérôme; Sterkers, Olivier; Bozorg Grayeli, Alexis

    2012-01-01

    Middle ear surgery involves the smallest and most fragile bones of the human body. Since microsurgical gestures and submillimetric precision are required in these procedures, the outcome can potentially be improved by robotic assistance. Today, there is no commercially available device in this field. Here, we describe a method to design a teleoperated robotic assistance system dedicated to middle ear surgery. Determination of the design specifications, the kinematic structure, and its optimization are detailed. The robot-surgeon interface and the command modes are provided. Finally, the system is evaluated through realistic tasks in dedicated experimental settings and in human temporal bone specimens. PMID:22927789

  12. Adjustable impedance, force feedback and command language aids for telerobotics (parts 1-4 of an 8-part MIT progress report)

    NASA Technical Reports Server (NTRS)

    Sheridan, Thomas B.; Raju, G. Jagganath; Buzan, Forrest T.; Yared, Wael; Park, Jong

    1989-01-01

    Projects recently completed or in progress at the MIT Man-Machine Systems Laboratory are summarized. (1) A 2-part impedance network model of a single-degree-of-freedom remote manipulation system is presented, in which a human operator at the master port interacts with a task object at the slave port in a remote location. (2) The extension of the predictor concept to include force feedback and dynamic modeling of the manipulator and the environment is addressed. (3) A system was constructed to infer intent from the operator's commands and the teleoperation context, and to generalize this information to interpret future commands. (4) A command language system is being designed that is robust, easy to learn, and supports more natural man-machine communication. A general telerobot problem selected as an important command language context is finding a collision-free path for a robot.

  13. Improved Collision-Detection Method for Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Leger, Chris

    2003-01-01

    An improved method has been devised for the computational prediction of a collision between (1) a robotic manipulator and (2) another part of the robot or an external object in the vicinity of the robot. The method is intended to be used to test commanded manipulator trajectories in advance so that execution of the commands can be stopped before damage is done. The method involves utilization of both (1) mathematical models of the robot and its environment constructed manually prior to operation and (2) similar models constructed automatically from sensory data acquired during operation. The representation of objects in this method is simpler and more efficient (with respect to both computation time and computer memory) than the representations used in most prior methods. The present method was developed especially for use on a robotic land vehicle (rover) equipped with a manipulator arm and a vision system that includes stereoscopic electronic cameras. In this method, objects are represented and collisions detected by use of a previously developed technique known in the art as the method of oriented bounding boxes (OBBs). As the name of this technique indicates, an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB method has been extended here to enable the approximate representation of cylindrical parts by use of octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs), as in the example of Figure 1. Unlike prior methods, the OBB/OBP method does not require any divisions or transcendental functions; this feature leads to greater robustness and numerical accuracy. The OBB/OBP method was selected for incorporation into the present method because it offers the best compromise between accuracy on the one hand and computational efficiency (and thus computational speed) on the other hand.
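    The core OBB overlap query can be written down directly. The sketch below implements the standard separating-axis test for two OBBs, which is consistent with the abstract's observation that no divisions or transcendental functions are needed (the candidate axes are left unnormalized, since both projected radii scale equally). The OBP extension would simply run this test against each member box of the prism.

    ```python
    import numpy as np

    def obb_overlap(c1, R1, e1, c2, R2, e2, eps=1e-9):
        """Separating-axis test for two oriented bounding boxes.

        c: center (3,); R: columns are the box's local axes (3, 3);
        e: half-extents (3,). The 15 candidate axes are the 3+3 face normals
        and the 9 pairwise edge cross products; the boxes are disjoint iff
        some axis separates them. Only additions, multiplications, and
        comparisons are required.
        """
        t = c2 - c1
        axes = [R1[:, i] for i in range(3)] + [R2[:, j] for j in range(3)]
        axes += [np.cross(R1[:, i], R2[:, j]) for i in range(3) for j in range(3)]
        for a in axes:
            if a @ a < eps:                  # parallel edges: degenerate axis
                continue
            r1 = sum(e1[i] * abs(a @ R1[:, i]) for i in range(3))
            r2 = sum(e2[j] * abs(a @ R2[:, j]) for j in range(3))
            if abs(a @ t) > r1 + r2:         # separating axis found
                return False
        return True

    # An oriented bounding prism (OBP) can be treated as a small set of OBBs
    # sharing an axis, with overlap checked against each member box in turn.
    ```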

  14. Flight telerobotic servicer legacy

    NASA Astrophysics Data System (ADS)

    Shattuck, Paul L.; Lowrie, James W.

    1992-11-01

    The Flight Telerobotic Servicer (FTS) was developed to enhance and provide a safe alternative to human presence in space. The first step for this system was a precursor development test flight (DTF-1) on the Space Shuttle. DTF-1 was to be a pathfinder for manned flight safety of robotic systems. The broad objectives of this mission were three-fold: flight validation of the telerobotic manipulator (design, control algorithms, man/machine interfaces, safety); demonstration of dexterous manipulator capabilities on specific building-block tasks; and correlation of manipulator performance in space with ground predictions. The DTF-1 system is comprised of a payload bay element (7-DOF manipulator with controllers, end-of-arm gripper and camera, telerobot body with head cameras and electronics module, task panel, and MPESS truss) and an aft flight deck element (force-reflecting hand controller, crew restraint, command and display panel and monitors). The approach used to develop the DTF-1 hardware, software, and operations involved flight qualification of components from commercial, military, space, and R&D sources (e.g., the hand controller, end-of-arm tooling, and force/torque transducer) and the development of the telerobotic system for space applications. The system is capable of teleoperation and autonomous control (advancing the state of the art); reliable (two-fault tolerant); and safe (man-rated). Benefits from the development flight included space validation of critical telerobotic technologies and resolution of significant safety issues relating to telerobotic operations in the Shuttle bay or in the vicinity of other space assets. This paper discusses the lessons learned and technology evolution that stemmed from developing and integrating a dexterous robot into a manned system, the Space Shuttle. Particular emphasis is placed on the safety and reliability requirements for a man-rated system, as these are the critical factors which drive the overall system architecture. Other topics include: task requirements and operational concepts for servicing and maintenance of space platforms; origins of technology for dexterous robotic systems; issues associated with space qualification of components; and development of the industrial base to support space robotics.

  15. Virtual reality for intelligent and interactive operating, training, and visualization systems

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Schluse, Michael

    2000-10-01

    Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds which the user can not only look at but also actively interact with, using a data glove and a data helmet. The main emphasis in the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the Virtual Reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which then perform the actions necessary to execute the user's task in reality. In this operation mode, the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited as universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The Virtual Reality system of the IRF, COSIMIR/VR, forms the basis for several projects: the control of space automation systems in the projects CIROS, VITAL and GETEX; the realization of a comprehensive development tool for the International Space Station; and, last but not least, the realistic simulation of fire extinguishing, forest machines, and excavators, which will be presented in the final paper in addition to the key ideas of this Virtual Reality system.

  16. KSC-2011-5306

    NASA Image and Video Library

    2011-07-08

    CAPE CANAVERAL, Fla. -- In Firing Room 4 of the Launch Control Center at NASA's Kennedy Space Center in Florida, NASA Administrator Charles Bolden congratulates the launch control team members following the successful launch of space shuttle Atlantis on its STS-135 mission to the International Space Station. Atlantis, with its crew of four (Commander Chris Ferguson, Pilot Doug Hurley, and Mission Specialists Sandy Magnus and Rex Walheim), lifted off at 11:29 a.m. EDT on July 8, 2011 to deliver the Raffaello multi-purpose logistics module packed with supplies and spare parts for the station. Atlantis also will fly the Robotic Refueling Mission experiment that will investigate the potential for robotically refueling existing satellites in orbit. In addition, Atlantis will return with a failed ammonia pump module to help NASA better understand the failure mechanism and improve pump designs for future systems. STS-135 will be the 33rd flight of Atlantis, the 37th shuttle mission to the space station, and the 135th and final mission of NASA's Space Shuttle Program. For more information, visit www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts135/index.html. Photo credit: NASA/Kim Shiflett

  17. KSC-2011-5305

    NASA Image and Video Library

    2011-07-08

    CAPE CANAVERAL, Fla. -- In Firing Room 4 of the Launch Control Center at NASA's Kennedy Space Center in Florida, Kennedy Center Director Bob Cabana congratulates the launch control team members following the successful launch of space shuttle Atlantis on its STS-135 mission to the International Space Station. Atlantis, with its crew of four (Commander Chris Ferguson, Pilot Doug Hurley, and Mission Specialists Sandy Magnus and Rex Walheim), lifted off at 11:29 a.m. EDT on July 8, 2011 to deliver the Raffaello multi-purpose logistics module packed with supplies and spare parts for the station. Atlantis also will fly the Robotic Refueling Mission experiment that will investigate the potential for robotically refueling existing satellites in orbit. In addition, Atlantis will return with a failed ammonia pump module to help NASA better understand the failure mechanism and improve pump designs for future systems. STS-135 will be the 33rd flight of Atlantis, the 37th shuttle mission to the space station, and the 135th and final mission of NASA's Space Shuttle Program. For more information, visit www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts135/index.html. Photo credit: NASA/Kim Shiflett

  18. KSC-2011-5296

    NASA Image and Video Library

    2011-07-08

    CAPE CANAVERAL, Fla. -- In Firing Room 4 of the Launch Control Center at NASA's Kennedy Space Center in Florida, Shuttle Launch Director Mike Leinbach adjusts controls at his console during the countdown to the launch of space shuttle Atlantis on its STS-135 mission to the International Space Station. Atlantis, with its crew of four (Commander Chris Ferguson, Pilot Doug Hurley, and Mission Specialists Sandy Magnus and Rex Walheim), lifted off at 11:29 a.m. EDT on July 8, 2011 to deliver the Raffaello multi-purpose logistics module packed with supplies and spare parts for the station. Atlantis also will fly the Robotic Refueling Mission experiment that will investigate the potential for robotically refueling existing satellites in orbit. In addition, Atlantis will return with a failed ammonia pump module to help NASA better understand the failure mechanism and improve pump designs for future systems. STS-135 will be the 33rd flight of Atlantis, the 37th shuttle mission to the space station, and the 135th and final mission of NASA's Space Shuttle Program. For more information, visit www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts135/index.html. Photo credit: NASA/Kim Shiflett

  19. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Richard L.; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail; Shum, Simon J. Buckingham; Shadbolt, Nigel

    2007-01-01

    Mobile Agents (MA) is an advanced Extra-Vehicular Activity (EVA) communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. MA is voice controlled and provides information verbally to the astronauts through programs called "personal agents." The system partly automates the role of CapCom in Apollo, including monitoring and managing navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. Data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in authentic work contexts, including six years of ethnographic observation of field geology. Analog field experiments in Utah enabled the empirical discovery of requirements and the testing of alternative technologies and protocols. We report on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to serve as a relay on the wireless network.

  20. Target Trailing With Safe Navigation for Maritime Autonomous Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Kuwata, Yoshiaki; Zarzhitsky, Dimitri V.

    2013-01-01

    This software implements a motion-planning module for a maritime autonomous surface vehicle (ASV). The module trails a given target while also avoiding static and dynamic surface hazards. When surface hazards are other moving boats, the motion planner must apply International Regulations for Avoiding Collisions at Sea (COLREGS). A key subset of these rules has been implemented in the software. In case contact with the target is lost, the software can receive and follow a "reacquisition route," provided by a complementary system, until the target is reacquired. The programmatic intention is that the trailed target is a submarine, although any mobile naval platform could serve as the target. The algorithmic approach to combining motion with a (possibly moving) goal location, while avoiding local hazards, may be applicable to robotic rovers, automated landing systems, and autonomous airships. The software operates in JPL's CARACaS (Control Architecture for Robotic Agent Command and Sensing) software architecture and relies on other modules for environmental perception data and information on the predicted detectability of the target, as well as the low-level interface to the boat controls.
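    To make the trail-plus-avoidance idea concrete, here is a minimal heading-selection sketch. It is not the CARACaS planner; the COLREGS term in particular is a toy stand-in that merely biases avoidance to one side, and every weight and name is invented.

    ```python
    import numpy as np

    def choose_heading(boat_xy, target_xy, hazards,
                       w_trail=1.0, w_hazard=4.0, w_colregs=2.0):
        """Score a fan of candidate headings and return the best one.

        hazards: list of (xy, is_vessel) tuples. The COLREGS-like term is a
        toy stand-in that penalizes passing a moving vessel on one side,
        nudging the planner toward starboard-style avoidance.
        """
        to_target = np.arctan2(*(target_xy - boat_xy)[::-1])
        best, best_cost = None, np.inf
        for h in np.linspace(-np.pi, np.pi, 72, endpoint=False):
            # trail term: stay pointed toward the target's position
            cost = w_trail * abs(np.arctan2(np.sin(h - to_target),
                                            np.cos(h - to_target)))
            for xy, is_vessel in hazards:
                bearing = np.arctan2(*(xy - boat_xy)[::-1]) - h
                bearing = np.arctan2(np.sin(bearing), np.cos(bearing))
                dist = np.linalg.norm(xy - boat_xy)
                if abs(bearing) < np.pi/4:             # hazard near the bow
                    cost += w_hazard / max(dist, 1.0)
                    if is_vessel and bearing > 0:      # one-sided vessel penalty
                        cost += w_colregs / max(dist, 1.0)
                # a real planner would integrate along predicted trajectories
            if cost < best_cost:
                best, best_cost = h, cost
        return best
    ```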

  1. Autonomous mobile platform for enhanced situational awareness in Mass Casualty Incidents.

    PubMed

    Yang, Dongyi; Schafer, James; Wang, Sili; Ganz, Aura

    2014-01-01

    To enhance the efficiency of the search and rescue process of a Mass Casualty Incident, we introduce a low cost autonomous mobile platform. The mobile platform motion is controlled by an Android Smartphone mounted on a robot. The pictures and video captured by the Smartphone camera can significantly enhance the situational awareness of the incident commander leading to a more efficient search and rescue process. Moreover, the active RFID readers mounted on the mobile platform can improve the localization accuracy of victims in the disaster site in areas where the paramedics are not present, reducing the triage and evacuation time.

  2. Testing command and control of the satellites in formation flight

    NASA Astrophysics Data System (ADS)

    Gheorghe, Popan; Gheorghe, Gh. Ion; Gabriel, Todoran

    2013-10-01

    The topics covered in the paper are mechatronic systems for determining the distance between satellites and the design of an air-cushion table displacement system for satellite testing. INCDMTM has the capability to approach collaboration within European programmes (ESA) on the human exploration of outer space through mechatronic systems and accessories for telescopes, mechatronic systems used by launchers, and sensors and mechatronic systems for robotic exploration programs of the atmosphere and Mars. This research has a strong industrial-competitiveness development component: many of the results of space research have direct applicability in industrial fabrication.

  3. Supervisory Control of Multiple Uninhabited Systems - Methodologies and Enabling Human-Robot Interface Technologies (Commande et surveillance de multiples systemes sans pilote - Methodologies et technologies habilitantes d’interfaces homme-machine)

    DTIC Science & Technology

    2012-12-01

    [Indexed abstract unavailable: the extracted text consists of front-matter fragments. Recoverable details: study dates SMAART (2006-2008) and SUSIE (2009-2011); locations Brest, Nancy, and Paris (France); published through the NATO Research and Technology Agency (RTA), headquartered in Neuilly, near Paris, France; listed tables include "Mission Delay for the Helicopter," "Assistant Interventions and Commander's Reactions," and a partial LOA matrix.]

  4. Cosine Kuramoto Based Distribution of a Convoy with Limit-Cycle Obstacle Avoidance Through the Use of Simulated Agents

    NASA Astrophysics Data System (ADS)

    Howerton, William

    This thesis presents a method for integrating complex network control algorithms with localized, agent-specific algorithms for maneuvering and obstacle avoidance. The method allows for the successful implementation of group and agent-specific behaviors, has proven to be robust, and will work for a variety of vehicle platforms. Initially, a review and implementation of two specific algorithms is detailed. The first, a modified Kuramoto model developed by Xu [1], utilizes tools from graph theory to efficiently distribute agents. The second, developed by Kim [2], is an effective method for wheeled robots to avoid local obstacles using limit-cycle navigation. The results of implementing these methods on a test-bed of wheeled robots are presented. Control issues related to outside disturbances not anticipated in the original theory are then discussed. A novel method of using simulated agents to separate the task of distributing agents from agent-specific velocity and heading commands has been developed and implemented to address these issues; a minimal sketch of the idea follows. This new method can be used to combine various behaviors and is not limited to a specific control algorithm.
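    A minimal version of that separation, with the classical Kuramoto model standing in for Xu's modified model, is sketched below. Simulated agents live on a phase circle updated at the network level; each physical robot independently chases its simulated agent with its own local controller (where Kim's limit-cycle avoidance would plug in).

    ```python
    import numpy as np

    def kuramoto_step(phases, A, K=1.0, omega=0.2, dt=0.05):
        """One integration step of the classical Kuramoto model on a graph.

        A: adjacency matrix of the communication graph. With sign changes or
        phase offsets the same machinery can spread agents apart; here we
        show the standard synchronizing form.
        """
        diff = phases[None, :] - phases[:, None]     # pairwise phase differences
        dphi = omega + K * (A * np.sin(diff)).sum(axis=1) / np.maximum(A.sum(1), 1)
        return phases + dt * dphi

    def simulated_agent_positions(phases, center, radius):
        """Simulated agents ride the phase circle; each physical robot runs
        its own local controller (e.g., limit-cycle obstacle avoidance) to
        chase its simulated agent, decoupling network and local behaviors."""
        return center + radius * np.column_stack([np.cos(phases), np.sin(phases)])

    phases = np.random.default_rng(2).uniform(0, 2*np.pi, 4)
    A = np.ones((4, 4)) - np.eye(4)                  # fully connected team of 4
    for _ in range(200):
        phases = kuramoto_step(phases, A)
    targets = simulated_agent_positions(phases, center=np.zeros(2), radius=3.0)
    ```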

  5. Human-Centered Design and Evaluation of Haptic Cueing for Teleoperation of Multiple Mobile Robots.

    PubMed

    Son, Hyoung Il; Franchi, Antonio; Chuang, Lewis L; Kim, Junsuk; Bulthoff, Heinrich H; Giordano, Paolo Robuffo

    2013-04-01

    In this paper, we investigate the effect of haptic cueing on a human operator's performance in the bilateral teleoperation of multiple mobile robots, particularly multiple unmanned aerial vehicles (UAVs). Two aspects of human performance are deemed important in this area, namely, the maneuverability of the mobile robots and perceptual sensitivity to the remote environment. We introduce metrics that allow us to address these aspects in two psychophysical studies, which are reported here. Three fundamental haptic cue types were evaluated. The Force cue conveys information on the proximity of the commanded trajectory to obstacles in the remote environment. The Velocity cue represents the mismatch between the commanded and actual velocities of the UAVs and can implicitly provide a rich amount of information regarding the actual behavior of the UAVs. Finally, the Velocity+Force cue is a linear combination of the two. Our experimental results show that, while maneuverability is best supported by the Force cue, perceptual sensitivity is best served by the Velocity cue. In addition, we show that large haptic feedback gains do not always enhance the teleoperator's performance.
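    The three cue types can be stated compactly. The sketch below is an interpretation of the cue definitions, not the authors' code: the Force cue uses a Khatib-style repulsive potential around obstacles, the Velocity cue is proportional to the commanded-versus-actual mismatch, and the combined cue is their linear blend.

    ```python
    import numpy as np

    def force_cue(cmd_pos, obstacles, d0=5.0, k=1.0):
        """Repulsive cue growing as the commanded trajectory nears obstacles."""
        f = np.zeros(3)
        for o in obstacles:
            d = cmd_pos - o
            dist = np.linalg.norm(d)
            if dist < d0:
                # Khatib-style repulsive potential gradient
                f += k * (1.0/dist - 1.0/d0) * d / dist**3
        return f

    def velocity_cue(v_cmd, v_actual, k=1.0):
        """Cue proportional to the mismatch between commanded and actual UAV
        velocity, implicitly conveying the vehicles' true behavior."""
        return k * (v_cmd - v_actual)

    def combined_cue(cmd_pos, obstacles, v_cmd, v_actual, alpha=0.5):
        """Velocity+Force cue: a linear combination of the two."""
        return (alpha * force_cue(cmd_pos, obstacles)
                + (1 - alpha) * velocity_cue(v_cmd, v_actual))
    ```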

  6. FE Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013708 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  7. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013710 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  8. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013714 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  9. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013712 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  10. The Rise of Robots: The Military’s Use of Autonomous Lethal Force

    DTIC Science & Technology

    2015-02-17

    [Indexed abstract unavailable: the extracted text consists of report-form and title-page fragments. Recoverable details: an Air War College, Air University paper, "The Rise of Robots: The Military's Use of Autonomous Lethal Force," by Lt Col Christopher J. Spinelli, an Air War College student and former Commander of the 445th Flight Test Squadron at Edwards Air Force Base.]

  11. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

    ISS040-E-079083 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson, Expedition 40 commander, enters data in a computer in preparation for a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  12. Robotics

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.

    2007-01-01

    Lunar robotic functions include: (1) transport of crew and payloads on the surface of the moon; (2) offloading payloads from a lunar lander; (3) handling the deployment of surface systems; and (4) human commanding of these functions from inside a lunar vehicle or habitat, or during extravehicular activity (space walk), with Earth-based supervision. The systems that will perform these functions may not look like robots from science fiction. In fact, robotic functions may be performed by automated trucks, cranes and winches. Use of this equipment prior to the crew's arrival, or in the potentially long periods without crews on the surface, will require that these systems be computer-controlled machines. The public release of NASA's Exploration plans at the 2nd Space Exploration Conference (Houston, December 2006) included a lunar outpost with as many as four unique mobility chassis designs. The sequence of lander offloading tasks involved as many as ten payloads, each with a unique set of geometry, mass and interface requirements. This plan was refined during a second phase study concluded in August 2007. Among the many improvements to the exploration plan were a reduction in the number of unique mobility chassis designs and a reduction in unique payload specifications. As the lunar surface system payloads have matured, so have the mobility and offloading functional requirements. While the architecture work continues, the community can expect to see functional requirements in the following areas:
    Surface Mobility: 1. Transport crew on the lunar surface, accelerating construction tasks, expanding the crew's sphere of influence for scientific exploration, and providing a rapid return to an ascent module in an emergency. The crew transport can be with an un-pressurized rover, a small pressurized rover, or a larger mobile habitat. 2. Transport Extra-Vehicular Activity (EVA) equipment and construction payloads. 3. Transport habitats and power modules over long distances, pre-positioning them for the arrival of crew on a subsequent lander.
    Surface Handling: 1. Offload surface system payloads from the lander, breaking launch restraints and power/data connections. Payloads may be offloaded to a wheeled vehicle for transport. 2. Deploy payloads from a wheeled vehicle at a field site, placing the payloads in their final use site on the ground or mating them with existing surface systems. 3. Support regolith collection, site preparation, berm construction, or other civil engineering tasks using tools and implements attached to rovers.
    Human-Systems Interaction: 1. Provide a safe command and control interface for suited EVA crew to ride on and drive the vehicles, making sure that the systems are also safe for working near dismounted crew. 2. Provide an effective control system for IV crew to tele-operate vehicles, cranes and other equipment from inside the surface habitats with evolving independence from Earth. 3. Provide a supervisory system that allows machines to be commanded from the ground, working across the Earth-Lunar time delays on the order of 5-10 seconds (round trip) to support operations when crew are not resident on the surface.
    Technology Development Needs: 1. Surface vehicles that can dock, align and mate with outpost equipment such as landers, habitats and fluid/power interfaces. 2. Long life motors, drive trains, seals, motor electronics, sensors, processors, cable harnesses, and dash board displays. 3. Active suspension control, localization, high speed obstacle avoidance, and safety systems for operating near dismounted crew. 4. High specific energy and specific power batteries that are safe, rechargeable, and long lived.

  13. Executive system software design and expert system implementation

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1992-01-01

    The topics are presented in viewgraph form and include: software requirements; design layout of the automated assembly system; menu display for automated composite command; expert system features; complete robot arm state diagram and logic; and expert system benefits.

  14. TARDEC's Intelligent Ground Systems overview

    NASA Astrophysics Data System (ADS)

    Jaster, Jeffrey F.

    2009-05-01

    The mission of the Intelligent Ground Systems (IGS) Area at the Tank Automotive Research, Development and Engineering Center (TARDEC) is to conduct technology maturation and integration to increase Soldier-robot control/interface intuitiveness and robotic ground system robustness, functionality, and overall system effectiveness for the Future Combat System Brigade Combat Team and the Robotics Systems Joint Project Office, and to deliver game-changing capabilities to be fielded beyond the current force. This is accomplished through technology component development focused on increasing unmanned ground vehicle autonomy, optimizing crew interfaces and mission planners that capture commanders' intent, integrating payloads that provide 360-degree local situational awareness, and expanding current UGV tactical behavior, learning, and adaptation capabilities. The integration of these technology components into ground vehicle demonstrators permits engineering evaluation, user assessment, and performance characterization in increasingly complex, dynamic, and relevant environments, including high-speed on-road or cross-country operations, all weather/visibility conditions, and military operations in urban terrain (MOUT). Focused testing and experimentation is directed at reducing PM risk areas (safe operations, autonomous maneuver, manned-unmanned collaboration) and transitioning technology in the form of hardware, software algorithms, test and performance data, as well as user feedback and lessons learned.

  15. Learning visuomotor transformations for gaze-control and grasping.

    PubMed

    Hoffmann, Heiko; Schenck, Wolfram; Möller, Ralf

    2005-08-01

    For reaching to and grasping an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
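    The completion of a partially given sensorimotor pattern has a simple instance worth sketching: condition a joint Gaussian, fitted over concatenated (sensor, motor) vectors, on the sensor part. A single Gaussian is a deliberate simplification of the paper's density model; a mixture would be needed to represent the redundant arm postures the controller copes with. All data below are synthetic.

    ```python
    import numpy as np

    def fit_joint_density(X):
        """Fit a joint Gaussian over concatenated (sensor, motor) vectors.
        A single Gaussian stands in for the richer density model in the paper."""
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        return mu, cov

    def complete_pattern(mu, cov, sensor, n_sensor):
        """Complete a partial pattern: given the sensor part, return the most
        likely motor part via the Gaussian conditional mean."""
        mu_s, mu_m = mu[:n_sensor], mu[n_sensor:]
        S_ss = cov[:n_sensor, :n_sensor]
        S_ms = cov[n_sensor:, :n_sensor]
        return mu_m + S_ms @ np.linalg.solve(S_ss, sensor - mu_s)

    # Hypothetical data: 2-D gaze direction -> 3 joint angles of an arm.
    rng = np.random.default_rng(3)
    gaze = rng.uniform(-1, 1, (500, 2))
    joints = np.column_stack([gaze @ [0.5, 0.2], gaze @ [-0.3, 0.7], gaze.sum(1)])
    data = np.hstack([gaze, joints + 0.01 * rng.normal(size=joints.shape)])
    mu, cov = fit_joint_density(data)
    print(complete_pattern(mu, cov, sensor=np.array([0.2, -0.4]), n_sensor=2))
    ```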

  16. Neural architectures for robot intelligence.

    PubMed

    Ritter, H; Steil, J J; Nölker, C; Röthling, F; McGuire, P

    2003-01-01

    We argue that direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data-mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our laboratory in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems.

  17. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demands and has the potential to address complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom, made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.

  18. Interactive multi-objective path planning through a palette-based user interface

    NASA Astrophysics Data System (ADS)

    Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph

    2016-05-01

    In a problem where a human uses supervisory control to manage robot path-planning, there are times when the human does the path planning and, if satisfied, commits those paths to be executed by the robot, which then executes the plan. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes an objective. When a human is assigned the task of path planning for a robot, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path-planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette. A distinct color is used to represent each objective, and tradeoffs among objectives are balanced in the manner that an artist mixes colors to get a desired shade; human intent is thus analogous to the artist's shade of color. We call the GUI an "Adverb Palette," where the word "Adverb" represents a specific type of objective for the path, such as the adverbs "quickly" and "safely" in the commands "travel the path quickly" and "make the journey safely." The novel interactive interface provides the user an opportunity to evaluate alternatives that trade off between different objectives by visualizing the instantaneous outcomes of her actions on the interface. In addition to assisting analysis of the solutions given by an optimization algorithm, the palette has the additional feature of allowing the user to define and visualize her own paths by means of waypoints (guiding locations), thereby broadening the variety of plans considered. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem. Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a great deal of flexibility using the Adverb Palette.
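    One plausible reading of the palette metaphor is a weighted scalarization of path objectives, with the operator's color mix supplying the weights. The sketch below assumes that reading; the adverb names, objective forms, and candidate paths are all illustrative.

    ```python
    import numpy as np

    # Each "adverb" is an objective evaluated along a candidate path; the
    # palette weights (the operator's color mix) scalarize them into one cost.
    def path_cost(path, weights, objectives):
        """path: (N, 2) waypoints; objectives: {adverb: fn(path) -> cost}."""
        return sum(weights[name] * fn(path) for name, fn in objectives.items())

    def quickly(path):                        # shorter paths score better
        return np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

    def safely(path, hazard=np.array([5.0, 5.0])):  # stay away from a hazard
        return float((1.0 / (np.linalg.norm(path - hazard, axis=1) + 0.1)).sum())

    objectives = {"quickly": quickly, "safely": safely}
    candidates = [np.linspace([0, 0], [10, 10], 20),
                  np.array([[0, 0], [0, 10], [10, 10]], float)]
    weights = {"quickly": 0.3, "safely": 0.7}       # the operator's current mix
    best = min(candidates, key=lambda p: path_cost(p, weights, objectives))
    ```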

  19. iss055e010992

    NASA Image and Video Library

    2018-04-04

    iss055e010992 (April 5, 2018) --- The SpaceX Dragon resupply ship is pictured just moments after Japan Aerospace Exploration Agency astronaut Norishige Kanai commanded the 57.7-foot-long Canadarm2 robotic arm to reach out and capture the commercial space freighter.

  20. A bi-hemispheric neuronal network model of the cerebellum with spontaneous climbing fiber firing produces asymmetrical motor learning during robot control.

    PubMed

    Pinzon-Morales, Ruben-Dario; Hirata, Yutaka

    2014-01-01

    To acquire and maintain precise movement controls over a lifespan, changes in the physical and physiological characteristics of muscles must be compensated for adaptively. The cerebellum plays a crucial role in such adaptation. Changes in muscle characteristics are not always symmetrical. For example, it is unlikely that muscles that bend and straighten a joint will change to the same degree. Thus, different (i.e., asymmetrical) adaptation is required for bending and straightening motions. To date, little is known about the role of the cerebellum in asymmetrical adaptation. Here, we investigate the cerebellar mechanisms required for asymmetrical adaptation using a bi-hemispheric cerebellar neuronal network model (biCNN). The bi-hemispheric structure is inspired by the observation that lesioning one hemisphere reduces motor performance asymmetrically. The biCNN model was constructed to run in real-time and used to control an unstable two-wheeled balancing robot. The load of the robot and its environment were modified to create asymmetrical perturbations. Plasticity at parallel fiber-Purkinje cell synapses in the biCNN model was driven by error signal in the climbing fiber (cf) input. This cf input was configured to increase and decrease its firing rate from its spontaneous firing rate (approximately 1 Hz) with sensory errors in the preferred and non-preferred direction of each hemisphere, as demonstrated in the monkey cerebellum. Our results showed that asymmetrical conditions were successfully handled by the biCNN model, in contrast to a single hemisphere model or a classical non-adaptive proportional and derivative controller. Further, the spontaneous activity of the cf, while relatively small, was critical for balancing the contribution of each cerebellar hemisphere to the overall motor command sent to the robot. Eliminating the spontaneous activity compromised the asymmetrical learning capabilities of the biCNN model. Thus, we conclude that a bi-hemispheric structure and adequate spontaneous activity of cf inputs are critical for cerebellar asymmetrical motor learning.
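    The role of the spontaneous rate can be seen in a toy version of the model. In the sketch below (a caricature, not the biCNN), each hemisphere's parallel fiber-Purkinje cell weights are updated by the climbing-fiber deviation from its spontaneous rate, and the two hemispheres prefer opposite error directions; because a firing rate cannot go below zero, a nonzero spontaneous rate is what leaves room to signal errors in the non-preferred direction.

    ```python
    import numpy as np

    CF_SPONT = 1.0   # spontaneous climbing-fiber rate [Hz], as in the model

    def cf_rate(error, preferred_sign, gain=2.0):
        """Climbing-fiber rate rises above the spontaneous rate for errors in
        the hemisphere's preferred direction and falls below it otherwise;
        rates are clipped at zero, so CF_SPONT bounds the downward signal."""
        return max(CF_SPONT + gain * preferred_sign * error, 0.0)

    def update_hemisphere(w, pf, error, preferred_sign, lr=1e-3):
        """PF-PC plasticity: depression when cf is above baseline,
        potentiation when below; the deviation from CF_SPONT carries
        the signed error."""
        return w - lr * (cf_rate(error, preferred_sign) - CF_SPONT) * pf

    rng = np.random.default_rng(4)
    w_left, w_right = rng.normal(0, .1, 50), rng.normal(0, .1, 50)
    for _ in range(1000):
        pf = rng.random(50)                    # parallel-fiber activity
        motor = pf @ w_left - pf @ w_right     # hemispheres act antagonistically
        error = motor - 0.3                    # asymmetric task: desired output 0.3
        w_left = update_hemisphere(w_left, pf, error, preferred_sign=+1)
        w_right = update_hemisphere(w_right, pf, error, preferred_sign=-1)
    ```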

  1. A bi-hemispheric neuronal network model of the cerebellum with spontaneous climbing fiber firing produces asymmetrical motor learning during robot control

    PubMed Central

    Pinzon-Morales, Ruben-Dario; Hirata, Yutaka

    2014-01-01

    To acquire and maintain precise movement controls over a lifespan, changes in the physical and physiological characteristics of muscles must be compensated for adaptively. The cerebellum plays a crucial role in such adaptation. Changes in muscle characteristics are not always symmetrical. For example, it is unlikely that muscles that bend and straighten a joint will change to the same degree. Thus, different (i.e., asymmetrical) adaptation is required for bending and straightening motions. To date, little is known about the role of the cerebellum in asymmetrical adaptation. Here, we investigate the cerebellar mechanisms required for asymmetrical adaptation using a bi-hemispheric cerebellar neuronal network model (biCNN). The bi-hemispheric structure is inspired by the observation that lesioning one hemisphere reduces motor performance asymmetrically. The biCNN model was constructed to run in real-time and used to control an unstable two-wheeled balancing robot. The load of the robot and its environment were modified to create asymmetrical perturbations. Plasticity at parallel fiber-Purkinje cell synapses in the biCNN model was driven by error signal in the climbing fiber (cf) input. This cf input was configured to increase and decrease its firing rate from its spontaneous firing rate (approximately 1 Hz) with sensory errors in the preferred and non-preferred direction of each hemisphere, as demonstrated in the monkey cerebellum. Our results showed that asymmetrical conditions were successfully handled by the biCNN model, in contrast to a single hemisphere model or a classical non-adaptive proportional and derivative controller. Further, the spontaneous activity of the cf, while relatively small, was critical for balancing the contribution of each cerebellar hemisphere to the overall motor command sent to the robot. Eliminating the spontaneous activity compromised the asymmetrical learning capabilities of the biCNN model. Thus, we conclude that a bi-hemispheric structure and adequate spontaneous activity of cf inputs are critical for cerebellar asymmetrical motor learning. PMID:25414644

  2. ARC-2006-ACD06-0113-012

    NASA Image and Video Library

    2006-06-28

    Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast from Yungay, Chile, via satellite, involving NASA scientists and seven NASA Explorer school teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See full text in NASA-Ames News - Research # 04-91AR. Center Director works with 'SpaceCookie' sending commands to Zoe.

  3. ARC-2006-ACD06-0113-015

    NASA Image and Video Library

    2006-06-28

    Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast from Yungay, Chile, via satellite, involving NASA scientists and seven NASA Explorer school teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See full text in NASA-Ames News - Research # 04-91AR. Center Director works with 'SpaceCookie' sending commands to Zoe.

  4. ARC-2006-ACD06-0113-014

    NASA Image and Video Library

    2006-07-05

    Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast from Yungay, Chile, via satellite, involving NASA scientists and seven NASA Explorer school teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See full text in NASA-Ames News - Research # 04-91AR. Center Director works with 'SpaceCookie' sending commands to Zoe.

  5. ARC-2006-ACD06-0113-013

    NASA Image and Video Library

    2006-06-28

    Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast via satellite from Yungay, Chile, involving NASA scientists and seven NASA Explorer Schools teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See the full text in NASA Ames News, Research #04-91AR. The Center Director works with a 'Space Cookie,' sending commands to Zoe.

  6. Restoring voluntary control of locomotion after paralyzing spinal cord injury.

    PubMed

    van den Brand, Rubia; Heutschi, Janine; Barraud, Quentin; DiGiovanna, Jack; Bartholdi, Kay; Huerlimann, Michèle; Friedli, Lucia; Vollenweider, Isabel; Moraud, Eduardo Martin; Duis, Simone; Dominici, Nadia; Micera, Silvestro; Musienko, Pavel; Courtine, Grégoire

    2012-06-01

    Half of human spinal cord injuries lead to chronic paralysis. Here, we introduce an electrochemical neuroprosthesis and a robotic postural interface designed to encourage supraspinally mediated movements in rats with paralyzing lesions. Despite the interruption of direct supraspinal pathways, the cortex regained the capacity to transform contextual information into task-specific commands to execute refined locomotion. This recovery relied on the extensive remodeling of cortical projections, including the formation of brainstem and intraspinal relays that restored qualitative control over electrochemically enabled lumbosacral circuitries. Automated treadmill-restricted training, which did not engage cortical neurons, failed to promote translesional plasticity and recovery. By encouraging active participation under functional states, our training paradigm triggered a cortex-dependent recovery that may improve function after similar injuries in humans.

  7. STS-98 U.S. Lab Destiny rests in Atlantis' payload bay

    NASA Technical Reports Server (NTRS)

    2001-01-01

    KENNEDY SPACE CENTER, Fla. -- This closeup reveals the tight clearance between an elbow camera on the robotic arm (left) and the U.S. Lab Destiny when the payload bay doors are closed. Measurements of the elbow camera revealed only a one-inch clearance from the U.S. Lab payload, which is under review. A key element in the construction of the International Space Station, Destiny is 28 feet long and weighs 16 tons. Destiny will be attached to the Unity node on the ISS using the Shuttle's robot arm, with the help of the camera. This research and command-and-control center is the most sophisticated and versatile space laboratory ever built. It will ultimately house a total of 23 experiment racks for crew support and scientific research. Destiny will fly on STS-98, the seventh construction flight to the ISS. Launch of STS-98 is scheduled for Jan. 19 at 2:11 a.m. EST.

  8. Spectrally queued feature selection for robotic visual odometry

    NASA Astrophysics Data System (ADS)

    Pirozzo, David M.; Frederick, Philip A.; Hunt, Shawn; Theisen, Bernard; Del Rose, Mike

    2011-01-01

    Over the last two decades, research in Unmanned Vehicles (UV) has rapidly progressed and become more influenced by the biological sciences. Researchers have investigated the mechanical aspects of various species to improve the intrinsic air and ground mobility of UVs, explored the computational aspects of the brain for the development of pattern recognition and decision algorithms, and examined the perception capabilities of numerous animals and insects. This paper describes a 3-month exploratory applied research effort performed at the US Army Research, Development and Engineering Command's (RDECOM) Tank Automotive Research, Development and Engineering Center (TARDEC) in the area of biologically inspired, spectrally augmented feature selection for robotic visual odometry. The motivation for this applied research was to develop a feasibility analysis of multi-spectrally queued feature selection, with improved temporal stability, for the purposes of visual odometry. The intended application is future semi-autonomous Unmanned Ground Vehicle (UGV) control, as the richness of the data sets required to enable human-like behavior in these systems has yet to be defined.

  9. STS-106 Crew Activities Report/Flight Day 04 Highlights

    NASA Technical Reports Server (NTRS)

    2000-01-01

    On this fourth day of the STS-106 Atlantis mission, the flight crew, Commander Terrence W. Wilcutt, Pilot Scott D. Altman, and Mission Specialists Daniel C. Burbank, Edward T. Lu, Richard A. Mastracchio, Yuri Ivanovich Malenchenko, and Boris V. Morukov, are seen preparing for the scheduled space walk. Lu and Malenchenko are seen coming through the hatch of the International Space Station (ISS). Also shown are Lu and Malenchenko attaching a magnetometer and boom to Zvezda. Mastracchio operates the robot arm, moving the extravehicular activity (EVA) crew outside the ISS.

  10. U.S. Commercial Cargo Spacecraft Departs International Space Station

    NASA Image and Video Library

    2018-01-13

    After spending a month at the International Space Station and delivering several tons of supplies and scientific experiments, the SpaceX Dragon cargo craft departed Jan. 13, headed for a parachute-assisted splashdown in the Pacific Ocean southwest of Long Beach, California. Ground controllers at NASA’s Johnson Space Center in Houston sent commands to release Dragon from the Canadarm2 robotic arm while Expedition 54 Flight Engineers Joe Acaba and Scott Tingle of NASA monitored the activity from the station’s cupola. Loaded with scientific samples and other cargo, Dragon was scheduled to conduct a deorbit burn a few hours after its release for its descent back to Earth.

  11. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction

    PubMed Central

    Peña-Tapia, Elena; Martín-Barrio, Andrés; Olivares-Méndez, Miguel A.

    2017-01-01

    Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation. PMID:28749407

  12. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

    ISS040-E-079355 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (foreground), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  13. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

    ISS040-E-079129 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (left), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  14. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

    ISS040-E-079910 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (left), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  15. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

    ISS040-E-079332 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (foreground), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  16. Plan recognition and generalization in command languages with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Yared, Wael I.; Sheridan, Thomas B.

    1991-01-01

    A method for pragmatic inference as a necessary accompaniment to command languages is proposed. The approach taken focuses on the modeling and recognition of the human operator's intent, which relates sequences of domain actions ('plans') to changes in some model of the task environment. The salient feature of this module is that it captures some of the physical and linguistic contextual aspects of an instruction. This provides a basis for generalization and reinterpretation of the instruction in different task environments. The theoretical development is founded on previous work in computational linguistics and some recent models in the theory of action and intention. To illustrate these ideas, an experimental command language to a telerobot is implemented. The program consists of three different components: a robot graphic simulation, the command language itself, and the domain-independent pragmatic inference module. Examples of task instruction processes are provided to demonstrate the benefits of this approach.

  17. Impacts of Advanced Manufacturing Technology on Parametric Estimating

    DTIC Science & Technology

    1989-12-01

    been built (Blois, p. 65). As firms move up the levels of automation, there is a large capital investment to acquire robots, computer numerically...Affordable Acquisition Approach Study, Executive Summary, Air Force Systems Command, Andrews AFB, Maryland, February 9, 1983. Blois, K.J., "Manufacturing

  18. Development of coffee maker service robot using speech and face recognition systems using POMDP

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Meiliana; Santoso Gunawan, Alexander Agung

    2016-07-01

    There have been many efforts to develop intelligent service robots that interact with users naturally. This can be done by embedding speech- and face-recognition abilities for specific tasks in the robot. In this research, we propose an intelligent coffee-maker robot whose speech recognition is based on the Indonesian language and powered by statistical dialogue systems. This kind of robot can be used in an office, supermarket, or restaurant. In our scenario, the robot recognizes the user's face and then accepts commands from the user to perform an action, specifically making a coffee. Based on our previous work, the accuracy of speech recognition is about 86% and of face recognition about 93% in laboratory experiments. The main problem here is determining the user's intention regarding how sweet the coffee should be. The intelligent coffee-maker robot must infer the user's intention through conversation, given unreliable automatic speech recognition in a noisy environment. In this paper, this spoken-dialog problem is treated as a partially observable Markov decision process (POMDP). We describe how this formulation establishes a promising framework and support it with empirical results. Dialog simulations are presented that demonstrate significant quantitative outcomes.
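
    As a concrete illustration of the POMDP treatment, the sketch below (our construction, not the authors' system) keeps a discrete belief over the user's intended sweetness level and updates it with Bayes' rule after each noisy speech-recognition result; the confusion matrix and the confidence threshold are invented for illustration.

      import numpy as np

      levels = ["no_sugar", "less_sweet", "sweet"]
      belief = np.full(len(levels), 1.0 / len(levels))   # uniform prior

      # Hypothetical ASR confusion model: P(observed word | intended level).
      P_obs = np.array([
          [0.7, 0.2, 0.1],
          [0.2, 0.6, 0.2],
          [0.1, 0.2, 0.7],
      ])

      def update_belief(belief, obs_idx):
          posterior = P_obs[:, obs_idx] * belief
          return posterior / posterior.sum()

      for heard in [1, 1, 2]:            # ASR output indices over three turns
          belief = update_belief(belief, heard)
          if belief.max() > 0.8:         # confident enough: stop asking
              break
      print(dict(zip(levels, np.round(belief, 3))))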

  19. Robotic follower experimentation results: ready for FCS increment I

    NASA Astrophysics Data System (ADS)

    Jaczkowski, Jeffrey J.

    2003-09-01

    Robotics is a fundamental enabling technology required to meet the U.S. Army's vision to be a strategically responsive force capable of domination across the entire spectrum of conflict. The U.S. Army Research, Development and Engineering Command (RDECOM) Tank Automotive Research, Development & Engineering Center (TARDEC), in partnership with the U.S. Army Research Laboratory, is developing a leader-follower capability for Future Combat Systems. The Robotic Follower Advanced Technology Demonstration (ATD) utilizes a manned leader to provide high-level proofing of the follower's path, which operates with minimal user intervention. This paper will give a programmatic overview and discuss both the technical approach and the operational experimentation results obtained during testing conducted at Ft. Bliss, New Mexico in February-March 2003.

  20. Evaluation of inertial devices for the control of large, flexible, space-based telerobotic arms

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Kenny, Sean P.; Ghosh, Dave; Shenhar, Joram

    1993-01-01

    Inertial devices, including sensors and actuators, offer the potential of improving the tracking of telerobotic commands for space-based robots by smoothing payload motions and suppressing vibrations. In this paper, inertial actuators (specifically, torque-wheels and reaction-masses) are studied for that potential application. Batch simulation studies are presented which show that torque-wheels can reduce the overshoot in abrupt stop commands by 82 percent for a two-link arm. For man-in-the-loop evaluation, a real-time simulator has been developed which samples a hand-controller, solves the nonlinear equations of motion, and graphically displays the resulting motion on a computer workstation. Currently, two manipulator models, a two-link, rigid arm and a single-link, flexible arm, have been studied. Results are presented which show that, for a single-link arm, a reaction-mass/torque-wheel combination at the payload end can yield a settling time of 3 s for disturbances in the first flexible mode as opposed to 10 s using only a hub motor. A hardware apparatus, which consists of a single-link, highly flexible arm with a hub motor and a torque-wheel, has been assembled to evaluate the concept and is described herein.

  1. Evaluation of inertial devices for the control of large, flexible, space-based telerobotic arms

    NASA Astrophysics Data System (ADS)

    Montgomery, Raymond C.; Kenny, Sean P.; Ghosh, Dave; Shenhar, Joram

    1993-02-01

    Inertial devices, including sensors and actuators, offer the potential of improving the tracking of telerobotic commands for space-based robots by smoothing payload motions and suppressing vibrations. In this paper, inertial actuators (specifically, torque-wheels and reaction-masses) are studied for that potential application. Batch simulation studies are presented which show that torque-wheels can reduce the overshoot in abrupt stop commands by 82 percent for a two-link arm. For man-in-the-loop evaluation, a real-time simulator has been developed which samples a hand-controller, solves the nonlinear equations of motion, and graphically displays the resulting motion on a computer workstation. Currently, two manipulator models, a two-link, rigid arm and a single-link, flexible arm, have been studied. Results are presented which show that, for a single-link arm, a reaction-mass/torque-wheel combination at the payload end can yield a settling time of 3 s for disturbances in the first flexible mode as opposed to 10 s using only a hub motor. A hardware apparatus, which consists of a single-link, highly flexible arm with a hub motor and a torque-wheel, has been assembled to evaluate the concept and is described herein.
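
    The damping mechanism behind these torque-wheel results can be illustrated with a toy single-mode simulation (our sketch; the mode frequency, gains, and settling band are invented, not the paper's model): rate feedback from a payload-mounted torque-wheel adds damping to a lightly damped flexible mode and shortens the settling time.

      import numpy as np

      w0, zeta, b = 2.0 * np.pi * 0.5, 0.01, 1.0  # 0.5 Hz mode, light damping
      dt, T = 1e-3, 20.0

      def settle_time(k):                      # k: wheel rate-feedback gain
          q, qd = 0.1, 0.0                     # initial modal disturbance
          last_outside = 0.0
          for i in range(int(T / dt)):
              u = -k * qd                      # wheel torque opposes modal rate
              qdd = -2 * zeta * w0 * qd - w0**2 * q + b * u
              qd += qdd * dt                   # semi-implicit Euler step
              q += qd * dt
              if abs(q) > 0.002:               # outside the 2% settling band
                  last_outside = (i + 1) * dt
          return last_outside

      print("hub damping only:  >", round(settle_time(0.0), 1), "s (unsettled)")
      print("with torque-wheel: ", round(settle_time(2.0), 2), "s")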

  2. IVA the robot: Design guidelines and lessons learned from the first space station laboratory manipulation system

    NASA Technical Reports Server (NTRS)

    Konkel, Carl R.; Powers, Allen K.; Dewitt, J. Russell

    1991-01-01

    The first interactive Space Station Freedom (SSF) lab robot exhibit was installed at the Space and Rocket Center in Huntsville, AL, and has been running daily since. IntraVehicular Activity (IVA) the robot is mounted in a full-scale U.S. Lab (USL) mockup to educate the public on possible automation and robotics applications aboard the SSF. Responding to audio and video instructions at the Command Console, exhibit patrons may prompt IVA to perform a housekeeping task or give a speaking tour of the module. Other exemplary space station tasks are simulated, and the public can even challenge IVA to a game of tic-tac-toe. In anticipation of such a system being built for the Space Station, a discussion is provided of the approach taken, along with suggestions for applicability to the Space Station environment.

  3. Resource allocation and supervisory control architecture for intelligent behavior generation

    NASA Astrophysics Data System (ADS)

    Shah, Hitesh K.; Bahl, Vikas; Moore, Kevin L.; Flann, Nicholas S.; Martin, Jason

    2003-09-01

    In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of that research, we presented the use of a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. With the growth in the number of available resources on the robot, the variety of generated behaviors and the need for parallel execution of multiple behaviors to achieve reactivity also grew. Continuing our past efforts, in this paper we discuss the parallel execution of behaviors and the management of utilized resources. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. The controlling agents (called behavior-generating agents) generate behaviors to be executed via these services. The agents are prioritized, and then, based on their priority and the availability of requested services, the Control Supervisor decides on a winner for the grant of access to services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.
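
    The arbitration step can be sketched as follows (our illustration; the agent names, priorities, and services are invented): the supervisor walks the pending requests in priority order and grants a behavior access only when every service it needs is still free.

      from dataclasses import dataclass

      @dataclass
      class Agent:
          name: str
          priority: int                  # higher wins
          services: frozenset            # resources the behavior needs

      def arbitrate(requests, free_services):
          granted = []
          for agent in sorted(requests, key=lambda a: -a.priority):
              if agent.services <= free_services:   # all needed services free?
                  granted.append(agent.name)
                  free_services -= agent.services   # serialize access
          return granted

      requests = [
          Agent("teleop_override", 3, frozenset({"drive", "steer"})),
          Agent("waypoint_follow", 2, frozenset({"drive", "steer"})),
          Agent("pan_camera", 1, frozenset({"camera"})),
      ]
      print(arbitrate(requests, {"drive", "steer", "camera"}))
      # ['teleop_override', 'pan_camera']; waypoint_follow waits this cycle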

  4. KSC-2011-5302

    NASA Image and Video Library

    2011-07-08

    CAPE CANAVERAL, Fla. -- In Firing Room 4 of the Launch Control Center at NASA's Kennedy Space Center in Florida, Shuttle Launch Director Mike Leinbach and Bill Dowdell, Payloads Launch Manager and Deputy Director of ISS and Spacecraft Processing at Kennedy, along with launch control team members, watch intently as space shuttle Atlantis lifts off on its STS-135 mission to the International Space Station. Atlantis, with its crew of four (Commander Chris Ferguson, Pilot Doug Hurley, and Mission Specialists Sandy Magnus and Rex Walheim), lifted off at 11:29 a.m. EDT on July 8, 2011 to deliver the Raffaello multi-purpose logistics module packed with supplies and spare parts for the station. Atlantis also will fly the Robotic Refueling Mission experiment, which will investigate the potential for robotically refueling existing satellites in orbit. In addition, Atlantis will return with a failed ammonia pump module to help NASA better understand the failure mechanism and improve pump designs for future systems. STS-135 will be the 33rd flight of Atlantis, the 37th shuttle mission to the space station, and the 135th and final mission of NASA's Space Shuttle Program. For more information, visit www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts135/index.html. Photo credit: NASA/Kim Shiflett

  5. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    PubMed Central

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O.; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M.; Vitiello, Nicola

    2016-01-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver), even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator’s hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements, and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers’ hand movements. PMID:26861333

  6. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation.

    PubMed

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M; Vitiello, Nicola

    2016-02-05

    Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver), even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements, and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements.
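
    A rough sketch of the master-slave retargeting loop (our construction; the joint limits, gain, and force threshold are invented, not HX parameters) maps camera-estimated finger flexion angles to exoskeleton setpoints and slows tracking as the grip sensor reports rising interaction force.

      import numpy as np

      JOINT_MIN = np.zeros(5)            # per-finger flexion limits (rad)
      JOINT_MAX = np.full(5, 1.4)
      FORCE_SOFT_LIMIT = 5.0             # N, assumed grip-sensor threshold

      def slave_setpoint(master_angles, grip_force, prev_setpoint, gain=0.3):
          target = np.clip(master_angles, JOINT_MIN, JOINT_MAX)
          # back off the tracking gain as interaction force grows
          scale = gain * max(0.0, 1.0 - grip_force / FORCE_SOFT_LIMIT)
          return prev_setpoint + scale * (target - prev_setpoint)

      setpoint = np.zeros(5)
      for master, force in [(np.full(5, 1.0), 0.5), (np.full(5, 1.2), 4.0)]:
          setpoint = slave_setpoint(master, force, setpoint)
      print(np.round(setpoint, 3))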

  7. Intelligent Autonomy for Unmanned Surface and Underwater Vehicles

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Woodward, Gail

    2011-01-01

    As Autonomous Underwater Vehicle (AUV) and Autonomous Surface Vehicle (ASV) platforms mature in endurance and reliability, a natural evolution will occur towards longer, more remote autonomous missions. This evolution will require the development of key capabilities that allow these robotic systems to perform a high level of on-board decision-making, which would otherwise be performed by human operators. With more decision-making capability, less a priori knowledge of the area of operations would be required, as these systems would be able to sense and adapt to changing environmental conditions such as unknown topography, currents, obstructions, bays, harbors, islands, and river channels. Existing vehicle sensors would be dual-use; that is, they would be utilized for the primary mission, which may be mapping or hydrographic reconnaissance, as well as for autonomous hazard avoidance, route planning, and bathymetric-based navigation. This paper describes a tightly integrated instantiation of an autonomous agent called CARACaS (Control Architecture for Robotic Agent Command and Sensing), developed at JPL (Jet Propulsion Laboratory), that was designed to address many of the issues for survivable ASV/AUV control and to provide adaptive mission capabilities. The results of some on-water tests with US Navy technology test platforms are also presented.

  8. BGen: A UML Behavior Network Generator Tool

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Reder, Leonard J.; Balian, Harry

    2010-01-01

    BGen software was designed for autogeneration of code based on a graphical representation of a behavior network used for controlling automatic vehicles. A common format for describing a behavior network, such as that used in the JPL-developed behavior-based control system CARACaS ["Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40], includes a graph with sensory inputs flowing through the behaviors in order to generate the signals for the actuators that drive and steer the vehicle. A computer program that translates Unified Modeling Language (UML) Freeform Implementation Diagrams into a legacy C implementation of a behavior network has been developed in order to simplify the development of C code for behavior-based control systems. UML is a popular standard developed by the Object Management Group (OMG) to model software architectures graphically. The C implementation of a behavior network functions as a decision tree.

  9. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide the capability for knowledge-based, 'hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as 'Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned ('choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  10. Intuitive wireless control of a robotic arm for people living with an upper body disability.

    PubMed

    Fall, C L; Turgeon, P; Campeau-Lecours, A; Maheu, V; Boukadoum, M; Roy, S; Massicotte, D; Gosselin, C; Gosselin, B

    2015-08-01

    Assistive Technologies (ATs), also called extrinsic enablers, are useful tools for people living with various disabilities. The key points when designing such devices concern not only their intended goal but also the most suitable human-machine interface (HMI) to provide to users. This paper describes the design of a highly intuitive wireless controller for people living with upper-body disabilities who retain residual or complete control of their neck and shoulders. Tested with JACO, a six-degree-of-freedom (6-DOF) assistive robotic arm with 3 flexible fingers on its end-effector, the system described in this article is made of low-cost commercial off-the-shelf components and allows a full emulation of JACO's standard controller, a 3-axis joystick with 7 user buttons. To do so, three nine-degree-of-freedom (9-DOF) inertial measurement units (IMUs) are connected to a microcontroller and help measure the user's head and shoulder positions, using a complementary filter approach. The results are then transmitted to a base station via a 2.4-GHz low-power wireless transceiver and interpreted by the control algorithm running on a PC host. A dedicated software interface allows the user to quickly calibrate the controller and translates the information into suitable commands for JACO. The proposed controller is thoroughly described, from the electronic design to the implemented algorithms and user interfaces. Its performance and future improvements are discussed as well.
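
    The head-orientation estimate behind such a controller is typically produced by a complementary filter; the sketch below (ours, on synthetic data, with illustrative alpha and dt rather than the paper's values) fuses one gyro axis with an accelerometer tilt estimate, rejecting the gyro's constant bias.

      import math

      dt, alpha = 0.01, 0.98
      angle = 0.0                        # filtered pitch estimate (rad)

      def complementary(angle, gyro_rate, ax, az):
          accel_angle = math.atan2(ax, az)          # gravity-based tilt
          # high-pass the integrated gyro, low-pass the accelerometer
          return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

      for k in range(200):               # head pitching forward at 0.3 rad/s
          true = 0.3 * k * dt
          gyro = 0.3 + 0.05              # true rate plus a constant bias
          ax, az = math.sin(true), math.cos(true)
          angle = complementary(angle, gyro, ax, az)
      print(round(angle, 3), "rad estimated vs", round(0.3 * 199 * dt, 3), "rad")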

  11. State-space control of prosthetic hand shape.

    PubMed

    Velliste, M; McMorland, A J C; Diril, E; Clanton, S T; Schwartz, A B

    2012-01-01

    In the field of neuroprosthetic control, there is an emerging need for simplified control of high-dimensional devices. Advances in robotic technology have led to the development of prosthetic arms that now approach the look and number of degrees of freedom (DoF) of a natural arm. These arms, and especially hands, now have more controllable DoFs than the number of control DoFs available in many applications. In natural movements, high correlations exist between multiple joints, such as finger flexions. Therefore, discrepancy between the number of control and effector DoFs can be overcome by a control scheme that maps low-DoF control space to high-DoF joint space. Imperfect effectors, sensor noise and interactions with external objects require the use of feedback controllers. The incorporation of feedback in a system where the command is in a different space, however, is challenging, requiring a potentially difficult inverse high-DoF to low-DoF transformation. Here we present a solution to this problem based on the Extended Kalman Filter.
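
    The core idea, fusing high-DoF joint measurements back into a low-DoF control state without an explicit high-to-low inverse, can be shown with a linear Kalman filter (the paper uses an Extended Kalman Filter; the linear case below, with an invented synergy matrix and dimensions, exposes the mechanics).

      import numpy as np

      n_x, n_y = 2, 10                   # 2 synergy DoFs drive 10 joint DoFs
      rng = np.random.default_rng(0)
      H = rng.normal(size=(n_y, n_x))    # synergy matrix: low-DoF -> joints
      Q = 1e-4 * np.eye(n_x)             # process noise on the synergy state
      R = 1e-2 * np.eye(n_y)             # joint-sensor noise

      x_true = np.array([0.8, -0.3])     # commanded shape in synergy space
      x_est, P = np.zeros(n_x), np.eye(n_x)

      for _ in range(50):
          y_meas = H @ x_true + rng.normal(scale=0.1, size=n_y)  # noisy joints
          P = P + Q                                  # predict (state constant)
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
          x_est = x_est + K @ (y_meas - H @ x_est)   # high-DoF -> low-DoF fusion
          P = (np.eye(n_x) - K @ H) @ P

      print("estimated synergy state:", np.round(x_est, 3))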

  12. Design and validation of an MR-conditional robot for transcranial focused ultrasound surgery in infants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Karl D., E-mail: karl.price@sickkids.ca

    Purpose: Current treatment of intraventricular hemorrhage (IVH) involves cerebral shunt placement or an invasive brain surgery. Magnetic resonance-guided focused ultrasound (MRgFUS) applied to the brains of pediatric patients presents an opportunity to treat IVH in a noninvasive manner, termed “incision-less surgery.” Current clinical and research focused ultrasound systems lack the capability to perform neonatal transcranial surgeries due to either range-of-motion or dexterity requirements. A novel robotic system is proposed to position a focused ultrasound transducer accurately above the head of a neonatal patient inside an MRI machine to deliver the therapy. Methods: A clinical Philips Sonalleve MRgFUS system was expanded to perform transcranial treatment. A five-degree-of-freedom MR-conditional robot was designed and manufactured using MR-compatible materials. The robot electronics and control were integrated into existing Philips electronics and software interfaces. The user commands the position of the robot with a graphical user interface, and is presented with real-time MR imaging of the patient throughout the surgery. The robot is validated through a series of experiments that characterize accuracy, signal-to-noise ratio degradation of an MR image as a result of the robot, MR imaging artifacts generated by the robot, and the robot’s ability to operate in a representative surgical environment inside an MR machine. Results: Experimental results show the robot responds reliably within an MR environment, has achieved 0.59 ± 0.25 mm accuracy, does not produce severe MR-imaging artifacts, has a workspace providing sufficient coverage of a neonatal brain, and can manipulate a 5 kg payload. A full system demonstration shows these characteristics apply in an application environment. Conclusions: This paper presents a comprehensive look at the process of designing and validating a new robot, from concept to implementation, for use in an MR environment. An MR-conditional robot has been designed and manufactured to design specifications. The system has demonstrated its feasibility as a platform for MRgFUS interventions for neonatal patients. The success of the system in experimental trials suggests that it is ready to be used for validation of the transcranial intervention in animal studies.

  13. Whole-arm tactile sensing for beneficial and acceptable contact during robotic assistance.

    PubMed

    Grice, Phillip M; Killpack, Marc D; Jain, Advait; Vaish, Sarvagya; Hawke, Jeffrey; Kemp, Charles C

    2013-06-01

    Many assistive tasks involve manipulation near the care-receiver's body, including self-care tasks such as dressing, feeding, and personal hygiene. A robot can provide assistance with these tasks by moving its end effector to poses near the care-receiver's body. However, perceiving and maneuvering around the care-receiver's body can be challenging due to a variety of issues, including convoluted geometry, compliant materials, body motion, hidden surfaces, and the object upon which the body is resting (e.g., a wheelchair or bed). Using geometric simulations, we first show that an assistive robot can achieve a much larger percentage of end-effector poses near the care-receiver's body if its arm is allowed to make contact. Second, we present a novel system with a custom controller and whole-arm tactile sensor array that enables a Willow Garage PR2 to regulate contact forces across its entire arm while moving its end effector to a commanded pose. We then describe tests with two people with motor impairments, one of whom used the system to grasp and pull a blanket over himself and to grab a cloth and wipe his face, all while in bed at his home. Finally, we describe a study with eight able-bodied users in which they used the system to place objects near their bodies. On average, users perceived the system to be safe and comfortable, even though substantial contact occurred between the robot's arm and the user's body.
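
    A crude stand-in for such contact-regulated reaching (our sketch; the real controller regulates forces across the whole arm, and the limits here are invented) shrinks each end-effector step as the worst taxel force approaches a threshold.

      import numpy as np

      F_LIMIT = 5.0            # N, allowed per-taxel force (illustrative)
      STEP = 0.01              # m, nominal step toward the commanded pose

      def next_position(pos, goal, taxel_forces):
          worst = max(taxel_forces)
          # scale motion down linearly as the worst contact nears the limit
          scale = max(0.0, 1.0 - worst / F_LIMIT)
          direction = goal - pos
          dist = np.linalg.norm(direction)
          if dist < 1e-9:
              return pos
          return pos + scale * STEP * direction / dist

      pos, goal = np.zeros(3), np.array([0.3, 0.0, 0.1])
      for forces in ([0.1, 0.2], [2.5, 4.0], [5.5, 1.0]):   # rising contact
          pos = next_position(pos, goal, forces)
      print(np.round(pos, 4))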

  14. Development of a vision non-contact sensing system for telerobotic applications

    NASA Astrophysics Data System (ADS)

    Karkoub, M.; Her, M.-G.; Ho, M.-I.; Huang, C.-C.

    2013-08-01

    The study presented here describes a novel vision-based motion detection system for telerobotic operations such as distant surgical procedures. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Colour tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm. The motion of the colour tags is used to actuate a slave robot or a remote system. The colour tags' motion is determined through image processing using eigenvectors and colour-system morphology, and the relative head, shoulder, and wrist rotation angles are obtained through inverse dynamics and coordinate transformation. A program transforms this motion data into motor control commands and transmits them to a slave robot or remote system over wireless internet. The system performed well even in complex environments, with errors that did not exceed 2 pixels and a response time of about 0.1 s. The results of the experiments are available at: http://www.youtube.com/watch?v=yFxLaVWE3f8 and http://www.youtube.com/watch?v=_nvRcOzlWHw
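
    A minimal version of the colour-tag pipeline (our sketch, not the authors' eigenvector-based method; the HSV thresholds and gain are invented) segments a tag by colour range, takes the mask centroid from image moments, and converts the horizontal offset into a proportional slave command.

      import numpy as np
      import cv2

      def tag_command(frame_bgr, hsv_lo=(100, 120, 80), hsv_hi=(130, 255, 255),
                      gain=0.005):
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
          m = cv2.moments(mask)
          if m["m00"] == 0:
              return 0.0                       # tag not visible: stop
          cx = m["m10"] / m["m00"]             # centroid column (pixels)
          error = cx - frame_bgr.shape[1] / 2  # offset from image centre
          return -gain * error                 # proportional motor command

      # Synthetic 480x640 frame with a blue square tag right of centre
      frame = np.zeros((480, 640, 3), np.uint8)
      frame[200:240, 420:460] = (255, 0, 0)    # BGR blue
      print(round(tag_command(frame), 3))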

  15. Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie

    2003-01-01

    A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and the robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions, making operations safer and more efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, a field observation record, and a rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.

  16. Generalization in Adaptation to Stable and Unstable Dynamics

    PubMed Central

    Kadiallah, Abdelhamid; Franklin, David W.; Burdet, Etienne

    2012-01-01

    Humans skillfully manipulate objects and tools despite the inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and the mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how to learn feedforward muscle activity as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient-descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model of human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization. PMID:23056191
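
    A toy trial-by-trial version of the model's two channels (our sketch; the scalar plant and all gains are invented, and the actual model learns muscle activations over a state space) adapts feedforward force along the signed error while co-activation grows with error magnitude and otherwise decays, mimicking the error-and-effort trade-off.

      def adapt(trials, disturbance, alpha=0.6, beta=0.4, decay=0.05):
          u_ff, stiffness = 0.0, 1.0
          for _ in range(trials):
              error = (disturbance - u_ff) / stiffness  # stiffer -> less error
              u_ff += beta * error                      # force adaptation
              stiffness += alpha * abs(error) - decay * stiffness  # impedance
              stiffness = max(stiffness, 1.0)           # keep stability margin
          return round(u_ff, 2), round(stiffness, 2)

      print("small (stable-like) load:  ", adapt(60, disturbance=0.5))
      print("large (unstable-like) load:", adapt(60, disturbance=2.0))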

  17. NASREN: Standard reference model for telerobot control

    NASA Technical Reports Server (NTRS)

    Albus, J. S.; Lumia, R.; Mccain, H.

    1987-01-01

    A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
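
    The task-decomposition idea can be sketched as a short pipeline (our illustration; the level roles follow the abstract, but the decompositions and world-model entries are invented): each level splits its command into simpler commands for the level below, consulting a shared world model that a sensory process keeps current.

      world_model = {"gripper_at": (0.0, 0.0), "object_at": (0.4, 0.2)}

      def task_level(cmd):                   # top level: a whole task
          return [("move_to", world_model["object_at"]), ("grasp",)]

      def motion_level(subtask):             # middle level: elemental moves
          if subtask[0] == "move_to":
              x, y = subtask[1]
              return [("joint_setpoint", x, y)]  # a trajectory in practice
          return [subtask]

      def servo_level(primitive):            # lowest level: actuator drives
          return f"drive-signal<{primitive}>"

      for subtask in task_level("fetch object"):
          for primitive in motion_level(subtask):
              print(servo_level(primitive))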

  18. On the role of exchange of power and information signals in control and stability of the human-robot interaction

    NASA Technical Reports Server (NTRS)

    Kazerooni, H.

    1991-01-01

    A human's ability to perform physical tasks is limited, not only by his intelligence, but by his physical strength. If, in an appropriate environment, a machine's mechanical power is closely integrated with a human arm's mechanical power under the control of the human intellect, the resulting system will be superior to a loosely integrated combination of a human and a fully automated robot. Therefore, we must develop a fundamental solution to the problem of 'extending' human mechanical power. The work presented here defines 'extenders' as a class of robot manipulators worn by humans to increase human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The human, in physical contact with the extender, exchanges power and information signals with the extender. The aim is to determine the fundamental building blocks of an intelligent controller, a controller which allows interaction between humans and a broad class of computer-controlled machines via simultaneous exchange of both power and information signals. The prevalent trend in automation has been to physically separate the human from the machine, so the human must always send information signals via an intermediary device (e.g., joystick, pushbutton, light switch). Extenders, however, are perfect examples of self-powered machines that are built and controlled for the optimal exchange of power and information signals with humans. The human wearing the extender is in physical contact with the machine, so power transfer is unavoidable, and information signals from the human help to control the machine. Commands are transferred to the extender via the contact forces and the EMG signals between the wearer and the extender. The extender augments human motor ability without accepting any explicit commands: it accepts the EMG signals and the contact force between the person's arm and the extender, and 'translates' them into a desired position. In this unique configuration, mechanical power transfer between the human and the extender occurs because the human is pushing against the extender. The extender transfers to the human's hand, in feedback fashion, a scaled-down version of the actual external load which the extender is manipulating. This natural feedback force on the human's hand allows him to 'feel' a modified version of the external forces on the extender. The information signals from the human (e.g., EMG signals) to the computer reflect human cognitive ability, and the power transfer between the human and the machine (e.g., physical interaction) reflects human physical ability. Thus the information transfer to the machine augments cognitive ability, and the power transfer augments motor ability. These two actions are coupled through the human cognitive/motor dynamic behavior. The goal is to derive control rules for a class of computer-controlled machines that augment human physical and cognitive abilities in certain manipulative tasks.
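
    An admittance-style reading of the extender loop (our sketch; every gain is illustrative and the real controller works from a richer model of human dynamics) turns the wearer's contact force and an EMG-derived effort estimate into desired motion, while feeding a scaled-down version of the external load back to the wearer's hand.

      def extender_step(x, f_human, f_external, emg_level,
                        admittance=0.02, load_scale=0.1, emg_gain=0.01, dt=0.01):
          # command: contact force plus EMG intent set the desired velocity
          v_des = admittance * f_human + emg_gain * emg_level
          x_next = x + v_des * dt
          # feedback: the wearer feels only a fraction of the external load
          f_felt = load_scale * f_external
          return x_next, f_felt

      x = 0.0
      for _ in range(100):                   # wearer pushes steadily on the arm
          x, f_felt = extender_step(x, f_human=20.0, f_external=200.0,
                                    emg_level=5.0)
      print(round(x, 3), "m moved; wearer feels", f_felt, "N of a 200 N load")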

  19. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides three-dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones, each zone corresponding to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include panning, tilting, sliding, raising, or lowering of the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment. Such devices include a pair of visual display glasses, a microphone, and a remote actuator. The pair of visual display glasses facilitates three-dimensional viewing, and hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
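
    The zone map can be sketched as a simple lookup from head displacement to camera pan speed (our illustration; the zone boundaries and speeds are invented, not taken from the patent).

      def pan_speed(head_offset_deg):
          mag = abs(head_offset_deg)
          sign = 1 if head_offset_deg >= 0 else -1
          if mag < 2.0:             # zone 1: dead band, camera holds position
              return 0.0
          if mag < 10.0:            # zone 2: slow constant pan
              return sign * 2.0     # deg/s
          if mag < 25.0:            # zone 3: speed grows with displacement
              return sign * (2.0 + 0.8 * (mag - 10.0))
          return sign * 14.0        # zone 4: capped maximum speed

      for offset in (1.0, 5.0, 18.0, 40.0):
          print(offset, "->", pan_speed(offset), "deg/s")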

  20. Hybrid Exploration Agent Platform and Sensor Web System

    NASA Technical Reports Server (NTRS)

    Stoffel, A. William; VanSteenberg, Michael E.

    2004-01-01

    A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. The Hybrid Exploration Agent Platform and Sensor Web, or HEAP-SW, is, like the ANTS concept, a sensor web concept; conceptually and practically, however, it is a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low-cost, high-return solution to a complex problem. All of the technology for initial development and implementation is currently available. The HEAP Sensor Web (HEAP-SW) consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW), and the immobile Data collection and Uplink units (DUs). HEAP-SW as a whole refers to any group of mobile agents, or robots, where each robot is a mobile data-collection unit that spends most of its time acting in concert with all other robots, the DUs in the web, and the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; and 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.
