Sample records for sensor-based robot control

  1. Method and System for Controlling a Dexterous Robot Execution Sequence Using State Classification

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Quillin, Nathaniel (Inventor); Platt, Robert J., Jr. (Inventor); Pfeiffer, Joseph (Inventor); Permenter, Frank Noble (Inventor)

    2014-01-01

    A robotic system includes a dexterous robot and a controller. The robot includes a plurality of robotic joints, actuators for moving the joints, and sensors for measuring a characteristic of the joints, and for transmitting the characteristics as sensor signals. The controller receives the sensor signals, and is configured for executing instructions from memory, classifying the sensor signals into distinct classes via the state classification module, monitoring a system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the system state. A method for controlling the robot in the above system includes receiving the signals via the controller, classifying the signals using the state classification module, monitoring the present system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the present system state.
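The classify-then-select loop described above can be sketched in a few lines of Python. Everything here is illustrative: the state names, thresholds, and task table are invented for the example, not taken from the patent.

```python
# Hypothetical sketch of state classification driving task selection.
# Thresholds, state names, and the task table are invented for illustration.

def classify_state(sensor_signals, thresholds=(0.2, 0.8)):
    """Map a vector of normalized joint torque readings to a coarse state."""
    peak = max(abs(s) for s in sensor_signals)
    if peak < thresholds[0]:
        return "free_motion"
    if peak < thresholds[1]:
        return "light_contact"
    return "firm_grasp"

# Each classified system state selects an alternative work task.
TASK_TABLE = {
    "free_motion": "approach_object",
    "light_contact": "close_fingers",
    "firm_grasp": "lift_object",
}

def select_task(sensor_signals):
    return TASK_TABLE[classify_state(sensor_signals)]
```

A real controller would run this loop continuously, re-classifying on every sensor update and switching tasks as the state changes.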

  2. Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    PubMed Central

    Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando

    2009-01-01

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146

  3. Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung

    2018-03-05

    In order to properly control rehabilitation robotic devices, the measurement of interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires the use of accurate sensors which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has performance similar to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can replace accurate but normally high-priced sensors, which are fundamental components for advanced control of rehabilitation robotic devices.
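A minimal sketch of the virtual-sensor idea, assuming a linear spring-damper contact model with made-up stiffness and damping values (the UHP's actual model is more elaborate):

```python
# Illustrative virtual force sensor: estimate contact force from low-cost
# position measurements plus a simple model. The spring-damper model and
# its parameters are assumptions, not the UHP's real dynamics.

def virtual_force(x, x_prev, dt, k=800.0, b=15.0):
    """Estimate interaction force (N) from position x (m) sampled every dt (s)."""
    velocity = (x - x_prev) / dt      # finite-difference velocity estimate
    return k * x + b * velocity       # model-based force estimate
```

The estimate can then feed a position/force controller exactly as a hardware force sensor's reading would.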

  4. An interactive control algorithm used for equilateral triangle formation with robotic sensors.

    PubMed

    Li, Xiang; Chen, Hongcai

    2014-04-22

    This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used by three neighboring robotic sensors, distributed randomly, to self-organize into an equilateral triangle (E) formation. The algorithm is based on triangular geometry and considers the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by the robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out to verify the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can self-organize into E formations successfully, regardless of their initial distribution, using the same TFA.
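The self-organizing behavior can be illustrated with a simple distance-error update. This is a stand-in inspired by the abstract, not the paper's exact TFA: each robot nudges itself so that its distances to both neighbors approach the desired side length.

```python
# Toy formation update (not the paper's TFA): move each robot along the
# line to each neighbor, proportionally to the side-length error.
import math

def step(p, neighbors, d=1.0, gain=0.2):
    """One update of robot position p = (x, y) toward an equilateral triangle."""
    dx = dy = 0.0
    for q in neighbors:
        ex, ey = q[0] - p[0], q[1] - p[1]
        dist = math.hypot(ex, ey)
        if dist > 0:
            err = dist - d                 # positive: too far, move closer
            dx += gain * err * ex / dist
            dy += gain * err * ey / dist
    return (p[0] + dx, p[1] + dy)
```

Iterating this rule from a generic (non-collinear) initial distribution drives all three pairwise distances toward d, mirroring the convergence claim in the abstract.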

  5. An Interactive Control Algorithm Used for Equilateral Triangle Formation with Robotic Sensors

    PubMed Central

    Li, Xiang; Chen, Hongcai

    2014-01-01

    This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used by three neighboring robotic sensors, distributed randomly, to self-organize into an equilateral triangle (E) formation. The algorithm is based on triangular geometry and considers the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by the robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out to verify the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can self-organize into E formations successfully, regardless of their initial distribution, using the same TFA. PMID:24759118

  6. Integration of a sensor based multiple robot environment for space applications: The Johnson Space Center Teleoperator Branch Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Hwang, James; Campbell, Perry; Ross, Mike; Price, Charles R.; Barron, Don

    1989-01-01

    An integrated operating environment was designed to incorporate three general-purpose robots, sensors, and end effectors, including force/torque sensors, tactile array sensors, tactile force sensors, and force-sensing grippers. The design and implementation of: (1) the teleoperation of a general-purpose PUMA robot; (2) an integrated sensor hardware/software system; (3) the force-sensing gripper control; (4) the host computer system for dual Robotic Research arms; and (5) the Ethernet integration are described.

  7. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  8. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. The conclusion is that the algorithm of the single-camera method needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.

  9. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration

    PubMed Central

    Losada, Diego P.; Fernández, Joaquín L.; Paz, Enrique; Sanz, Rafael

    2017-01-01

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead. PMID:28467381

  10. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration.

    PubMed

    Losada, Diego P; Fernández, Joaquín L; Paz, Enrique; Sanz, Rafael

    2017-05-03

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead.
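A sketch of how one sensor reading might be framed for transmission on such a bus. The identifier range check and payload layout below are assumptions for illustration, based only on the CAN 2.0A limits (11-bit identifier, at most 8 data bytes), not the article's actual protocol.

```python
# Hypothetical CAN framing for a sensor reading: float32 value plus a
# uint32 millisecond timestamp fills the 8-byte CAN data field exactly.
import struct

def pack_sensor_frame(can_id, sensor_value, timestamp_ms):
    """Return (identifier, payload) for a standard CAN 2.0A data frame."""
    if not 0 <= can_id <= 0x7FF:
        raise ValueError("standard CAN uses an 11-bit identifier")
    payload = struct.pack("<fI", sensor_value, timestamp_ms % (1 << 32))
    assert len(payload) <= 8          # CAN data field holds at most 8 bytes
    return can_id, payload

def unpack_sensor_frame(payload):
    """Recover (value, timestamp_ms) from a received payload."""
    value, ts = struct.unpack("<fI", payload)
    return value, ts
```

Keeping each reading within a single frame avoids fragmentation, which is one way a protocol like the one described can stay lightweight and avoid transmission overhead.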

  11. The Design of Artificial Intelligence Robot Based on Fuzzy Logic Controller Algorithm

    NASA Astrophysics Data System (ADS)

    Zuhrie, M. S.; Munoto; Hariadi, E.; Muslim, S.

    2018-04-01

    The Artificial Intelligence Robot is a wheeled robot driven by a DC motor that moves along a wall using an ultrasonic sensor to detect obstacles. This study uses HC-SR04 ultrasonic sensors to measure the distance between the robot and the wall based on ultrasonic waves. The robot uses a Fuzzy Logic Controller to adjust the speed of the DC motor. When the ultrasonic sensor detects a certain distance, the sensor data is processed on an ATmega8 and then passed to an ATmega16, where it is evaluated against the fuzzy rules to drive the DC motor speed. The program used to adjust the speed of the DC motor is CodeVisionAVR (CVAVR). The readable distance of the ultrasonic sensor is 3 cm to 250 cm with a response time of 0.5 s. Testing the robot on walls with a setpoint value of 9 cm to 10 cm produces an average error value of -12% on the L-shaped wall, -8% on the T-shaped wall, -8% on the U-shaped wall, and -1% on the square wall.
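The fuzzify-infer-defuzzify pipeline of such a controller can be sketched with triangular membership functions. The membership shapes and rule consequents below are illustrative guesses, not the paper's tuned rule base:

```python
# Minimal fuzzy speed controller sketch for a wall-follower.
# Memberships and rule outputs are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def motor_speed(distance_cm, setpoint_cm=9.5):
    """Defuzzified motor command (duty-cycle %) from the wall-distance error."""
    e = distance_cm - setpoint_cm
    near = tri(e, -6.0, -3.0, 0.0)    # too close to the wall
    ok = tri(e, -3.0, 0.0, 3.0)       # at the setpoint
    far = tri(e, 0.0, 3.0, 6.0)       # too far from the wall
    # Rule consequents: slow down when near, cruise when ok, speed up when far.
    num = near * 30.0 + ok * 60.0 + far * 90.0
    den = near + ok + far
    return num / den if den else 60.0  # centroid-style weighted average
```

At the setpoint only the "ok" rule fires and the cruise speed is returned; between rules, the weighted average blends the two active consequents smoothly.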

  12. Computer hardware and software for robotic control

    NASA Technical Reports Server (NTRS)

    Davis, Virgil Leon

    1987-01-01

    The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.

  13. Experimental Robot Position Sensor Fault Tolerance Using Accelerometers and Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. The proposed method uses joint torque sensors found in most existing advanced robot designs along with easily locatable, lightweight accelerometers to provide a joint position sensor fault recovery mode. This mode uses the torque sensors along with a virtual passive control law for stability and accelerometers for joint position information. Two methods for conversion from Cartesian acceleration to joint position based on robot kinematics, not integration, are presented. The fault tolerant control method was tested on several joints of a laboratory robot. The controllers performed well with noisy, biased data and a model with uncertain parameters.
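The "kinematics, not integration" idea can be illustrated for a single revolute joint: during slow motion, the gravity direction measured by a link-mounted accelerometer yields the joint angle directly, with no integration and therefore no drift. A hedged one-joint sketch (the paper's methods handle the general multi-joint Cartesian case):

```python
# One-joint illustration: recover joint position from accelerometer data
# via kinematics rather than integration. Assumes quasi-static motion so
# the accelerometer reading is dominated by gravity.
import math

def joint_angle_from_accel(ax, ay):
    """Joint angle (rad) from gravity components measured in the link frame."""
    return math.atan2(ay, ax)
```

Because the angle comes from an instantaneous measurement, a bias in the accelerometer shifts the estimate but does not accumulate over time, which is the property the fault-recovery mode relies on.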

  14. Robustness of a distributed neural network controller for locomotion in a hexapod robot

    NASA Technical Reports Server (NTRS)

    Chiel, Hillel J.; Beer, Randall D.; Quinn, Roger D.; Espenschied, Kenneth S.

    1992-01-01

    A distributed neural-network controller for locomotion, based on insect neurobiology, has been used to control a hexapod robot. How robust is this controller? Disabling any single sensor, effector, or central component did not prevent the robot from walking. Furthermore, statically stable gaits could be established using either sensor input or central connections. Thus, a complex interplay between central neural elements and sensor inputs is responsible for the robustness of the controller and its ability to generate a continuous range of gaits. These results suggest that biologically inspired neural-network controllers may be a robust method for robotic control.

  15. FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links.

    PubMed

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the parameters of interest, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).

  16. FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links

    PubMed Central

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the parameters of interest, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345

  17. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  18. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help the users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of the software development is increased, the reliability of the software and the operation safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  19. Robust Operation of Tendon-Driven Robot Fingers Using Force and Position-Based Control Laws

    NASA Technical Reports Server (NTRS)

    Hargrave, Brian (Inventor); Abdallah, Muhammad E (Inventor); Reiland, Matthew J (Inventor); Diftler, Myron A (Inventor); Strawser, Philip A (Inventor); Platt, Jr., Robert J. (Inventor); Ihrke, Chris A. (Inventor)

    2013-01-01

    A robotic system includes a tendon-driven finger and a control system. The system controls the finger via a force-based control law when a tension sensor is available, and via a position-based control law when a sensor is not available. Multiple tendons may each have a corresponding sensor. The system selectively injects a compliance value into the position-based control law when only some sensors are available. A control system includes a host machine and a non-transitory computer-readable medium having a control process, which is executed by the host machine to control the finger via the force- or position-based control law. A method for controlling the finger includes determining the availability of a tension sensor(s), and selectively controlling the finger, using the control system, via the force or position-based control law. The position control law allows the control system to resist disturbances while nominally maintaining the initial state of internal tendon tensions.

  20. Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.

    PubMed

    Felt, Wyatt; Chin, Khai Yi; Remy, C David

    2017-09-01

    This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
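A sketch of the sensing-plus-control chain, assuming a linear inductance-to-contraction calibration (the real Smart Braid map is nonlinear) and a generic PI loop standing in for the article's controller. All parameter values are invented:

```python
# Hedged sketch: inductance-based contraction sensing feeding a PI loop.
# The linear calibration and all gains are illustrative assumptions.

def contraction_from_inductance(L_uH, L0_uH=10.0, slope_uH_per_mm=-0.2):
    """Estimate muscle contraction (mm) from measured inductance (uH)."""
    return (L_uH - L0_uH) / slope_uH_per_mm

def make_pi(kp=2.0, ki=0.5, dt=0.01):
    """Build a PI step function closing the loop on sensed contraction."""
    state = {"i": 0.0}
    def step(target_mm, L_uH):
        err = target_mm - contraction_from_inductance(L_uH)
        state["i"] += err * dt
        return kp * err + ki * state["i"]   # actuation command
    return step
```

The point of the article is that the feedback signal here comes from a sensor embedded in the actuator itself, so the same loop structure works outside a motion-capture laboratory.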

  1. Simulation and animation of sensor-driven robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.; Trivedi, M.M.; Bidlack, C.R.

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aide the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help the usersmore » visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of the software development is increased, the reliability of the software and the operation safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack of capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.« less

  2. Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case

    PubMed Central

    Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique

    2017-01-01

    This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
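The event-triggering rule (request a measurement only when the predicted error covariance exceeds a bound) can be shown in one dimension, with a scalar Kalman update standing in for the paper's Unscented estimator. Noise levels and the bound are invented for the example:

```python
# 1-D sketch of event-based estimation: predict open loop, and contact the
# sensor only when the predicted variance exceeds p_max. The scalar Kalman
# update is a simplified stand-in for the paper's Unscented transformation.

def run(measure, steps, q=0.04, r=0.01, p_max=0.1):
    """Return (final estimate, number of measurement requests)."""
    x, p = 0.0, 0.0
    requests = 0
    for _ in range(steps):
        p += q                      # prediction step inflates uncertainty
        if p > p_max:               # event: covariance bound exceeded
            requests += 1
            z = measure()           # communication happens only here
            k = p / (p + r)         # standard Kalman gain and update
            x += k * (z - x)
            p *= (1 - k)
        # otherwise: keep predicting, no sensor access
    return x, requests
```

With these numbers the estimator contacts the sensor on only a fraction of the steps while the estimate still converges to the measured value, which is the communication-reduction effect the paper quantifies.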

  3. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    NASA Astrophysics Data System (ADS)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot carries out inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors, has two arms, two wheels and two claws, and is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and declining. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed in Visual C++ was developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed in the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute navigation and inspection tasks.

  4. Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)

    NASA Technical Reports Server (NTRS)

    Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey

    1990-01-01

    Ongoing applied research is focused on developing guidance system for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of its sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and necessary rules that control both the vehicle's use of an obstacle detecting sensor and the overall exploration process. The initial phase project has focused on the simulation of a point robot vehicle operating in a 2D environment.

  5. Robotics technology discipline

    NASA Technical Reports Server (NTRS)

    Montemerlo, Melvin D.

    1990-01-01

    Viewgraphs on robotics technology discipline for Space Station Freedom are presented. Topics covered include: mechanisms; sensors; systems engineering processes for integrated robotics; man/machine cooperative control; 3D-real-time machine perception; multiple arm redundancy control; manipulator control from a movable base; multi-agent reasoning; and surfacing evolution technologies.

  6. A neuro-inspired spike-based PID motor controller for multi-motor robots with low cost FPGAs.

    PubMed

    Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J; Paz-Vicente, Rafael; Civit-Balcells, Anton

    2012-01-01

    In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented on FPGAs. This controller is focused on controlling DC motor speed, using only spikes for information representation, processing and DC motor driving. It could be applied to other motors with proper driver adaptation. This controller architecture represents one of the latest layers in a Spiking Neural Network (SNN), implementing a bridge between robotic actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real time and implementing a massively parallel information processing system through specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform, designed in our labs, that allows direct control of DC motors: the AER-Robot. Experimental results evidence the viability of implementing spike-based controllers, and hardware synthesis shows low hardware requirements that allow replicating this controller in a high number of parallel controllers working together for real-time robot control.

  7. A Neuro-Inspired Spike-Based PID Motor Controller for Multi-Motor Robots with Low Cost FPGAs

    PubMed Central

    Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J.; Paz-Vicente, Rafael; Civit-Balcells, Anton

    2012-01-01

    In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented on FPGAs. The controller focuses on DC motor speed control, using only spikes for information representation, processing and motor driving; it could be applied to other motors with proper driver adaptation. The controller architecture represents one of the last layers of a Spiking Neural Network (SNN), bridging robotic actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real time as a massively parallel information processing system built from specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform, designed in our labs, that allows direct control of DC motors: the AER-Robot. Experimental results demonstrate the viability of spike-based controllers, and hardware synthesis shows low resource requirements, allowing many replicas of this controller to work in parallel for real-time robot control. PMID:22666004

  8. Virtual Passive Controller for Robot Systems Using Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    This paper presents a control method based on virtual passive dynamic control that will stabilize a robot manipulator using joint torque sensors and a simple joint model. The method does not require joint position or velocity feedback for stabilization. The proposed control method is stable in the sense of Lyapunov. The control method was implemented on several joints of a laboratory robot. The controller showed good stability robustness to system parameter error and to the exclusion of nonlinear dynamic effects on the joints. The controller enhanced position tracking performance and, in the absence of position control, dissipated joint energy.

  9. Serendipitous Offline Learning in a Neuromorphic Robot.

    PubMed

    Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg

    2016-01-01

    We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.

  10. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  11. Design, implementation and evaluation of an independent real-time safety layer for medical robotic systems using a force-torque-acceleration (FTA) sensor.

    PubMed

    Richter, Lars; Bruder, Ralf

    2013-05-01

    Most medical robotic systems require direct interaction or contact with the robot. Force-torque (FT) sensors can easily be mounted to the robot to control the contact pressure. However, evaluation is often done in software, which leads to latencies. To overcome this, we developed an independent safety system, named the FTA sensor, which is based on an FT sensor and an accelerometer. An embedded system (ES) runs a real-time monitoring system for continuous checking of the readings. In case of a collision or error, it instantaneously stops the robot via the robot's external emergency stop. We found that the ES implementing the FTA sensor has a maximum latency of [Formula: see text] ms to trigger the robot's emergency stop. For the standard settings in the application of robotized transcranial magnetic stimulation, the robot stops after at most 4 mm. Therefore, it works as an independent safety layer preventing the patient and/or operator from serious harm.
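
The safety-layer logic amounts to threshold monitoring of each new sensor sample. A minimal sketch, with illustrative limits and an illustrative sample stream (not the paper's values):

```python
# Hedged sketch of the monitoring idea: each force/acceleration sample is
# checked against configured limits, and the first violation triggers the
# emergency stop. The thresholds and stream below are made up.

FORCE_LIMIT_N = 25.0
ACCEL_LIMIT_MS2 = 15.0

def check_sample(force_n, accel_ms2):
    """Return True if the sample is safe, False to trigger the e-stop."""
    return abs(force_n) <= FORCE_LIMIT_N and abs(accel_ms2) <= ACCEL_LIMIT_MS2

def monitor(samples):
    for i, (f, a) in enumerate(samples):
        if not check_sample(f, a):
            return i          # index at which the e-stop fires
    return None               # no violation in the stream

stream = [(3.0, 1.0), (8.0, 2.0), (40.0, 3.0), (5.0, 1.0)]
stop_at = monitor(stream)
```

In the real system this loop runs on the embedded hardware so that the latency is independent of the robot application software.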

  12. Complete low-cost implementation of a teleoperated control system for a humanoid robot.

    PubMed

    Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena

    2013-01-24

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling down while executing actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
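
The accelerometer filtering stage can be illustrated with a scalar Kalman filter; this is a hedged sketch with made-up noise variances and a synthetic tilt signal, not the authors' implementation:

```python
# Hedged sketch: a scalar Kalman filter smoothing a noisy, roughly constant
# tilt reading before it feeds a balance controller. The process/measurement
# variances (q, r) and the signal are illustrative.
import random

def kalman_1d(measurements, q=1e-4, r=0.5):
    x, p = 0.0, 1.0          # state estimate and its variance
    out = []
    for z in measurements:
        p += q               # predict (constant-state model)
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # correct with measurement z
        p *= (1 - k)
        out.append(x)
    return out

random.seed(0)
true_tilt = 5.0
zs = [true_tilt + random.gauss(0, 0.7) for _ in range(300)]
est = kalman_1d(zs)
```

The filtered value converges near the underlying tilt while individual samples fluctuate with the sensor noise.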

  13. Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot

    PubMed Central

    Cela, Andrés; Yebes, J. Javier; Arroyo, Roberto; Bergasa, Luis M.; Barea, Rafael; López, Elena

    2013-01-01

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling down while executing actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. PMID:23348029

  14. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    NASA Astrophysics Data System (ADS)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.

  15. An overview on real-time control schemes for wheeled mobile robot

    NASA Astrophysics Data System (ADS)

    Radzak, M. S. A.; Ali, M. A. H.; Sha’amri, S.; Azwan, A. R.

    2018-04-01

    The purpose of this paper is to review real-time motion control algorithms for wheeled mobile robots (WMR) navigating in environments such as roads. A good controller is needed to avoid collisions with any disturbance and to maintain tracking error at zero. The controllers are used with aiding sensors that measure the WMR's velocities, posture and interference to estimate the torque required at the wheels of the mobile robot. Four main categories of wheeled mobile robot control systems are found in the literature: kinematic-based controllers, dynamic-based controllers, artificial-intelligence-based control systems, and active force control. MATLAB/Simulink is the main software used to simulate and implement the control systems; its real-time toolbox is used to receive data from sensors and send data to actuators in the presence of disturbances, while other software such as C, C++ and Visual Basic is rarely used.
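
The first category, kinematic-based control, can be illustrated with a classic go-to-goal law for a unicycle-model WMR; the gains and scenario below are illustrative, not taken from the surveyed papers:

```python
# Hedged sketch of a kinematic go-to-goal controller for a unicycle model:
# forward velocity proportional to the distance to the goal, angular
# velocity proportional to the heading error. Gains k_v, k_w are made up.
import math

def drive_to_goal(x, y, th, gx, gy, k_v=1.0, k_w=4.0, dt=0.02, steps=600):
    for _ in range(steps):
        dx, dy = gx - x, gy - y
        dist = math.hypot(dx, dy)
        if dist < 1e-3:
            break
        bearing = math.atan2(dy, dx)
        # wrap heading error into (-pi, pi]
        err = math.atan2(math.sin(bearing - th), math.cos(bearing - th))
        v = k_v * dist          # forward velocity command
        w = k_w * err           # angular velocity command
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return x, y

xf, yf = drive_to_goal(0.0, 0.0, 0.0, 1.0, 1.0)
```

Dynamic-based controllers would replace the velocity commands with wheel torques computed from the robot's inertia model; the kinematic law above ignores dynamics entirely.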

  16. Study on robot motion control for intelligent welding processes based on the laser tracking sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Wang, Qian; Tang, Chen; Wang, Ju

    2017-06-01

    A robot motion control method is presented for intelligent welding of complex spatial free-form curve seams based on a laser tracking sensor. First, the tip position of the welding torch is calculated from the torch velocity and the seam trajectory detected by the sensor. Then, the optimal pose of the torch is searched under constraints using genetic algorithms, so that the intersection point of the weld seam and the laser plane of the sensor remains within the detectable range of the sensor while the angle between the axis of the welding torch and the tangent of the weld seam meets the requirements. The feasibility of the control method is proved by simulation.

  17. Control of a Quadcopter Aerial Robot Using Optic Flow Sensing

    NASA Astrophysics Data System (ADS)

    Hurd, Michael Brandon

    This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can give a robot the ability to fly in global positioning system (GPS) denied environments, such as indoor environments. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot: an optic flow algorithm provides odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic flow sensor and algorithm are capable of gathering and processing images at 250 frames/sec, and the sensor package weighs 2.5 g with a footprint of 6 cm². The odometry value from the optic flow sensor is then used as feedback information in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to other types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).

  18. Hand Gesture Based Wireless Robotic Arm Control for Agricultural Applications

    NASA Astrophysics Data System (ADS)

    Kannan Megalingam, Rajesh; Bandhyopadhyay, Shiva; Vamsy Vivek, Gedela; Juned Rahi, Muhammad

    2017-08-01

    One of the major challenges in agriculture is harvesting. It is very hard, and sometimes even unsafe, for workers to go to each plant and pluck fruits. Robotic systems are increasingly combined with new technologies to automate or semi-automate labour-intensive work such as grape harvesting. In this work we propose a semi-automatic method to aid fruit harvesting and hence increase productivity per man-hour. A robotic arm fixed to a rover roams the orchard, and the user controls it remotely using a hand glove fitted with various sensors. These sensors position the robotic arm remotely to harvest the fruits. In this paper we discuss the design of the sensor-fitted hand glove, the design of the 4-DoF robotic arm, and the wireless control interface. In addition, the setup of the system and its testing and evaluation under lab conditions are presented.

  19. Teleautonomous guidance for mobile robots

    NASA Technical Reports Server (NTRS)

    Borenstein, J.; Koren, Y.

    1990-01-01

    Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
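
The VFF idea can be sketched as a vector sum of an attractive force along the operator's prescribed direction and inverse-square repulsive forces from detected obstacles; the constants below are illustrative, not the paper's:

```python
# Hedged sketch of the virtual force field: obstacle points (e.g. sonar
# hits) push the robot away with 1/d^2 repulsive forces while the
# operator's prescribed direction pulls it forward; the resultant vector
# sets the steering direction. f_att and k_rep are made-up constants.
import math

def vff_direction(robot, obstacles, target_dir, f_att=1.0, k_rep=0.5):
    fx = f_att * math.cos(target_dir)      # attraction along operator heading
    fy = f_att * math.sin(target_dir)
    rx, ry = robot
    for ox, oy in obstacles:
        dx, dy = rx - ox, ry - oy
        d = math.hypot(dx, dy)
        rep = k_rep / (d * d)              # repulsion falls off with distance
        fx += rep * dx / d
        fy += rep * dy / d
    return math.atan2(fy, fx)              # resultant steering direction

# an obstacle slightly above the commanded heading (0 rad) deflects it down
heading = vff_direction((0.0, 0.0), [(1.0, 0.1)], 0.0)
```

Because the resultant is a smooth function of the sensor readings, noisy ultrasonic data only perturbs the steering direction rather than toggling discrete avoidance decisions.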

  20. Autonomous Mobile Platform for Research in Cooperative Robotics

    NASA Technical Reports Server (NTRS)

    Daemi, Ali; Pena, Edward; Ferguson, Paul

    1998-01-01

    This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.

  1. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

    Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application is the pushing operation, in which the robot moves an object by pushing it under multi-sensor control. Generally, a pushing operation consists of "approaching, touching, and pushing" (1). However, most research in this field deals with how the pushed object follows a predefined trajectory, and the implications of the robot body or tool-tip hitting the object are neglected. On collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance controller is a control system with trajectory compensation so that the external force may be followed. In this paper, the first compliance controller is driven by the external force estimated by a reaction torque observer (2), which compensates contact sensation; the other compensates non-contact sensation. A contact sensation, acquired from a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before touching the object, realized here with a visual sensor. Instead of using visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the object's side to show the effectiveness of the proposed method. Both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Finally, the experimental results show the validity of the proposed method.
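
The compliance law described above can be sketched as a virtual mass-spring-damper whose offset compensates the command trajectory in response to a measured (or visual, virtual) force; the M, D, K values below are illustrative, not the paper's:

```python
# Hedged sketch of one compliance (admittance) controller: a virtual system
# M*a + D*v + K*x = f_ext generates a trajectory offset x that is added to
# the command reference. The virtual parameters m, d, k are made up.

def admittance_step(x_off, v_off, f_ext, m=1.0, d=8.0, k=20.0, dt=0.005):
    """One Euler step of the virtual dynamics; returns the new offset state."""
    a = (f_ext - d * v_off - k * x_off) / m
    v_off += a * dt
    x_off += v_off * dt
    return x_off, v_off

# a constant 10 N contact force settles the offset near f/K = 0.5 m
x, v = 0.0, 0.0
for _ in range(3000):
    x, v = admittance_step(x, v, 10.0)
```

In the paper's scheme, one such controller is fed by the estimated contact force and a second by the vision-derived virtual force, and both offsets modify the mobile manipulator's command reference.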

  2. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially since mobility (i.e., locomotion control), routing (i.e., communications) and sensing are characteristics of mobile robots that can be exploited for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method.
Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy on mobility. For the second problem, the formulation is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled with the existing sequential convex approximation method, based on which we propose a modified sequential convex approximation procedure with fast convergence. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal moving trajectories of the robotic nodes and the optimal network links, questions not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications. We investigate the joint design of mobility, data routing, and encoding power to help improve the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
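
A toy instance of the first problem (relay positioning for lifetime maximization) illustrates the idea: with transmission energy growing with squared hop distance, the relay position that minimizes the worst node's energy drain is found here by grid search, standing in for the thesis's distributed sub-gradient method:

```python
# Hedged toy instance: one mobile relay forwards data from a source to a
# sink on a line; per-bit transmission energy is proportional to squared
# hop distance, and lifetime is limited by the worst-off hop. The
# positions and the energy model are illustrative.

def worst_hop_energy(relay_x, src=0.0, sink=10.0):
    return max((relay_x - src) ** 2, (sink - relay_x) ** 2)

# brute-force search over candidate relay positions (0.00 .. 10.00)
best_x = min((x * 0.01 for x in range(0, 1001)), key=worst_hop_energy)
```

The minimax optimum places the relay at the midpoint, balancing the two hops; the thesis obtains the analogous balance in general networks through convex reformulation and distributed sub-gradient updates.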

  3. A Low Cost Mobile Robot Based on Proportional Integral Derivative (PID) Control System and Odometer for Education

    NASA Astrophysics Data System (ADS)

    Haq, R.; Prayitno, H.; Dzulkiflih; Sucahyo, I.; Rahmawati, E.

    2018-03-01

    In this article, the development of a low-cost mobile robot based on a PID controller and odometer for education is presented. The PID controller and odometer are applied to control the mobile robot's position. A two-dimensional position vector in a Cartesian coordinate system is given to the robot controller as the initial and final positions. The mobile robot is based on a differential drive and a magnetic rotary encoder sensor, which measures the robot's position from the number of wheel rotations. The odometry method uses data from actuator movements to predict the change of position over time. The mobile robot is tested on reaching the final position with three different heading angles, 30°, 45° and 60°, by applying various values of the KP, KI and KD constants.
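
The odometry step described above can be sketched with the standard differential-drive dead-reckoning update; the wheel radius, track width, and encoder resolution below are illustrative, not the robot's actual parameters:

```python
# Hedged sketch of differential-drive odometry: encoder ticks are converted
# to per-wheel travel, then the pose is dead-reckoned with the standard
# midpoint update. All physical constants here are made up.
import math

TICKS_PER_REV = 360
WHEEL_RADIUS = 0.03   # m
TRACK_WIDTH = 0.15    # m (distance between the two wheels)

def odom_update(x, y, th, left_ticks, right_ticks):
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    dc = (dl + dr) / 2                 # travel of the robot center
    dth = (dr - dl) / TRACK_WIDTH      # heading change
    x += dc * math.cos(th + dth / 2)   # midpoint integration
    y += dc * math.sin(th + dth / 2)
    return x, y, th + dth

# equal tick counts: straight-line motion along the current heading
x, y, th = odom_update(0.0, 0.0, 0.0, 360, 360)
```

A PID position loop then compares this dead-reckoned pose with the commanded final position to generate the wheel commands.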

  4. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    PubMed Central

    Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan

    2013-01-01

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo-head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer, which supplies the spikes to the robot (using PFM). All the layers perform their tasks in spike-processing mode and communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Its two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also evidence the suitability of AER as a communication protocol between processing and actuation. PMID:24264330

  5. Neuro-inspired spike-based motion: from dynamic vision sensor to robot motor open-loop control through spike-VITE.

    PubMed

    Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan

    2013-11-20

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo-head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer, which supplies the spikes to the robot (using PFM). All the layers perform their tasks in spike-processing mode and communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Its two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also evidence the suitability of AER as a communication protocol between processing and actuation.

  6. A Fabry-Perot Interferometry Based MRI-Compatible Miniature Uniaxial Force Sensor for Percutaneous Needle Placement

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.

    2014-01-01

    Robot-assisted surgical procedures, taking advantage of the high soft-tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in a 3T MRI scanner. The opto-electronic sensing system was miniaturized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism integrating the FPI sensor fiber was designed to measure needle insertion force, and finite element analysis was performed to optimize the force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and calibrated. The root mean square (RMS) error of the calibration over the range of 0–10 N was 0.318 N relative to the theoretical model, which has been proven sufficient for robot control and teleoperation. PMID:25126153
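
The calibration step can be illustrated with a least-squares linear fit from sensor reading to force followed by the RMS residual over the working range; the synthetic readings below are illustrative, not the paper's data:

```python
# Hedged sketch of sensor calibration: fit force = slope*reading + intercept
# by least squares over the 0-10 N range, then report the RMS residual.
# The synthetic sensor model (offset 0.2, gain 3.1, noise sd 0.05) is made up.
import random

random.seed(1)
forces = [i * 0.5 for i in range(21)]                        # 0..10 N
readings = [0.2 + 3.1 * f + random.gauss(0, 0.05) for f in forces]

n = len(forces)
mr = sum(readings) / n
mf = sum(forces) / n
sxy = sum((r - mr) * (f - mf) for r, f in zip(readings, forces))
sxx = sum((r - mr) ** 2 for r in readings)
slope = sxy / sxx                                            # N per sensor unit
intercept = mf - slope * mr

residuals = [f - (slope * r + intercept) for r, f in zip(readings, forces)]
rms = (sum(e * e for e in residuals) / n) ** 0.5             # RMS error in N
```

The same procedure, applied to the real opto-electronic readings against a reference load cell, yields the reported 0.318 N RMS figure.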

  7. A plant-inspired robot with soft differential bending capabilities.

    PubMed

    Sadeghi, A; Mondini, A; Del Dottore, E; Mattoli, V; Beccai, L; Taccola, S; Lucarotti, C; Totaro, M; Mazzolai, B

    2016-12-20

    We present the design and development of a plant-inspired robot, named Plantoid, with sensorized robotic roots. Natural roots have a multi-sensing capability and show a soft bending behaviour to follow or escape from various environmental parameters (i.e., tropisms). Analogously, we implement soft bending capabilities in our robotic roots by designing and integrating soft spring-based actuation (SSBA) systems using helical springs to transmit the motor power in a compliant manner. Each robotic tip integrates four different sensors, including customised flexible touch and innovative humidity sensors together with commercial gravity and temperature sensors. We show how the embedded sensing capabilities together with a root-inspired control algorithm lead to the implementation of tropic behaviours. Future applications for such plant-inspired technologies include soil monitoring and exploration, useful for agriculture and environmental fields.

  8. A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring

    PubMed Central

    Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio

    2016-01-01

    This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State-of-the-art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regard to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505

  9. Molecular robots with sensors and intelligence.

    PubMed

    Hagiya, Masami; Konagaya, Akihiko; Kobayashi, Satoshi; Saito, Hirohide; Murata, Satoshi

    2014-06-17

    CONSPECTUS: What we can call a molecular robot is a set of molecular devices such as sensors, logic gates, and actuators integrated into a consistent system. The molecular robot is supposed to react autonomously to its environment by receiving molecular signals and making decisions by molecular computation. Building such a system has long been a dream of scientists; however, despite extensive efforts, systems having all three functions (sensing, computation, and actuation) have not been realized yet. This Account introduces an ongoing research project that focuses on the development of molecular robotics funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan). This 5-year project started in July 2012 and is titled "Development of Molecular Robots Equipped with Sensors and Intelligence". The major issues in the field of molecular robotics all correspond to a feedback (i.e., plan-do-see) cycle of a robotic system. More specifically, these issues are (1) developing molecular sensors capable of handling a wide array of signals, (2) developing amplification methods of signals to drive molecular computing devices, (3) accelerating molecular computing, (4) developing actuators that are controllable by molecular computers, and (5) providing bodies of molecular robots encapsulating the above molecular devices, which implement the conformational changes and locomotion of the robots. In this Account, the latest contributions to the project are reported. There are four research teams in the project that specialize in sensing, intelligence, amoeba-like actuation, and slime-like actuation, respectively. The molecular sensor team is focusing on the development of molecular sensors that can handle a variety of signals. This team is also investigating methods to amplify signals from the molecular sensors. 
The molecular intelligence team is developing molecular computers and is currently focusing on a new photochemical technology for accelerating DNA-based computations. They also introduce novel computational models behind various kinds of molecular computers necessary for designing such computers. The amoeba robot team aims at constructing amoeba-like robots. The team is trying to incorporate motor proteins, including kinesin and microtubules (MTs), for use as actuators implemented in a liposomal compartment as a robot body. They are also developing a methodology to link DNA-based computation and molecular motor control. The slime robot team focuses on the development of slime-like robots. The team is evaluating various gels, including DNA gel and BZ gel, for use as actuators, as well as the body material to disperse various molecular devices in it. They also try to control the gel actuators by DNA signals coming from molecular computers.

  10. A Tree Based Self-routing Scheme for Mobility Support in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Young-Duk; Yang, Yeon-Mo; Kang, Won-Seok; Kim, Jin-Wook; An, Jinung

    Recently, WSNs (Wireless Sensor Networks) with mobile robots have become a growing technology that offers efficient communication services for anytime, anywhere applications. However, the tiny sensor node has very limited network resources due to its low battery power, low data rate, node mobility, and channel interference constraints between neighbors. Thus, in this paper, we propose a tree-based self-routing protocol for autonomous mobile robots based on beacon mode and implement it in a real test-bed environment. The proposed scheme offers beacon-based real-time scheduling for a reliable association process between parent and child nodes. In addition, it supports a smooth handover procedure by reducing the flooding overhead of control packets. Through performance evaluation using a real test-bed system and simulation, we illustrate that our proposed scheme demonstrates promising performance for wireless sensor networks with mobile robots.

  11. Interaction force and motion estimators facilitating impedance control of the upper limb rehabilitation robot.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung

    2017-07-01

    In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and motion caused by the interaction between user and robot. However, the common direct measurement of both signals through force and motion sensors not only increases the complexity of the system but also impedes its affordability. As an alternative to direct measurement, in this work, we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic model of the UHP and the use of signals measured by means of common low-cost sensors. In order to demonstrate the effectiveness of the estimators, several experimental tests were carried out. The force and impedance control of the UHP was implemented first by directly measuring the interaction force using accurate extra sensors, and the robot performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the controller based on the estimators has similar performance to that using direct measurement (less than 1 N difference in root mean square error between the two cases), indicating that the proposed force and motion estimators can facilitate the implementation of interactive controllers for the UHP in robot-mediated rehabilitation training.
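
    As a toy illustration of the model-based estimation idea (a single rigid 1-DOF joint, not the UHP's actual multi-DOF model), the interaction force can be recovered from actuator torque and encoder-derived motion without any force sensor; all parameter values below are assumed placeholders:

```python
def estimate_interaction_force(tau, q_dot, q_ddot,
                               inertia=0.05, damping=0.1, lever=0.3):
    """Estimate the user-robot interaction force for a single joint.

    Rearranges the 1-DOF rigid-body model
        tau = inertia * q_ddot + damping * q_dot + F_ext * lever
    so the external force F_ext is recovered from the commanded torque
    and the joint motion (obtained from a low-cost encoder). The
    inertia, damping, and lever-arm values are illustrative only.
    """
    return (tau - inertia * q_ddot - damping * q_dot) / lever
```

    In practice the velocity and acceleration would be filtered numerically from encoder positions, and the full robot model would replace the scalar dynamics.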

  12. Positive position control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Baz, A.; Gumusel, L.

    1989-01-01

    The present, simple and accurate position-control algorithm, which is applicable to fast-moving and lightly damped robot arms, is based on the positive position feedback (PPF) strategy and relies solely on position sensors to monitor joint angles of robotic arms to furnish stable position control. The optimized tuned filters, in the form of a set of difference equations, manipulate position signals for robotic system performance. Attention is given to comparisons between this PPF-algorithm controller's experimentally ascertained performance characteristics and those of a conventional proportional controller.
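
    The PPF strategy drives a tuned second-order filter with the measured joint position and feeds the filter output back through a positive gain. A minimal discrete-time sketch (Euler integration; the filter frequency, damping ratio, and gain are illustrative, not the paper's optimized values):

```python
def ppf_filter_step(eta, eta_dot, y, omega=20.0, zeta=0.5, dt=0.001):
    """One Euler step of a second-order positive position feedback filter.

    eta, eta_dot : filter state (position-like variable and its rate)
    y            : measured joint position fed to the filter
    omega, zeta  : filter natural frequency [rad/s] and damping ratio
    """
    eta_ddot = omega**2 * (y - eta) - 2.0 * zeta * omega * eta_dot
    return eta + dt * eta_dot, eta_dot + dt * eta_ddot

def ppf_control(eta, gain=0.8):
    """PPF control signal: the filter output scaled by a positive gain."""
    return gain * eta
```

    At steady state the filter output tracks the measured position, so the positive feedback adds damping near the filter's tuned frequency without destabilizing the rigid-body modes.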

  13. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.

    PubMed

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2011-06-01

    This paper presents the design of a tele-robotic microsurgical platform designed for development of cooperative and tele-operative control schemes, sensor based smart instruments, user interfaces and new surgical techniques with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained optimization based virtual fixture control to provide Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.

  14. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery

    PubMed Central

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2013-01-01

    This paper presents the design of a tele-robotic microsurgical platform designed for development of cooperative and tele-operative control schemes, sensor based smart instruments, user interfaces and new surgical techniques with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained optimization based virtual fixture control to provide Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information. PMID:24398557

  15. A small, cheap, and portable reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey

    2005-05-01

    While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.

  16. Computing Optic Flow with ArduEye Vision Sensor

    DTIC Science & Technology

    2013-01-01

    This report addresses the computation of optic flow with the ArduEye vision sensor (a Stonyman vision chip on a breakout board connected to an Arduino Mega), yielding an optical flow processing algorithm that can be applied to the flight control of other robotic platforms. There is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms in order to control such platforms. Subject terms: optical flow, ArduEye, vision based ...

  17. A focused bibliography on robotics

    NASA Astrophysics Data System (ADS)

    Mergler, H. W.

    1983-08-01

    The present bibliography focuses on nine robotics-related topics believed by the author to be of special interest to researchers in the field of industrial electronics: robots, sensors, kinematics, dynamics, control systems, actuators, vision, economics, and robot applications. This literature search was conducted through the COMPENDEX database (1970-present), which provides worldwide coverage of nearly 3500 journals, conference proceedings, and reports, and the INSPEC database (1969-1981), which is the largest English-language database in the fields of physics, electrotechnology, computers, and control.

  18. The mechanical design of a humanoid robot with flexible skin sensor for use in psychiatric therapy

    NASA Astrophysics Data System (ADS)

    Burns, Alec; Tadesse, Yonas

    2014-03-01

    In this paper, a humanoid robot is presented for ultimate use in the rehabilitation of children with mental disorders, such as autism. Creating affordable and efficient humanoids could assist therapy for psychiatric disabilities by offering multimodal communication between the humanoid and humans. Yet humanoid development needs a seamless integration of artificial muscles, sensors, controllers and structures. We have designed a human-like robot that has 15 DOF, is 580 mm tall, and has a 925 mm arm span, built using a rapid prototyping system. The robot has a human-like appearance and movement. Flexible sensors around the arm and hands for safe human-robot interaction, and a two-wheel mobile platform for maneuverability, are incorporated in the design. The robot has facial features for illustrating human-friendly behavior. The mechanical design of the robot and the characterization of the flexible sensors are presented. A comprehensive study of the upper body design, mobile base, actuator selection, electronics, and performance evaluation is included in this paper.

  19. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    PubMed Central

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-01-01

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing registers are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986

  20. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots.

    PubMed

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-10-30

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing registers are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.
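
    As a rough sketch of the classification idea in these two records (a plain multilayer perceptron on synthetic two-phase data, not the paper's NARX network or real exoskeleton signals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for exoskeleton signals: each sample holds
# [segment orientation, joint angular velocity]; phase 0 = stance
# (low velocity), phase 1 = swing (high velocity). Real inputs would
# come from the robot's IMUs and joint encoders.
X = np.vstack([rng.normal([0.1, 0.2], 0.1, (200, 2)),
               rng.normal([0.4, 1.5], 0.1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
Y = np.eye(2)[y]                       # one-hot targets

# One-hidden-layer perceptron trained by plain gradient descent
# on softmax cross-entropy.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(1000):
    h, p = forward(X)
    g = (p - Y) / len(X)               # output-layer error
    gh = (g @ W2.T) * (1.0 - h**2)     # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

accuracy = (forward(X)[1].argmax(1) == y).mean()
```

    A NARX variant would additionally feed delayed inputs and past phase outputs into the network, which is what gives the paper's classifier its temporal consistency.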

  1. Optimal Control Method of Robot End Position and Orientation Based on Dynamic Tracking Measurement

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Xu, Lijuan

    2018-01-01

    In order to improve the accuracy of robot pose positioning and control, this paper proposed a dynamic tracking measurement robot pose optimization control method based on the actual measurement of D-H parameters of the robot, the parameters is taken with feedback compensation of the robot, according to the geometrical parameters obtained by robot pose tracking measurement, improved multi sensor information fusion the extended Kalan filter method, with continuous self-optimal regression, using the geometric relationship between joint axes for kinematic parameters in the model, link model parameters obtained can timely feedback to the robot, the implementation of parameter correction and compensation, finally we can get the optimal attitude angle, realize the robot pose optimization control experiments were performed. 6R dynamic tracking control of robot joint robot with independent research and development is taken as experimental subject, the simulation results show that the control method improves robot positioning accuracy, and it has the advantages of versatility, simplicity, ease of operation and so on.

  2. Sensor Data Fusion for Body State Estimation in a Bipedal Robot and Its Feedback Control Application for Stable Walking

    PubMed Central

    Chen, Ching-Pei; Chen, Jing-Yi; Huang, Chun-Kai; Lu, Jau-Ching; Lin, Pei-Chun

    2015-01-01

    We report on a sensor data fusion algorithm via an extended Kalman filter for estimating the spatial motion of a bipedal robot. Through fusing the sensory information from joint encoders, a 6-axis inertial measurement unit and a 2-axis inclinometer, the robot’s body state at a specific fixed position can be obtained. This position is also equal to the CoM when the robot is in the standing posture, as suggested by the detailed CAD model of the robot. In addition, this body state is further utilized to provide sensory information for feedback control of a bipedal robot with a walking gait. The overall control strategy includes the proposed body state estimator as well as the damping controller, which regulates the body position state of the robot in real time based on instant and historical position tracking errors. Moreover, a posture corrector for reducing unwanted torque during motion is addressed. The body state estimator and the feedback control structure are implemented in a child-size bipedal robot and the performance is experimentally evaluated. PMID:25734644
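
    The fusion idea can be reduced to a scalar sketch: predict with an inertial rate measurement, correct with an absolute inclinometer angle. This is a 1-D stand-in for the paper's full extended Kalman filter, with assumed noise values:

```python
def kf_step(x, P, gyro_rate, incl_angle, dt=0.01, q=1e-4, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter.

    x, P       : state estimate (body angle) and its variance
    gyro_rate  : IMU angular-rate measurement (drives the prediction)
    incl_angle : inclinometer angle measurement (drives the correction)
    q, r       : assumed process and measurement noise variances
    """
    # Predict: integrate the rate measurement.
    x_pred = x + dt * gyro_rate
    P_pred = P + q
    # Update: correct with the absolute angle measurement.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (incl_angle - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

    The full 6-DOF estimator follows the same predict/correct pattern, with the joint encoders and CAD kinematics supplying the measurement model.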

  3. Characterization of large-area pressure sensitive robot skin

    NASA Astrophysics Data System (ADS)

    Saadatzi, Mohammad Nasser; Baptist, Joshua R.; Wijayasinghe, Indika B.; Popa, Dan O.

    2017-05-01

    Sensorized robot skin has considerable promise to enhance robots' tactile perception of surrounding environments. For physical human-robot interaction (pHRI) or autonomous manipulation, a high spatial sensor density is required, typically driven by the skin location on the robot. In our previous study, a 4x4 flexible array of strain sensors was printed and packaged onto Kapton sheets and silicone encapsulants. In this paper, we extend the surface area of the patch to larger arrays with up to 128 tactel elements. To address scalability, sensitivity, and calibration challenges, a novel electronic module, free of the traditional signal conditioning circuitry, was created. The electronic design relies on a software-based calibration scheme using high-resolution analog-to-digital converters with internal programmable gain amplifiers. In this paper, we first show the efficacy of the proposed method with a 4x4 skin array using controlled pressure tests, and then perform procedures to evaluate each sensor's characteristics, such as dynamic force-to-strain behavior, repeatability, and signal-to-noise ratio. In order to handle larger sensor surfaces, an automated force-controlled test cycle was carried out. Results demonstrate that our approach leads to reliable and efficient methods for extracting tactile models for use in future interaction with collaborative robots.
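
    The software-based calibration scheme can be illustrated by fitting a per-tactel linear counts-to-force map from a controlled pressure test; the gain and offset below are synthetic stand-ins, not measured skin characteristics:

```python
import numpy as np

# Controlled-pressure test data for one tactel: known applied forces [N]
# and the resulting raw ADC readings (simulated here with an assumed
# linear sensor response of 37.5 counts/N on a 512-count offset).
applied_force = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
adc_counts = 512.0 + 37.5 * applied_force

# Least-squares fit of force = a * counts + b; with internal programmable
# gain amplifiers, this software fit replaces per-channel analog trimming.
a, b = np.polyfit(adc_counts, applied_force, 1)

def counts_to_force(counts):
    """Convert a raw ADC reading for this tactel to an estimated force [N]."""
    return a * counts + b
```

    A full skin patch would hold one (a, b) pair per tactel, refit whenever the automated force-controlled test cycle is rerun.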

  4. Semi autonomous mine detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, a countermine sensor, and a number of integrated software packages which provide for real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator, and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking, and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing, and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude, from an autonomous robotic perspective, the rapid development and deployment of fieldable systems.

  5. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System

    PubMed Central

    Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan

    2017-01-01

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964

  6. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    PubMed

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
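
    The omni-directional capability of a four-Mecanum-wheel base follows from its inverse kinematics, which map a chassis twist to individual wheel speeds. A standard sketch (the wheel radius and chassis geometry are placeholder values, not this robot's dimensions):

```python
def mecanum_wheel_speeds(vx, vy, wz,
                         radius=0.05, half_length=0.2, half_width=0.15):
    """Inverse kinematics of a four-Mecanum-wheel base.

    Maps the desired chassis twist (vx forward [m/s], vy lateral [m/s],
    wz yaw [rad/s]) to wheel angular velocities [rad/s], ordered
    [front-left, front-right, rear-left, rear-right], for rollers
    mounted at 45 degrees.
    """
    k = half_length + half_width
    return [
        (vx - vy - k * wz) / radius,  # front-left
        (vx + vy + k * wz) / radius,  # front-right
        (vx + vy - k * wz) / radius,  # rear-left
        (vx - vy + k * wz) / radius,  # rear-right
    ]
```

    Because the map is linear and invertible for this geometry, any planar twist, including pure sideways translation, has a unique wheel-speed solution, which is what makes the base omni-directional.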

  7. Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.

    PubMed

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-02-21

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise on the real world experiments. This laboratory allows the user to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented.

  8. Virtual and Remote Robotic Laboratory Using EJS, MATLAB and LabVIEW

    PubMed Central

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-01-01

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise on the real world experiments. This laboratory allows the user to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578

  9. Intelligent lead: a novel HRI sensor for guide robots.

    PubMed

    Cho, Keum-Bae; Lee, Beom-Hee

    2012-01-01

    This paper addresses the introduction of a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a certain desired distance that allows the user to move freely. Therefore, it is necessary to acquire control commands and the user's position on a real-time basis. We suggest a new sensor fusion system to achieve this objective, which we call the "intelligent lead". The objective of the intelligent lead is to acquire a stable distance from the user to the robot, speed-control volume, and turn-control volume, even when the robot platform with the intelligent lead is shaken on uneven ground. In this paper we explain a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit), whose measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the mobile robot with the intelligent lead is capable of reaching its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can be used as a new HRI sensor combining a joystick and a distance measure in mobile environments where the robot and the user are moving at the same time.

  10. TRICCS: A proposed teleoperator/robot integrated command and control system for space applications

    NASA Technical Reports Server (NTRS)

    Will, R. W.

    1985-01-01

    Robotic systems will play an increasingly important role in space operations. An integrated command and control system based on the requirements of space-related applications and incorporating features necessary for the evolution of advanced goal-directed robotic systems is described. These features include: interaction with a world model or domain knowledge base, sensor feedback, multiple-arm capability and concurrent operations. The system makes maximum use of manual interaction at all levels for debug, monitoring, and operational reliability. It is shown that the robotic command and control system may most advantageously be implemented as packages and tasks in Ada.

  11. A review of physical security robotics at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roerig, S.C.

    1990-01-01

    As an outgrowth of research into physical security technologies, Sandia is investigating the role of robotics in security systems. Robotics may allow more effective utilization of guard forces, especially in scenarios where personnel would be exposed to harmful environments. Robots can provide intrusion detection and assessment functions for failed sensors or transient assets, can test existing fixed site sensors, and can gather additional intelligence and dispense delaying elements. The Robotic Security Vehicle (RSV) program for DOE/OSS is developing a fieldable prototype for an exterior physical security robot based upon a commercial four wheel drive vehicle. The RSV will be capable of driving itself, being driven remotely, or being driven by an onboard operator around a site and will utilize its sensors to alert an operator to unusual conditions. The Remote Security Station (RSS) program for the Defense Nuclear Agency is developing a proof-of-principle robotic system which will be used to evaluate the role, and associated cost, of robotic technologies in exterior security systems. The RSS consists of an independent sensor pod, a mobile sensor platform and a control and display console. Sensor data fusion is used to optimize the system's intrusion detection performance. These programs are complementary: the RSV concentrates on developing autonomous mobility, while the RSS thrust is on mobile sensor employment. 3 figs.

  12. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    PubMed Central

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636

  13. Progress in Insect-Inspired Optical Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Chahl, Javaan; Zometzer, Steve

    2005-01-01

    Progress has been made in continuing efforts to develop optical flight-control and navigation sensors for miniature robotic aircraft. The designs of these sensors are inspired by the designs and functions of the vision systems and brains of insects. Two types of sensors of particular interest are polarization compasses and ocellar horizon sensors. The basic principle of polarization compasses was described (but without using the term "polarization compass") in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate: Bees use sky polarization patterns in ultraviolet (UV) light, caused by Rayleigh scattering of sunlight by atmospheric gas molecules, as direction references relative to the apparent position of the Sun. A robotic direction-finding technique based on this concept would be more robust in comparison with a technique based on the direction to the visible Sun because the UV polarization pattern is distributed across the entire sky and, hence, is redundant and can be extrapolated from a small region of clear sky in an elsewhere cloudy sky that hides the Sun.

  13. Mobile robots exploration through CNN-based reinforcement learning.

    PubMed

    Tai, Lei; Liu, Ming

    2016-01-01

    Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional-neural-network model. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieves exploration and obstacle avoidance in several different simulated environments. This is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
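
The control scheme described (pretrained CNN features feeding a Q-learning controller) can be sketched with a linear Q-function standing in for the paper's deep Q-network; the feature dimension, action set, and gains below are illustrative:

```python
import numpy as np

N_FEATURES = 8   # stand-in for the CNN feature dimension
N_ACTIONS = 3    # e.g. turn left, go straight, turn right
GAMMA, ALPHA = 0.9, 0.1

# linear Q-function over the extracted features, one weight row per action
W = np.zeros((N_ACTIONS, N_FEATURES))

def q_values(features):
    """Q(s, a) for all actions, given the feature vector of state s."""
    return W @ features

def q_update(features, action, reward, next_features):
    """One Q-learning (temporal-difference) step on extracted features."""
    target = reward + GAMMA * np.max(q_values(next_features))
    td_error = target - q_values(features)[action]
    W[action] += ALPHA * td_error * features
    return td_error
```

The point of the architecture is that the frozen CNN does perception, so the reinforcement learner only has to fit a small function on top of it.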

  15. Determining robot actions for tasks requiring sensor interaction

    NASA Technical Reports Server (NTRS)

    Budenske, John; Gini, Maria

    1989-01-01

    The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotics research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems for planning the coordination of activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.

  16. Comparison of Piezoresistive Monofilament Polymer Sensors

    PubMed Central

    Melnykowycz, Mark; Koll, Birgit; Scharf, Dagobert; Clemens, Frank

    2014-01-01

    Flexible polymer monofilament fiber strain sensors have many applications in both wearable computing (clothing, gloves, etc.) and robotics design (large-deformation control). For example, a high-stretch monofilament sensor could be integrated into a robotic arm design, easily stretching over joints or along curved surfaces. As a monofilament, the sensor can be woven into or integrated with textiles for position or physiological monitoring, computer interface control, etc. Commercially available conductive polymer monofilament sensors were tested alongside monofilaments produced from carbon black (CB) mixed with a thermoplastic elastomer (TPE) and extruded in different diameters. It was found that the signal strength, drift, and precision characteristics of a 0.3 mm diameter CB/TPE monofilament were better than those of thicker (∼2 mm diameter) monofilaments based on the same material or of commercial monofilaments based on natural rubber or silicone elastomer (SE) matrices. PMID:24419161
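
Reading any of these piezoresistive filaments reduces to the gauge-factor relation ΔR/R0 = GF·ε. A one-line helper (the gauge factor in the example is illustrative; CB/TPE composites are characterized per batch by calibration):

```python
def strain_from_resistance(r, r0, gauge_factor):
    """Strain from a piezoresistive reading via delta_R / R0 = GF * strain.
    r is the measured resistance, r0 the unstrained resistance."""
    return (r - r0) / (r0 * gauge_factor)
```

Drift shows up in practice as a slowly moving r0, which is why the paper's drift comparison matters as much as raw sensitivity.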

  17. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    PubMed Central

    2018-01-01

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics. PMID:29534477
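
The LIPM that DLIPM extends reduces, per horizontal axis, to one linear ODE: the centre of mass accelerates away from the ZMP, x'' = (g/z_c)(x − p_zmp). A minimal Euler simulation of a step-shaped ZMP change like those in the push-recovery trials (CoM height and step size are assumptions, not the paper's values):

```python
G, Z_C = 9.81, 0.8   # gravity; assumed centre-of-mass height in metres

def lipm_step(x, v, p_zmp, dt):
    """One Euler step of the LIPM: x'' = (g / z_c) * (x - p_zmp)."""
    a = (G / Z_C) * (x - p_zmp)
    return x + v * dt, v + a * dt

# step-shaped ZMP perturbation, as in the push-recovery trials
x, v = 0.0, 0.0
for i in range(100):
    p_zmp = 0.05 if i >= 50 else 0.0   # ZMP jumps 5 cm mid-trial
    x, v = lipm_step(x, v, p_zmp, 0.01)
```

The open-loop divergence (eigenvalues ±√(g/z_c)) is exactly what the balance controller must stabilize, and why unmodelled inaccuracies translate directly into oscillation.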

  18. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.

  19. Problems and research issues associated with the hybrid control of force and displacement

    NASA Technical Reports Server (NTRS)

    Paul, R. P.

    1987-01-01

    The hybrid control of force and position is basic to the science of robotics but is only poorly understood. Before much progress can be made in robotics, this problem needs to be solved in a robust manner. However, the use of hybrid control implies the existence of a model of the environment, not an exact model (as the function of hybrid control is to accommodate model errors), but a model appropriate for planning and reasoning. The monitored forces in position control are interpreted in terms of a model of the task, as are the monitored displacements in force control. The reaction forces of the task of writing are far different from those of hammering. The programming of actions in such a modeled world becomes more complicated, and systems of task-level programming need to be developed. Sensor-based robotics, of which force sensing is the most basic, implies an entirely new level of technology. Indeed, robot force sensors, no matter how compliant they may be, must be protected from accidental collisions. This implies other sensors to monitor task execution and again the use of a world model. This new level of technology is the task level, in which task actions are specified, not the actions of individual sensors and manipulators.
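
The force/position split is conventionally written with a diagonal selection matrix that assigns each task axis to either position or force control (the classic Raibert-Craig formulation). A sketch with placeholder proportional gains:

```python
import numpy as np

def hybrid_command(s_diag, x_err, f_err, kp_x=1.0, kp_f=0.1):
    """Hybrid position/force law: a diagonal selection matrix routes
    each task axis to position control (s_ii = 1) or force control
    (s_ii = 0). Gains are placeholders, not from the paper."""
    S = np.diag(s_diag)
    I = np.eye(len(s_diag))
    return S @ (kp_x * np.asarray(x_err)) + (I - S) @ (kp_f * np.asarray(f_err))
```

The paper's point is that choosing S is itself a modelling act: writing and hammering demand different axis assignments, which is why hybrid control presupposes a task model.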

  20. Integrating Software Modules For Robot Control

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.; Khosla, Pradeep; Stewart, David B.

    1993-01-01

    Reconfigurable, sensor-based control system uses state variables in systematic integration of reusable control modules. Designed for open-architecture hardware including many general-purpose microprocessors, each having own local memory plus access to global shared memory. Implemented in software as extension of Chimera II real-time operating system. Provides transparent computing mechanism for intertask communication between control modules and generic process-module architecture for multiprocessor real-time computation. Used to control robot arm. Proves useful in variety of other control and robotic applications.

  1. Piezoresistive pressure sensor array for robotic skin

    NASA Astrophysics Data System (ADS)

    Mirza, Fahad; Sahasrabuddhe, Ritvij R.; Baptist, Joshua R.; Wijesundara, Muthu B. J.; Lee, Woo H.; Popa, Dan O.

    2016-05-01

    Robots are starting to transition from the confines of the manufacturing floor to homes, schools, hospitals, and highly dynamic environments. As a result, it is impossible to foresee all the probable operational situations of robots and preprogram the robot behavior in those situations. Among human-robot interaction technologies, haptic communication is an intuitive physical interaction method that can help define operational behaviors for robots cooperating with humans. Multimodal robotic skin with distributed sensors can help robots increase the perception capabilities of their surrounding environments. Electro-Hydro-Dynamic (EHD) printing is a flexible multi-modal sensor fabrication method because of its direct printing capability for a wide range of materials onto substrates with non-uniform topographies. In past work we designed interdigitated comb electrodes as a sensing element and printed piezoresistive strain sensors using customized EHD-printable PEDOT:PSS based inks. We formulated a PEDOT:PSS derivative ink by mixing PEDOT:PSS and DMSO. Bending characterization tests of prototyped sensors showed high sensitivity and sufficient stability. In this paper, we describe SkinCells, robot skin sensor arrays integrated with electronic modules. A 4x4 EHD-printed array of strain sensors was packaged onto a Kapton sheet with silicone encapsulant and interconnected to a custom electronic module that consists of a microcontroller, a Wheatstone bridge with an adjustable digital potentiometer, a multiplexer, and a serial communication unit. Thus, SkinCell's electronics can be used for signal acquisition, conditioning, and networking between sensor modules. Several SkinCells were tested under controlled pressure, temperature, and humidity using dedicated apparatuses, and the testing results are reported in this paper.
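
The Wheatstone-bridge front end mentioned above is easy to model: with one piezoresistive taxel in a quarter-bridge against three fixed resistors, the differential output is a simple resistance ratio (component values below are illustrative, not SkinCell's):

```python
def bridge_output(v_exc, r_sensor, r_ref):
    """Differential output of a quarter Wheatstone bridge: one sensing
    taxel against three fixed resistors of value r_ref. The bridge is
    balanced (0 V output) when r_sensor == r_ref."""
    return v_exc * (r_sensor / (r_sensor + r_ref) - 0.5)
```

An adjustable digital potentiometer like the one in the module can serve as r_ref, re-balancing the bridge so the amplified output stays within the ADC's range as the taxel's baseline drifts.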

  2. Scalable fabric tactile sensor arrays for soft bodies

    NASA Astrophysics Data System (ADS)

    Day, Nathan; Penaloza, Jimmy; Santos, Veronica J.; Killpack, Marc D.

    2018-06-01

    Soft robots have the potential to transform the way robots interact with their environment. This is due to their low inertia and inherent ability to more safely interact with the world without damaging themselves or the people around them. However, existing sensing for soft robots has at least partially limited their ability to control interactions with their environment. Tactile sensors could enable soft robots to sense interaction, but most tactile sensors are made from rigid substrates and are not well suited to applications for soft robots which can deform. In addition, the benefit of being able to cheaply manufacture soft robots may be lost if the tactile sensors that cover them are expensive and their resolution does not scale well for manufacturability. This paper discusses the development of a method to make affordable, high-resolution, tactile sensor arrays (manufactured in rows and columns) that can be used for sensorizing soft robots and other soft bodies. However, the construction results in a sensor array that exhibits significant amounts of cross-talk when two taxels in the same row are compressed. Using the same fabric-based tactile sensor array construction design, two different methods for cross-talk compensation are presented. The first uses a mathematical model to calculate a change in resistance of each taxel directly. The second method introduces additional simple circuit components that enable us to isolate each taxel electrically and relate voltage to force directly. Fabric sensor arrays are demonstrated for two different soft-bodied applications: an inflatable single link robot and a human wrist.
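
The cross-talk the authors compensate for comes from sneak paths: in an unisolated row-column resistive array, current can also flow through neighbouring taxels, so each reading is the true taxel resistance in parallel with a multi-taxel detour. A 2x2 illustration of the effect (a generic model, not the paper's exact compensation math):

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

def measured_2x2(r, i, j):
    """Apparent resistance of taxel (i, j) in an unisolated 2x2
    row-column array: the direct taxel in parallel with the sneak
    path through the other column, the other row, and back."""
    oi, oj = 1 - i, 1 - j
    sneak = r[i][oj] + r[oi][oj] + r[oi][j]
    return parallel(r[i][j], sneak)
```

A uniform 100 Ω array reads 75 Ω per taxel. The first compensation method in the paper inverts a model of this kind to recover each taxel's true resistance; the second adds circuit components that isolate each taxel electrically so the sneak term disappears.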

  3. The Modular Design and Production of an Intelligent Robot Based on a Closed-Loop Control Strategy.

    PubMed

    Zhang, Libo; Zhu, Junjie; Ren, Hao; Liu, Dongdong; Meng, Dan; Wu, Yanjun; Luo, Tiejian

    2017-10-14

    Intelligent robots are part of a new generation of robots that are able to sense the surrounding environment, plan their own actions and eventually reach their targets. In recent years, reliance upon robots in both daily life and industry has increased. The protocol proposed in this paper describes the design and production of a handling robot with an intelligent search algorithm and an autonomous identification function. First, the various working modules are mechanically assembled to complete the construction of the work platform and the installation of the robotic manipulator. Then, we design a closed-loop control system and a four-quadrant motor control strategy, with the aid of debugging software, and set the steering gear identity (ID), baud rate and other working parameters to ensure that the robot achieves the desired dynamic performance and low energy consumption. Next, we debug the sensors to achieve multi-sensor fusion and accurately acquire environmental information. Finally, we implement the relevant algorithm and verify the success of the robot's function for a given application. The advantage of this approach is its reliability and flexibility, as users can develop a variety of hardware construction programs and utilize the comprehensive debugger to implement an intelligent control strategy. This allows users to set personalized requirements based on their needs with high efficiency and robustness.
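
The closed-loop motor control in a protocol like this typically reduces to a discrete PID loop around each motor's measured speed. A textbook sketch, not the paper's controller (gains and the first-order motor model are illustrative):

```python
class PID:
    """Textbook discrete PID loop, the core of closed-loop motor control."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# regulate a first-order motor model (time constant 0.1 s) to 1 rad/s
pid, speed = PID(2.0, 1.0, 0.0, 0.01), 0.0
for _ in range(2000):
    u = pid.step(1.0, speed)
    speed += (u - speed) / 0.1 * 0.01
```

Four-quadrant operation then amounts to letting the commanded output drive the motor in both directions and both torque signs, e.g. via a signed PWM duty cycle on an H-bridge.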

  4. SpaceWire-Based Control System Architecture for the Lightweight Advanced Robotic Arm Demonstrator [LARAD]

    NASA Astrophysics Data System (ADS)

    Rucinski, Marek; Coates, Adam; Montano, Giuseppe; Allouis, Elie; Jameux, David

    2015-09-01

    The Lightweight Advanced Robotic Arm Demonstrator (LARAD) is a state-of-the-art, two-meter long robotic arm for planetary surface exploration currently being developed by a UK consortium led by Airbus Defence and Space Ltd under contract to the UK Space Agency (CREST-2 programme). LARAD has a modular design, which allows for experimentation with different electronics and control software. The control system architecture includes the on-board computer, control software and firmware, and the communication infrastructure (e.g. data links, switches) connecting on-board computer(s), sensors, actuators and the end-effector. The purpose of the control system is to operate the arm according to pre-defined performance requirements, monitoring its behaviour in real-time and performing safing/recovery actions in case of faults. This paper reports on the results of a recent study about the feasibility of the development and integration of a novel control system architecture for LARAD fully based on the SpaceWire protocol. The current control system architecture is based on the combination of two communication protocols, Ethernet and CAN. The new SpaceWire-based control system will allow for improved monitoring and telecommanding performance thanks to higher communication data rate, allowing for the adoption of advanced control schemes, potentially based on multiple vision sensors, and for the handling of sophisticated end-effectors that require fine control, such as science payloads or robotic hands.

  5. Pre-shaping of the Fingertip of Robot Hand Covered with Net Structure Proximity Sensor

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Suzuki, Yosuke; Hasegawa, Hiroaki; Ming, Aiguo; Ishikawa, Masatoshi; Shimojo, Makoto

    To achieve skillful tasks with multi-fingered robot hands, many researchers have been working on sensor-based control of such hands. Vision sensors and tactile sensors are indispensable for these tasks; however, the correctness of the information from the vision sensors decreases as a robot hand approaches an object to be grasped, because of occlusion. This research aims to achieve seamless detection for reliable grasping by using proximity sensors: correcting the positional error of the hand in the vision-based approach, and bringing the fingertip into contact in a posture suited to effective tactile sensing. In this paper, we propose a method for adjusting the posture of the fingertip to the surface of the object. The method applies a “Net-Structure Proximity Sensor” on the fingertip, which can detect the postural error in the roll and pitch axes between the fingertip and the object surface. The experimental result shows that the postural error is corrected in both axes even if the object rotates dynamically.

  6. A flexible slip sensor using triboelectric nanogenerator approach

    NASA Astrophysics Data System (ADS)

    Wang, Xudong; Liang, Jiaming; Xiao, Yuxiang; Wu, Yichuan; Deng, Yang; Wang, Xiaohao; Zhang, Min

    2018-03-01

    With the rapid development of robotic technology, tactile sensors for robots have gained great attention from academic and industry researchers. Tactile sensors for slip detection are essential for human-like steady control of a dexterous robot hand. In this paper, we propose and demonstrate a flexible slip sensor based on a triboelectric nanogenerator with a seesaw structure. The sensor is composed of two porous PDMS layers separated by an inverted trapezoid structure with a height of 500 μm. In order to customize the sensitivity of the sensor, porous PDMS was fabricated by mixing PDMS with deionized water thoroughly and then removing the water with heat. Laser-induced porous graphene and aluminium serve as the pair of contact materials. To detect slip from different directions, two sets of electrode pairs were used. Experimental results show a distinct difference between the static state and the moment when a slip happens. In addition, the output voltage of the sensors increased as the slip velocity increased from 0.25 mm/s to 2.5 mm/s. The flexible slip sensor proposed here shows potential applications in smart robotics and prosthetics.

  7. Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies

    DTIC Science & Technology

    2006-07-01

    and the use of lightweight portable robotic sensor platforms. 5 robotics has reached a point where some generalities of HRI transcend specific...displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor ...tasks that do not involve mobility and usually involve camera control or data fusion from sensors Active search: Search tasks that involve mobility

  8. Automated Robot Movement in the Mapped Area Using Fuzzy Logic for Wheel Chair Application

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Efendi, S.; Ramadhana, H.; Andayani, U.; Fahmi, F.

    2018-03-01

    The difficulty that people with disabilities have in moving can keep them from living independently; they need a supporting device to move from place to place. For that, we propose a solution that can help people with disabilities move from one room to another automatically. This study aims to create a wheelchair prototype in the form of a wheeled robot as a means to study automatic mobilization. A fuzzy logic algorithm was used to determine the motion direction based on the initial position, with ultrasonic sensor readings used to avoid obstacles, infrared sensor readings used to follow a black line so that the wheeled robot moves smoothly, and a smartphone used as a mobile controller. As a result, a smartphone with the Android operating system can control the robot using Bluetooth; here, Bluetooth technology can control the robot from a maximum distance of 15 meters. The proposed algorithm worked stably for automatic motion determination based on the initial position and was able to automate the wheelchair's movement from one room to another.
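
The fuzzy element of such a controller reduces to membership functions, a small rule base, and defuzzification. A toy two-rule version for the obstacle-distance input (membership ranges and the 45° output are made up for illustration, not taken from the paper):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def turn_rate(distance_cm):
    """Two-rule controller: IF obstacle NEAR THEN turn hard,
    IF obstacle FAR THEN go straight; weighted-average defuzzification."""
    near = tri(distance_cm, -1.0, 0.0, 40.0)   # fully 'near' at contact
    far = tri(distance_cm, 20.0, 60.0, 1e9)    # fully 'far' beyond 60 cm
    w = near + far
    return (near * 45.0 + far * 0.0) / w if w else 0.0
```

Overlapping memberships are what give the robot smooth transitions: at 30 cm both rules fire partially and the output blends between them.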

  9. 3D Printed Wearable Sensors with Liquid Metals for the Pose Detection of Snakelike Soft Robots.

    PubMed

    Zhou, Luyu; Gao, Qing; Zhan, Jun-Fu; Xie, Chao-Qi; Fu, Jianzhong; He, Yong

    2018-06-18

    Liquid metal-based flexible sensors, which utilize an advanced liquid conductive material as the sensitive element, are emerging as a promising solution for measuring large deformations. Nowadays, one of the biggest challenges for the precise control of soft robots is the detection of their real-time positions, and existing fabrication methods are unable to produce flexible sensors that match the shape of soft robots. In this report, we first describe a novel 3D-printed multi-function inductive flexible and stretchable sensor with liquid metals (LMs), which is capable of measuring both axial tension and curvature. This sensor is fabricated with a coaxial liquid metal 3D printer that we developed, by co-printing silicone rubber and LMs. Due to its solenoid shape, this sensor can be easily installed on snakelike soft robots and can accurately distinguish different degrees of tensile and bending deformation. We determined the structural parameters of the sensor and proved its excellent stability and reliability. As a demonstration, we used this sensor to measure the curvature of a finger and to feed back the position of an endoscope, a typical snakelike structure. Because its bending deformation matches the actual working state of a soft robot, and because of its unique shape, this sensor has strong practical prospects for pose detection.

  10. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. The integration of sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.

  11. Sensor-based fine telemanipulation for space robotics

    NASA Technical Reports Server (NTRS)

    Andrenucci, M.; Bergamasco, M.; Dario, P.

    1989-01-01

    The control of a multifingered hand slave in order to accurately exert arbitrary forces and impart small movements to a grasped object is, at present, a knotty problem in teleoperation. Although a number of articulated robotic hands have been proposed in the recent past for dexterous manipulation in autonomous robots, the possible use of such hands as slaves in teleoperated manipulation is hindered by the present lack of sensors in those hands, and (even if those sensors were available) by the inherent difficulty of transmitting to the master operator the complex sensations elicited by such sensors at the slave level. An analysis of different problems related to sensor-based telemanipulation is presented. The general sensory systems requirements for dexterous slave manipulators are pointed out and the description of a practical sensory system set-up for the developed robotic system is presented. The problem of feeding back to the human master operator stimuli that can be interpreted by his central nervous system as originated during real dexterous manipulation is then considered. Finally, some preliminary work aimed at developing an instrumented glove designed purposely for commanding the master operation and incorporating Kevlar tendons and tension sensors, is discussed.

  12. Distributed multirobot sensing and tracking: a behavior-based approach

    NASA Astrophysics Data System (ADS)

    Parker, Lynne E.

    1995-09-01

    An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors--or robots--to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.

  13. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  14. Intelligent, self-contained robotic hand

    DOEpatents

    Krutik, Vitaliy; Doo, Burt; Townsend, William T.; Hauptman, Traveler; Crowell, Adam; Zenowich, Brian; Lawson, John

    2007-01-30

    A robotic device has a base and at least one finger having at least two links that are connected in series on rotary joints with at least two degrees of freedom. A brushless motor and an associated controller are located at each joint to produce a rotational movement of a link. Wires for electrical power and communication serially connect the controllers in a distributed control network. A network operating controller coordinates the operation of the network, including power distribution. At least one, but more typically two to five, wires interconnect all the controllers through one or more joints. Motor sensors and external world sensors monitor operating parameters of the robotic hand. The electrical signal output of the sensors can be input anywhere on the distributed control network. V-grooves on the robotic hand locate objects precisely and assist in gripping. The hand is sealed, immersible and has electrical connections through the rotary joints for anodizing in a single dunk without masking. In various forms, this intelligent, self-contained, dexterous hand, or combinations of such hands, can perform a wide variety of object gripping and manipulating tasks, as well as locomotion and combinations of locomotion and gripping.

  15. Process for anodizing a robotic device

    DOEpatents

    Townsend, William T. [Weston, MA]

    2011-11-08

    A robotic device has a base and at least one finger having at least two links that are connected in series on rotary joints with at least two degrees of freedom. A brushless motor and an associated controller are located at each joint to produce a rotational movement of a link. Wires for electrical power and communication serially connect the controllers in a distributed control network. A network operating controller coordinates the operation of the network, including power distribution. At least one, but more typically two to five, wires interconnect all the controllers through one or more joints. Motor sensors and external world sensors monitor operating parameters of the robotic hand. The electrical signal output of the sensors can be input anywhere on the distributed control network. V-grooves on the robotic hand locate objects precisely and assist in gripping. The hand is sealed, immersible and has electrical connections through the rotary joints for anodizing in a single dunk without masking. In various forms, this intelligent, self-contained, dexterous hand, or combinations of such hands, can perform a wide variety of object gripping and manipulating tasks, as well as locomotion and combinations of locomotion and gripping.

  16. A remote assessment system with a vision robot and wearable sensors.

    PubMed

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes a remote rehabilitation assessment system under ongoing research, which has a six-degree-of-freedom dual-eye vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can run an assessment program semi-automatically via the Internet. The preliminary results show that the smart devices, namely the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  17. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    PubMed Central

    Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles

    2017-01-01

    The control of flexible-link parallel manipulators is still an open area of research, with endpoint trajectory tracking one of the main challenges in this type of robot. The flexibility and deformation of the limbs make estimating the Tool Centre Point (TCP) position a challenging task. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or integration methods with high computational cost. This work presents a novel approach based on a virtual sensor which, according to simulation results, can precisely estimate not only the deformation of the flexible links in control applications (less than 2% error) but also its derivatives (less than 6% error in velocity and 13% error in acceleration). The validity of the proposed Virtual Sensor is tested on a Delta Robot, where the position of the TCP is estimated from the Virtual Sensor measurements with less than 0.03% error compared with the flexible model developed in the ADAMS multibody software. PMID:28832510

  18. Biologically Inspired SNN for Robot Control.

    PubMed

    Nichols, Eric; McDaid, Liam J; Siddique, Nazmul

    2013-02-01

    This paper proposes a spiking-neural-network-based robot controller inspired by the control structures of biological systems. Information is routed through the network using facilitating dynamic synapses with short-term plasticity. Learning occurs through long-term synaptic plasticity which is implemented using the temporal difference learning rule to enable the robot to learn to associate the correct movement with the appropriate input conditions. The network self-organizes to provide memories of environments that the robot encounters. A Pioneer robot simulator with laser and sonar proximity sensors is used to verify the performance of the network with a wall-following task, and the results are presented.
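    The long-term plasticity described above is based on the temporal-difference learning rule. A minimal sketch of a tabular TD(0) value update conveys the idea; the state names, reward, and gains are illustrative, and this is not the authors' spiking implementation:

```python
# Minimal tabular TD(0) update: move V[s] toward the bootstrapped
# target reward + gamma * V[s_next]. Illustrative of the learning rule
# named in the abstract, not the paper's synaptic implementation.

def td_update(V, s, s_next, reward, alpha=0.1, gamma=0.9):
    """Apply one TD(0) update to the value table V and return the TD error."""
    td_error = reward + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error

# Toy wall-following reward: staying at a good distance from the wall
# is rewarded, so its value estimate converges to 1/(1 - gamma) = 10.
V = {"too_close": 0.0, "good_distance": 0.0, "too_far": 0.0}
for _ in range(1000):
    td_update(V, "good_distance", "good_distance", reward=1.0)
```

    In the paper this error signal modulates long-term synaptic weight changes, so that movements followed by reward become associated with the sensory conditions that preceded them.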

  19. Robot Position Sensor Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault-tolerant designs require the addition of directly redundant position sensors, which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and to improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian-accelerometer-based methods are proposed to determine the position of the joint. The joint-specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor; it is not computationally complex and its position error is bounded. The system-wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the positions of sets of multiple joints. The system-wide method requires fewer accelerometers than the joint-specific method to make all joint position sensors fault tolerant, but it is more computationally complex and converges more slowly. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.
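    The core of the joint-specific method, recovering a joint angle from accelerometers on either side of the joint, can be sketched for the static case: the joint angle is the angle between the gravity directions the two sensors report. The rotation axis and numbers below are illustrative assumptions, not the paper's estimator:

```python
import math

def joint_angle_from_gravity(acc_base, acc_link):
    """Static sketch of accelerometer-based joint position determination:
    the revolute joint angle is the angle between the gravity directions
    seen by accelerometers on either side of the joint.  Rotation is
    assumed to be about the sensor z-axis (illustrative only)."""
    ang_base = math.atan2(acc_base[1], acc_base[0])
    ang_link = math.atan2(acc_link[1], acc_link[0])
    d = ang_base - ang_link
    return math.atan2(math.sin(d), math.cos(d))  # wrap to (-pi, pi]

# Gravity as seen in the base frame, and in a link frame rotated by 30 deg:
g, theta = 9.81, math.radians(30.0)
acc_base = (0.0, -g)
acc_link = (-g * math.sin(theta), -g * math.cos(theta))
est = joint_angle_from_gravity(acc_base, acc_link)
```

    The paper's methods additionally handle motion (not just the static gravity vector), which is where the bounded-error and convergence trade-offs between the joint-specific and system-wide variants arise.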

  20. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    NASA Astrophysics Data System (ADS)

    Oda, Naoki; Kotani, Kentaro

    The paper describes a position and posture controller for a redundant robot manipulator, based on the absolute position measured by an external PSD vision sensor. The redundancy provides a potential capability to avoid obstacles while continuing given end-effector jobs under contact with a middle link of the manipulator. Under contact motion, the deformation due to joint torsion, obtained by comparing the internal and external position sensors, is actively suppressed by an internal/external position hybrid controller. The selection matrix of the hybrid loop is given as a function of the deformation, and the detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results on a 3-link planar redundant manipulator.
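    The idea of a selection weight that shifts feedback between the internal (encoder) and external (PSD camera) sensors as a function of detected deformation can be sketched in one dimension; the blending law and threshold below are illustrative assumptions, not the paper's selection matrix:

```python
def blended_position(q_internal, q_external, deformation, d_max=0.01):
    """Blend encoder (internal) and PSD camera (external) position feedback.
    The weight on the external sensor grows with the detected deformation
    (the internal/external discrepancy), echoing a selection matrix that is
    a function of deformation.  d_max (rad) is an illustrative threshold."""
    w = min(abs(deformation) / d_max, 1.0)  # 0: trust encoder, 1: trust camera
    return (1.0 - w) * q_internal + w * q_external

# No deformation detected: pure internal feedback.
q_a = blended_position(1.00, 1.05, deformation=0.0)
# Deformation at or beyond the threshold: pure external feedback.
q_b = blended_position(1.00, 1.05, deformation=0.02)
```

    In free motion the two sensors agree and the encoder dominates; under contact, joint torsion opens a gap between them and the externally measured absolute position takes over.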

  1. Cooperative Robots to Observe Moving Targets: Review.

    PubMed

    Khan, Asif; Rinner, Bernhard; Cavallaro, Andrea

    2018-01-01

    The deployment of multiple robots for achieving a common goal helps to improve the performance, efficiency, and/or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss the control techniques: cooperative multirobot observation of multiple moving targets, cooperative search, acquisition, and track, cooperative tracking, and multirobot pursuit evasion. We also identify the five major elements that characterize this problem, namely, the coordination method, the environment, the target, the robot and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is useful also for related multirobot applications.

  2. Peer-to-peer model for the area coverage and cooperative control of mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Tan, Jindong; Xi, Ning

    2004-09-01

    This paper presents a novel model and distributed algorithms for the cooperation and redeployment of mobile sensor networks. A mobile sensor network is composed of a collection of wirelessly connected mobile robots equipped with a variety of sensors. In such a sensor network, each mobile node has sensing, computation, communication, and locomotion capabilities. The locomotion ability enhances the autonomous deployment of the system, which can be rapidly deployed in hostile environments, inaccessible terrain, or disaster relief operations. The mobile sensor network is essentially a cooperative multiple-robot system. This paper first presents a peer-to-peer model to define the relationship between neighboring communicating robots; Delaunay triangulation and Voronoi diagrams are used to define the geometrical relationship between sensor nodes. This distributed model allows formal analysis of the fusion of spatio-temporal sensory information in the network. Based on the distributed model, the paper discusses a fault-tolerant algorithm for autonomous self-deployment of the mobile robots. The algorithm considers the environment constraints, the presence of obstacles, and the nonholonomic constraints of the robots, and enables the system to reconfigure itself so that the area covered by the system is enlarged. Simulation results have shown the effectiveness of the distributed model and deployment algorithms.
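    The redeployment idea, where each node uses only its communicating neighbors to spread out and enlarge coverage, can be sketched with a simple pairwise repulsion rule. This is a toy stand-in under assumed parameters, not the paper's Delaunay/Voronoi formulation or its nonholonomic constraints:

```python
import math

def spread_step(positions, comm_range=3.0, step=0.1):
    """One distributed redeployment step: each node moves away from every
    neighbor inside its communication range, enlarging the covered area.
    A toy stand-in for the paper's Voronoi-based peer-to-peer rule."""
    new_positions = []
    for i, (xi, yi) in enumerate(positions):
        dx = dy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if 0 < d < comm_range:          # only communicating neighbors repel
                dx += (xi - xj) / d
                dy += (yi - yj) / d
        new_positions.append((xi + step * dx, yi + step * dy))
    return new_positions

# Three tightly clustered nodes spread apart over repeated local steps.
nodes = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.4)]
for _ in range(50):
    nodes = spread_step(nodes)
```

    Because the force vanishes beyond the communication range, separations saturate near that range, which is one simple way coverage can grow without the network disconnecting.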

  3. Knowledge assistant: A sensor fusion framework for robotic environmental characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feddema, J.T.; Rivera, J.J.; Tucker, S.D.

    1996-12-01

    A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of the sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post-analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g., estimated dimensions, weight, material composition) are displayed in the world model. This paper highlights the major components of this system.

  4. Modeling and controlling a robotic convoy using guidance laws strategies.

    PubMed

    Belkhouche, Fethi; Belkhouche, Boumediene

    2005-08-01

    This paper deals with the problem of modeling and controlling a robotic convoy. Guidance-law techniques are used to provide a mathematical formulation of the problem; the guidance laws used for this purpose are the velocity pursuit, the deviated pursuit, and proportional navigation. The velocity pursuit equations model the robot's path under various sensor-based control laws, and a systematic study of the tracking problem based on this technique is undertaken. These guidance laws are applied to derive decentralized control laws for the angular and linear velocities. For the angular velocity, the control law is derived directly from the guidance laws after considering the relative kinematics equations between successive robots. The second control law keeps the distance between successive robots constant by controlling the linear velocity; it is derived by considering the kinematics equations between successive robots under the considered guidance law. Properties of the method are discussed and proven. Simulation results confirm the validity of our approach, as well as the validity of the properties of the method. Index Terms: guidance laws, relative kinematics equations, robotic convoy, tracking.
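    The two decentralized control laws can be sketched for a unicycle-model follower: pursuit guidance sets the angular velocity from the bearing to the preceding robot, and the linear velocity regulates the inter-robot gap. The gains, leader speed, and kinematic model below are illustrative assumptions:

```python
import math

def convoy_control(follower, leader, d_des=1.0, k_w=2.0, k_v=1.0, v_lead=0.5):
    """Decentralized convoy control in the spirit of the paper:
    angular velocity from velocity-pursuit guidance (steer toward the
    leader's bearing), linear velocity to hold the gap at d_des.
    Gains and the unicycle model are illustrative."""
    x, y, th = follower
    lx, ly = leader
    bearing = math.atan2(ly - y, lx - x)
    heading_err = math.atan2(math.sin(bearing - th), math.cos(bearing - th))
    dist = math.hypot(lx - x, ly - y)
    w = k_w * heading_err                # pursuit steering toward the leader
    v = v_lead + k_v * (dist - d_des)    # close (or open) the gap
    return v, w

# Simulate: leader moves along +x, follower starts behind and offset.
dt = 0.05
leader = [0.0, 0.0]
follower = [-3.0, 1.0, 0.0]              # x, y, heading
for _ in range(1000):
    v, w = convoy_control(tuple(follower), tuple(leader))
    follower[0] += v * math.cos(follower[2]) * dt
    follower[1] += v * math.sin(follower[2]) * dt
    follower[2] += w * dt
    leader[0] += 0.5 * dt
```

    When the follower is aligned behind the leader, the gap error decays roughly exponentially, which is the behavior the paper's properties formalize for the pursuit-based laws.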

  5. General visual robot controller networks via artificial evolution

    NASA Astrophysics Data System (ADS)

    Cliff, David; Harvey, Inman; Husbands, Philip

    1993-08-01

    We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.

  6. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements from odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and of natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the tracked feature points in the image sequence together with the robot's velocity and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  7. Landmark-based robust navigation for tactical UGV control in GPS-denied communication-degraded environments

    NASA Astrophysics Data System (ADS)

    Endo, Yoichiro; Balloch, Jonathan C.; Grushin, Alexander; Lee, Mun Wai; Handelman, David

    2016-05-01

    Control of current tactical unmanned ground vehicles (UGVs) is typically accomplished through two alternative modes of operation, namely, low-level manual control using joysticks and high-level planning-based autonomous control. Each mode has its own merits as well as inherent mission-critical disadvantages. Low-level joystick control is vulnerable to communication delay and degradation, and high-level navigation often depends on uninterrupted GPS signals and/or energy-emissive (non-stealth) range sensors such as LIDAR for localization and mapping. To address these problems, we have developed a mid-level control technique where the operator semi-autonomously drives the robot relative to visible landmarks that are commonly recognizable by both humans and machines such as closed contours and structured lines. Our novel solution relies solely on optical and non-optical passive sensors and can be operated under GPS-denied, communication-degraded environments. To control the robot using these landmarks, we developed an interactive graphical user interface (GUI) that allows the operator to select landmarks in the robot's view and direct the robot relative to one or more of the landmarks. The integrated UGV control system was evaluated based on its ability to robustly navigate through indoor environments. The system was successfully field tested with QinetiQ North America's TALON UGV and Tactical Robot Controller (TRC), a ruggedized operator control unit (OCU). We found that the proposed system is indeed robust against communication delay and degradation, and provides the operator with steady and reliable control of the UGV in realistic tactical scenarios.

  8. Heuristic control of the Utah/MIT dextrous robot hand

    NASA Technical Reports Server (NTRS)

    Bass, Andrew H., Jr.

    1987-01-01

    Basic hand grips and sensor interactions that a dextrous robot hand will need as part of the operation of an EVA Retriever are analyzed. What is to be done with a dextrous robot hand is examined, along with how such a complex machine might be controlled. It was assumed throughout that an anthropomorphic robot hand should perform tasks just as a human would; i.e., the most efficient approach to developing control strategies for the hand would be to model actual hand actions and do the same tasks in the same ways. Therefore, the basic grips that human hands perform, as well as hand grip action, were analyzed. It was also important to examine what is termed sensor fusion: the integration of various disparate sensor feedback paths, which can be spatially and temporally separated as well as of different sensor types. Neural networks are seen as a means of integrating these varied sensor inputs and types. Basic heuristics of hand actions and grips were developed. These heuristics offer promise of controlling dextrous robot hands in a more natural and efficient way.

  9. Automation and Robotics for Space-Based Systems, 1991

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II (Editor)

    1992-01-01

    The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.

  10. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, cameras, and infrared and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or produce inaccurate readings. The integration of sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and a line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766
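    The sensors-to-wheel-velocities mapping can be sketched with a tiny Mamdani-style controller: fuzzify distances, fire rules, and defuzzify by weighted average. The memberships, rules, and speeds below are illustrative; they stand in for, and are much smaller than, the paper's 24-rule system with nine inputs:

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_avoid(left_dist, right_dist):
    """Tiny Mamdani-style sketch of fuzzy-fusion collision avoidance:
    sensor memberships -> rule strengths -> (left, right) wheel speeds
    by weighted-average defuzzification.  Illustrative only."""
    near_l = tri(left_dist, -0.5, 0.0, 1.0)   # degree "obstacle near on left"
    near_r = tri(right_dist, -0.5, 0.0, 1.0)  # degree "obstacle near on right"
    clear = max(0.0, 1.0 - max(near_l, near_r))
    # (firing strength, (left wheel speed, right wheel speed)) per rule:
    rules = [(clear, (0.5, 0.5)),     # path clear -> go straight
             (near_l, (0.5, -0.1)),   # obstacle left -> turn right
             (near_r, (-0.1, 0.5))]   # obstacle right -> turn left
    total = sum(w for w, _ in rules) or 1.0
    vl = sum(w * out[0] for w, out in rules) / total
    vr = sum(w * out[1] for w, out in rules) / total
    return vl, vr
```

    With both sides clear the robot drives straight; as one side's distance shrinks, the corresponding rule fires more strongly and the wheel speeds diverge smoothly rather than switching abruptly.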

  11. An implementation of sensor-based force feedback in a compact laparoscopic surgery robot.

    PubMed

    Lee, Duk-Hee; Choi, Jaesoon; Park, Jun-Woo; Bach, Du-Jin; Song, Seung-Jun; Kim, Yoon-Ho; Jo, Yungho; Sun, Kyung

    2009-01-01

    Despite the rapid progress in the clinical application of laparoscopic surgery robots, many shortcomings have not yet been fully overcome, one of which is the lack of reliable haptic feedback. This study implemented a force-feedback structure in our compact laparoscopic surgery robot, a master-slave configuration robot with 5 DOF (degrees of freedom) corresponding to laparoscopic surgical motion. The force feedback was implemented with torque sensors and controllers installed in the pitch joint of the master and slave robots. A simple dynamic model of the action-reaction force in the slave robot was used, through which the reflective force was estimated and fed back to the master robot. The results showed that the system model could be identified with significant fidelity and that force feedback at the master robot was feasible. However, qualitative human assessment of the fed-back force showed only a limited level of object discrimination ability. Further developments are underway with this result as a framework.

  12. Toward controlling perturbations in robotic sensor networks

    NASA Astrophysics Data System (ADS)

    Banerjee, Ashis G.; Majumder, Saikat R.

    2014-06-01

    Robotic sensor networks (RSNs), which consist of networks of sensors placed on mobile robots, are being increasingly used for environment monitoring applications. In particular, a lot of work has been done on simultaneous localization and mapping of the robots, and on optimal sensor placement for environment state estimation [1]. The deployment of RSNs, however, remains challenging in harsh environments where the RSNs have to deal with significant perturbations in the form of wind gusts, turbulent water flows, sand storms, or blizzards that disrupt inter-robot communication and individual robot stability. Hence, there is a need to be able to control such perturbations and bring the networks to desirable states with stable nodes (robots) and minimal operational performance (environment sensing). Recent work has demonstrated the feasibility of controlling the non-linear dynamics in other communication networks, such as emergency management systems and power grids, by introducing compensatory perturbations to restore network stability and operation [2]. In this paper, we develop a computational framework to investigate the usefulness of this approach for RSNs in marine environments. Preliminary analysis shows promising performance and identifies bounds on the original perturbations within which it is possible to control the networks.

  13. Method for six-legged robot stepping on obstacles by indirect force estimation

    NASA Astrophysics Data System (ADS)

    Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun

    2016-07-01

    Adaptive gaits for legged robots often require force sensors installed on the foot tips; however, impact, temperature, or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct and inverse kinematics models are established, and the force Jacobian matrix is derived from the kinematics model, yielding the indirect force estimation model. The relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is then described. Furthermore, an adaptive tripod static gait is designed: the robot alters its leg trajectory to step onto obstacles using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. One experiment validates the indirect force estimation model, and the adaptive gait is tested in another. The results show that the robot can successfully step onto a 0.2 m-high obstacle. The proposed method allows a six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment at the robot's foot tips.
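    The estimation step rests on the static identity tau = J^T F between foot-tip force and motor torques, so the external force follows from the measured torques by inverting the transposed Jacobian. A hedged numpy sketch, with a made-up well-conditioned 3x3 Jacobian in place of Octopus III's actual leg model:

```python
import numpy as np

def estimate_foot_force(J, tau):
    """Indirect force estimation: solve tau = J.T @ F for the external
    force F at the foot tip, given the measured motor torques tau.
    J is the leg's force Jacobian (illustrative values below)."""
    return np.linalg.solve(J.T, tau)

J = np.array([[0.12, 0.00, 0.03],
              [0.02, 0.10, 0.01],
              [0.00, 0.04, 0.08]])     # made-up, well-conditioned leg Jacobian
F_true = np.array([5.0, -2.0, 30.0])   # N: an assumed contact load on the foot
tau = J.T @ F_true                     # torques the three motors would report
F_est = estimate_foot_force(J, tau)
```

    In practice the Jacobian varies with leg configuration, so it must be re-evaluated from the kinematics model at each pose before solving; the adaptive gait then thresholds the estimated vertical force to detect contact with an obstacle.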

  14. Controlling the autonomy of a reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David

    2004-09-01

    In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded, and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, and waypoint and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.

  15. A Force-Sensing System on Legs for Biomimetic Hexapod Robots Interacting with Unstructured Terrain

    PubMed Central

    Wu, Rui; Li, Changle; Zang, Xizhe; Zhang, Xuehe; Jin, Hongzhe; Zhao, Jie

    2017-01-01

    The tiger beetle can maintain its stability by controlling the interaction force between its legs and an unstructured terrain while it runs. The biomimetic hexapod robot mimics the tiger beetle, and a comprehensive force sensing system combined with suitable algorithms can provide force information that helps the robot understand the unstructured terrain it interacts with. This study introduces a leg force sensing system for a hexapod robot that is identical for all six legs. First, the layout and configuration of the sensing system are designed according to the structure and sizes of the legs. Second, the joint torque sensors, the 3-DOF foot-end force sensor, and the force information processing module are designed, and the force sensor performance parameters are tested by simulations and experiments. Moreover, the force sensing system is implemented within the robot control architecture. Finally, the experimental evaluation of the leg force sensing system on the hexapod robot is discussed and its performance is verified. PMID:28654003

  16. A design of endoscopic imaging system for hyper long pipeline based on wheeled pipe robot

    NASA Astrophysics Data System (ADS)

    Zheng, Dongtian; Tan, Haishu; Zhou, Fuqiang

    2017-03-01

    An endoscopic imaging system for hyper-long pipelines is designed to acquire inner-surface images in advance of defect measurement. The system consists of structured-light sensors, a wheeled pipe robot, and a control system. The sensor is mounted at the front of the vehicle body, and the control system, in the form of upper and lower computers, is at the tail. The sensor can be translated and scanned in three steps (walking, lifting, and scanning), so that inner-surface images can be acquired at multiple positions and from different angles. Imaging experiments show that, compared with traditional imaging systems, this system's transmission distance is longer, its acquisition angles are more diverse, and its results are more comprehensive, which lays an important foundation for subsequent vision-based measurement of the inner surface.

  17. Influence of control parameters on the joint tracking performance of a coaxial weld vision system

    NASA Technical Reports Server (NTRS)

    Gangl, K. J.; Weeks, J. L.

    1985-01-01

    The first phase of a series of evaluations of a vision-based welding control sensor for the Space Shuttle Main Engine Robotic Welding System is described. The robotic welding system is presently under development at the Marshall Space Flight Center. This evaluation determines the standard control response parameters necessary for proper trajectory of the welding torch along the joint.

  18. Machine intelligence and autonomy for aerospace systems

    NASA Technical Reports Server (NTRS)

    Heer, Ewald (Editor); Lum, Henry (Editor)

    1988-01-01

    The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.

  19. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    PubMed Central

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  20. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  1. Autonomous caregiver following robotic wheelchair

    NASA Astrophysics Data System (ADS)

    Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary

    2011-12-01

    In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities from a communication standpoint. From this point of view, we have proposed a robotic wheelchair that moves side by side with a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. An image is captured using a camera interfaced with the DM6437 (Davinci Code Processor). The captured images are then processed using image processing techniques, converted into voltage levels through a MAX 232 level converter, and sent serially to the microcontroller unit; an ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to detect obstacles, while in manual mode the keypad is used to operate the wheelchair. The microcontroller unit is programmed in C, and according to this code the robot connected to it is controlled. The robot's motors are activated through motor drivers, which are essentially switches that turn the motors on and off according to the control signals from the microcontroller unit.

  2. Compact Tactile Sensors for Robot Fingers

    NASA Technical Reports Server (NTRS)

    Martin, Toby B.; Lussy, David; Gaudiano, Frank; Hulse, Aaron; Diftler, Myron A.; Rodriguez, Dagoberto; Bielski, Paul; Butzer, Melisa

    2004-01-01

    Compact transducer arrays that measure spatial distributions of force or pressure have been demonstrated as prototypes of tactile sensors to be mounted on fingers and palms of dexterous robot hands. The pressure- or force-distribution feedback provided by these sensors is essential for the further development and implementation of robot-control capabilities for humanlike grasping and manipulation.

  3. Object positioning in storages of robotized workcells using LabVIEW Vision

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Sękala, A.; Gwiazda, A.; Foit, K.; Kost, G.

    2015-11-01

    During the manufacturing process, each performed task is previously developed and adapted to the conditions and the possibilities of the manufacturing plant. The production process is supervised by a team of specialists because any downtime causes great loss of time and hence financial loss. Sensors used in industry for tracking and supervising the various stages of a production process make it much easier to keep the process running continuously. One group of sensors used in industrial applications is non-contact sensors. This group includes: light barriers, optical sensors, rangefinders, vision systems, and ultrasonic sensors. Thanks to the rapid development of electronics, vision systems have become widespread as the most flexible type of non-contact sensor. These systems consist of cameras, devices for data acquisition, devices for data analysis, and specialized software. Vision systems work well both as sensors that control the production process itself and as sensors that control the product quality level. The LabVIEW program, together with LabVIEW Vision and LabVIEW Builder, makes it possible to program informatics systems intended for process and product quality control. The paper presents an application developed for positioning elements in a robotized workcell. Based on the geometric parameters of a manipulated object, or on a previously developed graphical pattern, it is possible to determine the position of particular manipulated elements. This application can work in automatic mode and in real time, cooperating with the robot control system. It allows the workcell to function more autonomously.

  4. A mobile robots experimental environment with event-based wireless communication.

    PubMed

    Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián

    2013-07-22

    An experimental platform to communicate between a set of mobile robots through a wireless network has been developed. The mobile robots get their position through a camera, which acts as a sensor. The video images are processed in a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented.
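
    The event-triggered communication rule described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes single-axis positions, a ring communication graph, and an illustrative broadcast threshold; each robot rebroadcasts its position only when it has drifted from its last broadcast value by more than that threshold.

```python
import numpy as np

def simulate(x0, offsets, delta=0.05, dt=0.01, steps=2000):
    """Event-triggered formation sketch (illustrative parameters).

    x0: initial positions; offsets: desired formation offsets.
    Each robot broadcasts its position only when it has moved more
    than `delta` since its last broadcast; neighbors run a consensus
    law on (broadcast position - offset).
    """
    x = np.array(x0, dtype=float)
    xb = x.copy()                    # last broadcast positions
    events = 0
    n = len(x)
    for _ in range(steps):
        trig = np.abs(x - xb) > delta     # event rule per robot
        xb[trig] = x[trig]                # broadcast at event times only
        events += int(trig.sum())
        z = xb - offsets                  # formation-shifted broadcasts
        u = np.zeros(n)
        for i in range(n):                # ring graph: two neighbors each
            for j in ((i - 1) % n, (i + 1) % n):
                u[i] += z[j] - z[i]
        x += dt * u
    return x, events

x, events = simulate(x0=[0.0, 1.0, 3.0], offsets=[0.0, 1.0, 2.0])
# robots converge so that x_i - offset_i (approximately) agree
spread = np.ptp(x - np.array([0.0, 1.0, 2.0]))
```

    Because broadcasts happen only at event times, the robots reach the formation to within a threshold-dependent neighborhood while communicating far less often than once per control step.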

  5. Extensibility in local sensor based planning for hyper-redundant manipulators (robot snakes)

    NASA Technical Reports Server (NTRS)

    Choset, Howie; Burdick, Joel

    1994-01-01

    Partial Shape Modification (PSM) is a local sensor feedback method used for hyper-redundant robot manipulators, in which the redundancy is very large or infinite, such as in a robot snake. This redundancy enables local obstacle avoidance and end-effector placement in real time. Due to the large number of joints or actuators in a hyper-redundant manipulator, small displacement errors easily accumulate into large errors in the position of the tip relative to the base. The accuracy can be improved by a local sensor-based planning method in which sensors are distributed along the length of the hyper-redundant robot. This paper extends the local sensor-based planning strategy beyond the limitations of the fixed length of such a manipulator when its joint limits are met. This is achieved with an algorithm in which the length of the deforming part of the robot is variable. Thus, the robot's local avoidance of obstacles is improved through the enhancement of its extensibility.

  6. Precise computer controlled positioning of robot end effectors using force sensors

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.; Mcinnis, B. C.; Wang, J. C.

    1988-01-01

    A thorough study of combined position/force control using sensory feedback was performed for a one-dimensional manipulator model, which may represent the spacecraft docking problem or be extended to the multi-joint robot manipulator problem. The additional degree of freedom introduced by the compliant force sensor is included in the system dynamics in the design of precise position control. State feedback based on the pole placement method, with integral control, is used to design the position controller. A simple constant-gain force controller is used as an example to illustrate the dependence of the stability and steady-state accuracy of the overall position/force control upon the design of the inner position controller. Supportive simulation results are also provided.
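
    The pole-placement step mentioned above can be illustrated with Ackermann's formula on a one-dimensional (double-integrator) axis. This is a hedged sketch: the unit-mass model, the pole locations, and the resulting gains are illustrative, not the paper's values, and the integral augmentation the paper adds is omitted for brevity.

```python
import numpy as np

# 1-DOF axis modeled as a double integrator with unit mass:
# states are (position, velocity), input is force.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def ackermann(A, B, poles):
    """State-feedback gain K placing eigenvalues of A - B K at `poles`."""
    n = A.shape[0]
    # controllability matrix [B, AB, ..., A^(n-1) B]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # desired characteristic polynomial evaluated at A
    coeffs = np.poly(poles)        # e.g. s^2 + 4s + 4 for poles at -2, -2
    phiA = sum(c * np.linalg.matrix_power(A, n - k)
               for k, c in enumerate(coeffs))
    e = np.zeros((1, n)); e[0, -1] = 1.0
    return e @ np.linalg.inv(C) @ phiA

K = ackermann(A, B, poles=[-2.0, -2.0])
# closed-loop matrix A - B K should have both poles at -2
cl_poles = np.linalg.eigvals(A - B @ K)
```

    For this model the formula gives K = [4, 4], i.e. position gain 4 and velocity gain 4, which is easy to verify by expanding det(sI - (A - BK)).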

  7. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B.

    1991-01-01

    Sensor-based robot motion planning research has primarily focused on mobile robots. Consider, however, the case of a robot manipulator expected to operate autonomously in a dynamic environment where unexpected collisions can occur with many parts of the robot. Only a sensor-based system capable of generating collision-free paths would be acceptable in such situations. Recently, work in this area has been reported in which a deterministic solution for 2-DOF systems was generated; the arm was sensitized with a 'skin' of infra-red sensors. We have proposed a heuristic (potential-field-based) methodology for redundant robots with many DOFs. The key concepts are solving the path planning problem with cooperating global and local planning modules, the use of complete information from the sensors and partial (but appropriate) information from a world model, representation of objects with hyper-ellipsoids in the world model, and the use of variational planning. We intend to sensitize the robot arm with a 'skin' of capacitive proximity sensors. These sensors were developed at NASA and are exceptionally well suited for the space application. In the first part of the report, we discuss the development and modeling of the capacitive proximity sensor. In the second part we discuss the motion planning algorithm.

  8. Recent CESAR (Center for Engineering Systems Advanced Research) research activities in sensor based reasoning for autonomous machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; de Saussure, G.; Spelt, P.F.

    1988-01-01

    This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a-priori unknown and dynamic environments, goal recognition, vision-guided manipulation, and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at a process control panel, read and understand the status of the panel's meters and dials, learn the functioning of the panel, and successfully manipulate its control devices to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.

  9. Interactive robot control system and method of use

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)

    2012-01-01

    A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes a command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays the status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying the status and operation of the robot using the CDL. The parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within the sequence.

  10. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    PubMed Central

    Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. Besides, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulation results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  11. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. Besides, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulation results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.
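
    A useful reference point for the OOS problem, simpler than any of the algorithms surveyed, is the buffer-and-reprocess baseline: keep timestamped measurements in a buffer, and when a delayed one arrives, re-run the filter over the sorted buffer. The sketch below assumes a 1D constant-velocity linear Kalman filter with illustrative noise values; it is expensive, but by construction it yields exactly the in-order estimate, which is what the more efficient OOS algorithms try to approximate cheaply.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model, dt = 1
H = np.array([[1.0, 0.0]])               # position-only sensor
Q = 0.01 * np.eye(2)                     # process noise (illustrative)
R = np.array([[0.25]])                   # measurement noise (illustrative)

def kf_run(measurements):
    """Filter (t, z) pairs from a fixed prior, sorting by timestamp first."""
    x = np.zeros(2)
    P = np.eye(2)
    for _, z in sorted(measurements):
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)     # update
        P = (np.eye(2) - K @ H) @ P
    return x

in_order = [(0, 1.0), (1, 2.1), (2, 2.9), (3, 4.2)]
# same data, but the t=1 measurement is delayed and arrives last (OOS)
oos = [(0, 1.0), (2, 2.9), (3, 4.2), (1, 2.1)]
est_in, est_oos = kf_run(in_order), kf_run(oos)
# reprocessing the sorted buffer recovers the in-order estimate exactly
```

    The cost of this baseline grows with the buffer length, which is exactly the overhead the retrodiction-style OOS algorithms compared in the paper are designed to avoid.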

  12. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots

    PubMed Central

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-01-01

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red, Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height of obstacles, obtained from the RGB-D sensor's depth information, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of location-recognition accuracy and computing speed. PMID:29186843

  13. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots.

    PubMed

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-11-25

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red, Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height of obstacles, obtained from the RGB-D sensor's depth information, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of location-recognition accuracy and computing speed.
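
    The core projection step of a 2.5D elevation map can be sketched in a few lines. This is the general idea described above, not the paper's implementation: 3D points from a depth sensor are binned into a 2D grid and the maximum height per cell is kept; the cell size, grid size, and points are illustrative.

```python
import numpy as np

def elevation_map(points, cell=0.1, size=(10, 10)):
    """points: (N, 3) array of (x, y, z); returns a size[0] x size[1] grid
    holding the tallest obstacle height seen in each cell."""
    grid = np.zeros(size)
    ix = (points[:, 0] / cell).astype(int)
    iy = (points[:, 1] / cell).astype(int)
    ok = (ix >= 0) & (ix < size[0]) & (iy >= 0) & (iy < size[1])
    for i, j, z in zip(ix[ok], iy[ok], points[ok, 2]):
        grid[i, j] = max(grid[i, j], z)   # keep the maximum height per cell
    return grid

pts = np.array([[0.05, 0.05, 0.30],   # two hits land in cell (0, 0)
                [0.07, 0.03, 0.55],
                [0.95, 0.15, 0.20]])  # one hit lands in cell (9, 1)
g = elevation_map(pts)
```

    A ground robot can then treat any cell whose stored height exceeds its clearance as an obstacle when planning a path over the grid.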

  14. A Concept of the Differentially Driven Three Wheeled Robot

    NASA Astrophysics Data System (ADS)

    Kelemen, M.; Colville, D. J.; Kelemenová, T.; Virgala, I.; Miková, L.

    2013-08-01

    The paper deals with the concept of a differentially driven three-wheeled robot. The main task for the robot is to follow a black navigation line on white ground. The robot also contains anti-collision sensors for avoiding obstacles on the track. Students learn how to deal with signals from sensors and how to control DC motors. Students work with the controller, develop the locomotion algorithm, and can attend a competition.

  15. Providing haptic feedback in robot-assisted minimally invasive surgery: a direct optical force-sensing solution for haptic rendering of deformable bodies.

    PubMed

    Ehrampoosh, Shervin; Dave, Mohit; Kia, Michael A; Rablau, Corneliu; Zadeh, Mehrdad H

    2013-01-01

    This paper presents an enhanced haptic-enabled master-slave teleoperation system which can be used to provide force feedback to surgeons in minimally invasive surgery (MIS). One of the research goals was to develop a combined-control architecture framework that included both direct force reflection (DFR) and position-error-based (PEB) control strategies. To achieve this goal, it was essential to measure accurately the direct contact forces between deformable bodies and a robotic tool tip. To measure the forces at a surgical tool tip and enhance the performance of the teleoperation system, an optical force sensor was designed, prototyped, and added to a robot manipulator. The enhanced teleoperation architecture was formulated by developing mathematical models for the optical force sensor, the extended slave robot manipulator, and the combined-control strategy. Human factor studies were also conducted to (a) examine experimentally the performance of the enhanced teleoperation system with the optical force sensor, and (b) study human haptic perception during the identification of remote object deformability. The first experiment was carried out to discriminate deformability of objects when human subjects were in direct contact with deformable objects by means of a laparoscopic tool. The control parameters were then tuned based on the results of this experiment using a gain-scheduling method. The second experiment was conducted to study the effectiveness of the force feedback provided through the enhanced teleoperation system. The results show that the force feedback increased the ability of subjects to correctly identify materials of different deformable types. In addition, the virtual force feedback provided by the teleoperation system comes close to the real force feedback experienced in direct MIS. The experimental results provide design guidelines for choosing and validating the control architecture and the optical force sensor.

  16. A new scheme of force reflecting control

    NASA Technical Reports Server (NTRS)

    Kim, Won S.

    1992-01-01

    A new scheme of force reflecting control has been developed that incorporates position-error-based force reflection and robot compliance control. The operator is provided with kinesthetic force feedback proportional to the position error between the operator-commanded and the actual position of the robot arm. Robot compliance control, which increases the effective compliance of the robot, is implemented by low-pass filtering the outputs of the force/torque sensor mounted on the base of the robot hand and using these signals to alter the operator's position command. This position-error-based force reflection scheme combined with shared compliance control has been implemented successfully in the Advanced Teleoperation system consisting of dissimilar master-slave arms. Stability measurements have demonstrated unprecedentedly high force reflection gains of up to 2 or 3, even though the slave arm is much stiffer than the operator's hand holding the force reflecting hand controller. Peg-in-hole experiments were performed with eight different operating modes to evaluate the new force-reflecting control scheme. The best task performance resulted with this new control scheme.
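
    The two ingredients described above can be sketched in a single control loop. All gains and the filter coefficient below are illustrative placeholders, not the values used in the Advanced Teleoperation system: the sensed contact force is low-pass filtered and used to back off the commanded position (shared compliance), while the operator feels a force proportional to the position error (force reflection).

```python
class SharedComplianceLoop:
    """Sketch of position-error force reflection + shared compliance."""

    def __init__(self, k_reflect=2.0, compliance=0.001, alpha=0.1):
        self.k_reflect = k_reflect      # force-reflection gain (illustrative)
        self.compliance = compliance    # m/N, effective added compliance
        self.alpha = alpha              # low-pass filter coefficient
        self.f_filt = 0.0               # filtered force/torque signal

    def step(self, x_cmd, x_robot, f_sensed):
        # first-order low-pass filter on the force/torque sensor output
        self.f_filt += self.alpha * (f_sensed - self.f_filt)
        # compliance: filtered force offsets the operator's position command
        x_ref = x_cmd - self.compliance * self.f_filt
        # kinesthetic feedback proportional to the position error
        f_reflect = self.k_reflect * (x_robot - x_cmd)
        return x_ref, f_reflect

loop = SharedComplianceLoop()
# constant 10 N contact force: the filtered force converges to 10 N,
# so the position command is backed off by compliance * 10 N
for _ in range(200):
    x_ref, f_reflect = loop.step(x_cmd=0.5, x_robot=0.49, f_sensed=10.0)
```

    The low-pass filter is what makes the added compliance feel smooth to the operator; without it, raw sensor noise would be fed straight back into the position command.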

  17. SVR versus neural-fuzzy network controllers for the sagittal balance of a biped robot.

    PubMed

    Ferreira, João P; Crisóstomo, Manuel M; Coimbra, A Paulo

    2009-12-01

    The real-time balance control of an eight-link biped robot using a zero moment point (ZMP) dynamic model is difficult due to the processing time of the corresponding equations. To overcome this limitation, two alternative intelligent computing control techniques were compared: one based on support vector regression (SVR) and another based on a first-order Takagi-Sugeno-Kang (TSK)-type neural-fuzzy (NF) network. Both methods use the ZMP error and its variation as inputs and the output is the correction of the robot's torso necessary for its sagittal balance. The SVR and the NF were trained based on simulation data and their performance was verified with a real biped robot. Two performance indexes are proposed to evaluate and compare the online performance of the two control methods. The ZMP is calculated by reading four force sensors placed under each robot's foot. The gait implemented in this biped is similar to a human gait that was acquired and adapted to the robot's size. Some experiments are presented and the results show that the implemented gait combined either with the SVR controller or with the TSK NF network controller can be used to control this biped robot. The SVR and the NF controllers exhibit similar stability, but the SVR controller runs about 50 times faster.
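
    The ZMP measurement step described above (reading four force sensors under each foot) amounts to a force-weighted average of the sensor positions. The sketch below illustrates that computation; the sensor layout and force values are made up for the example.

```python
def zmp_from_feet(sensors):
    """sensors: list of ((x, y), f) pairs, with vertical force f in newtons.
    Returns the zero moment point as the force-weighted mean position."""
    total = sum(f for _, f in sensors)
    x = sum(p[0] * f for p, f in sensors) / total
    y = sum(p[1] * f for p, f in sensors) / total
    return x, y

# four sensors at the corners of a 10 cm x 20 cm foot (illustrative)
foot = [((0.00, 0.00), 30.0), ((0.10, 0.00), 30.0),
        ((0.00, 0.20), 10.0), ((0.10, 0.20), 10.0)]
zx, zy = zmp_from_feet(foot)
# heavier load on the heel sensors pulls the ZMP toward y = 0
```

    The controller in the paper then takes the error between this measured ZMP and the desired ZMP as the input from which the torso correction is predicted.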

  18. An iconic programming language for sensor-based robots

    NASA Technical Reports Server (NTRS)

    Gertz, Matthew; Stewart, David B.; Khosla, Pradeep K.

    1993-01-01

    In this paper we describe an iconic programming language called Onika for sensor-based robotic systems. Onika is both modular and reconfigurable and can be used with any system architecture and real-time operating system. Onika is also a multi-level programming environment wherein tasks are built by connecting a series of icons which, in turn, can be defined in terms of other icons at the lower levels. Expert users are also allowed to use control block form to define servo tasks. The icons in Onika are both shape and color coded, like the pieces of a jigsaw puzzle, thus providing a form of error control in the development of high level applications.

  19. Design and implementation of visual-haptic assistive control system for virtual rehabilitation exercise and teleoperation manipulation.

    PubMed

    Veras, Eduardo J; De Laurentis, Kathryn J; Dubey, Rajiv

    2008-01-01

    This paper describes the design and implementation of a control system that integrates visual and haptic information to give assistive force feedback to the user through a haptic controller (Omni Phantom). A sensor-based assistive function and velocity scaling program provides force feedback that helps the user complete trajectory-following exercises for rehabilitation purposes. The system also incorporates a PUMA robot for teleoperation; a camera and a laser range finder, controlled in real time by a PC, were integrated into the system to help the user define the intended path to the selected target. The real-time force feedback from the remote robot to the haptic controller is made possible by using effective multithreading programming strategies in the control system design and by novel sensor integration. The sensor-based assistant function concept, applied to teleoperation as well as shared control, enhances the motion range and manipulation capabilities of users executing rehabilitation exercises such as trajectory following along a sensor-based defined path. The system is modularly designed to allow for integration of different master devices and sensors. Furthermore, because this real-time system is versatile, the haptic component can be used separately from the telerobotic component; in other words, one can use the haptic device for rehabilitation purposes in cases where assistance is needed to perform tasks (e.g., stroke rehab), and also for teleoperation with force feedback and sensor assistance in either supervisory or automatic modes.

  20. An orbital emulator for pursuit-evasion game theoretic sensor management

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Wang, Tao; Wang, Gang; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2017-05-01

    This paper develops and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can reproduce 3D satellite movements using omni-wheeled robots and robotic-arm motion methods. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The planar movements are emulated by omni-wheeled robots, while the up-down motions are performed by a stepped-motor-controlled ball along a rod (robotic arm) attached to each robot. For multiple satellites, a fast map-merging algorithm is integrated into the robot operating system (ROS) and simultaneous localization and mapping (SLAM) routines to locate the multiple robots in the scene. The OE is used to demonstrate a pursuit-evasion (PE) game theoretic sensor management algorithm, which models conflicts between a space-based-visible (SBV) satellite (as pursuer) and a geosynchronous (GEO) satellite (as evader). The cost function of the PE game is based on the informational entropy of the SBV-tracking-GEO scenario. The GEO satellite can maneuver using a continuous, low-thrust thruster. The hardware-in-the-loop space emulator visually illustrates the SSA problem solution based on the PE game.

  1. NASA Tech Briefs, October 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics include: Relative-Motion Sensors and Actuators for Two Optical Tables; Improved Position Sensor for Feedback Control of Levitation; Compact Tactile Sensors for Robot Fingers; Improved Ion-Channel Biosensors; Suspended-Patch Antenna With Inverted, EM-Coupled Feed; System Would Predictively Preempt Traffic Lights for Emergency Vehicles; Optical Position Encoders for High or Low Temperatures; Inter-Valence-Subband/Conduction-Band-Transport IR Detectors; Additional Drive Circuitry for Piezoelectric Screw Motors; Software for Use with Optoelectronic Measuring Tool; Coordinating Shared Activities; Software Reduces Radio-Interference Effects in Radar Data; Using Iron to Treat Chlorohydrocarbon-Contaminated Soil; Thermally Insulating, Kinematic Tensioned-Fiber Suspension; Back Actuators for Segmented Mirrors and Other Applications; Mechanism for Self-Reacted Friction Stir Welding; Lightweight Exoskeletons with Controllable Actuators; Miniature Robotic Submarine for Exploring Harsh Environments; Electron-Spin Filters Based on the Rashba Effect; Diffusion-Cooled Tantalum Hot-Electron Bolometer Mixers; Tunable Optical True-Time Delay Devices Would Exploit EIT; Fast Query-Optimized Kernel-Machine Classification; Indentured Parts List Maintenance and Part Assembly Capture Tool - IMPACT; An Architecture for Controlling Multiple Robots; Progress in Fabrication of Rocket Combustion Chambers by VPS; CHEM-Based Self-Deploying Spacecraft Radar Antennas; Scalable Multiprocessor for High-Speed Computing in Space; and Simple Systems for Detecting Spacecraft Meteoroid Punctures.

  2. Control Program for an Optical-Calibration Robot

    NASA Technical Reports Server (NTRS)

    Johnston, Albert

    2005-01-01

    A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.

  3. Development of microsized slip sensors using dielectric elastomer for incipient slippage

    NASA Astrophysics Data System (ADS)

    Hwang, Do-Yeon; Kim, Baek-chul; Cho, Han-Jeong; Li, Zhengyuan; Lee, Youngkwan; Nam, Jae-Do; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, J. C.

    2014-04-01

    A humanoid robot hand has received significant attention in various fields of study. For a dexterous robot hand, a slip-detecting tactile sensor is essential to grasping objects safely. Moreover, slip sensors are useful in robotics and prosthetics to improve precise control during manipulation tasks. In this paper, a sensor with a biomimetic, human-like structure is fabricated. We report a resistive tactile sensor that can detect slip on the surface of the sensor structure. The newly developed resistive slip sensor uses acrylonitrile-butadiene rubber (NBR) as a dielectric substrate and carbon particles as the electrode material. The sensor device presented in this paper has fingerprint-like structures similar in role to the human fingerprint. Slip can be measured as the sensor structure deforms, changing the resistance by forming a new conductive route. To verify the effectiveness of the proposed slip detection, experiments using a prototype of the resistive slip sensor were conducted with a slip-detection algorithm, and slip was successfully detected. In this paper, we discuss the slip-detection properties of the sensor and the detection principle.

  4. Stochastic control approaches for sensor management in search and exploitation

    NASA Astrophysics Data System (ADS)

    Hitchings, Darin Chester

    Recent improvements in the capabilities of autonomous vehicles have motivated their increased use in such applications as defense, homeland security, environmental monitoring, and surveillance. To enhance performance in these applications, new algorithms are required to control teams of robots autonomously and through limited interactions with human operators. In this dissertation we develop new algorithms for control of robots performing information-seeking missions in unknown environments. These missions require robots to control their sensors in order to discover the presence of objects, keep track of the objects, and learn what these objects are, given a fixed sensing budget. Initially, we investigate control of multiple sensors, with a finite set of sensing options and finite-valued measurements, to locate and classify objects given a limited resource budget. The control problem is formulated as a Partially Observed Markov Decision Problem (POMDP), but its exact solution requires excessive computation. Under the assumption that sensor error statistics are independent and time-invariant, we develop a class of algorithms using Lagrangian Relaxation techniques to obtain optimal mixed strategies using performance bounds developed in previous research. We investigate alternative Receding Horizon (RH) controllers to convert the mixed strategies to feasible adaptive-sensing strategies and evaluate the relative performance of these controllers in simulation. The resulting controllers provide superior performance to alternative algorithms proposed in the literature and obtain solutions to large-scale POMDP problems several orders of magnitude faster than optimal Dynamic Programming (DP) approaches with comparable performance quality. We extend our results for finite action, finite measurement sensor control to scenarios with moving objects. We use Hidden Markov Models (HMMs) for the evolution of objects, according to the dynamics of a birth-death process. 
We develop a new lower bound on the performance of adaptive controllers in these scenarios, develop algorithms for computing solutions to this lower bound, and use these algorithms as part of a RH controller for sensor allocation in the presence of moving objects. We also consider an adaptive search problem where both the sensing actions and the underlying measurement space are continuous. We extend our previous hierarchical decomposition approach based on performance bounds to this problem and develop novel implementations of Stochastic Dynamic Programming (SDP) techniques to solve it. Our algorithms are nearly two orders of magnitude faster than previously proposed approaches and yield solutions of comparable quality. For supervisory control, we discuss how human operators can work with and augment robotic teams performing these tasks. Our focus is on how tasks are partitioned among teams of robots and how a human operator can make intelligent decisions for task partitioning. We explore these questions through the design of a game that involves robot automata controlled by our algorithms and a human supervisor who partitions tasks based on different levels of support information. This game can be used in human subject experiments to explore the effect of information on the quality of supervisory control.
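The adaptive-sensing idea in this dissertation can be illustrated by a much simpler greedy sketch (hypothetical detection model and myopic policy; not the Lagrangian-relaxation or RH controllers developed in the work): Bayes' rule updates a belief that each cell contains an object, and a fixed sensing budget is always spent on the most uncertain cell.

```python
import math

# Hypothetical sensor model (assumed values, for illustration only).
P_DETECT = 0.8   # true-positive rate
P_FALSE  = 0.1   # false-positive rate

def bayes_update(prior, detected):
    """Posterior P(object present) after one look at a cell."""
    if detected:
        num = P_DETECT * prior
        den = num + P_FALSE * (1.0 - prior)
    else:
        num = (1.0 - P_DETECT) * prior
        den = num + (1.0 - P_FALSE) * (1.0 - prior)
    return num / den

def entropy(p):
    """Binary entropy, used as the uncertainty measure."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def greedy_search(beliefs, truth, budget, detect=lambda present: present):
    """Spend a fixed sensing budget, always looking at the most uncertain cell."""
    beliefs = list(beliefs)
    for _ in range(budget):
        cell = max(range(len(beliefs)), key=lambda i: entropy(beliefs[i]))
        beliefs[cell] = bayes_update(beliefs[cell], detect(truth[cell]))
    return beliefs
```

With a deterministic `detect`, six looks at three cells starting from uniform priors drive each belief toward its ground truth; the dissertation's contribution is doing this near-optimally at scale rather than myopically.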

  5. A Mobile Robots Experimental Environment with Event-Based Wireless Communication

    PubMed Central

    Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián

    2013-01-01

An experimental platform has been developed to allow a set of mobile robots to communicate through a wireless network. The mobile robots obtain their positions through a camera which acts as a sensor. The video images are processed in a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented. PMID:23881139
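The event-triggered communication scheme described above can be sketched for single-integrator agents (illustrative threshold and gain, not the authors' controller): each agent runs a consensus law on last-broadcast states and transmits its own state only when its local error exceeds a threshold.

```python
import numpy as np

def event_triggered_consensus(x0, neighbors, threshold=0.05, gain=0.2, steps=200):
    """Consensus on broadcast states; agents transmit only at event times."""
    x = np.array(x0, dtype=float)   # true states
    xb = x.copy()                   # last-broadcast states
    events = 0
    for _ in range(steps):
        # control law uses only the broadcast information
        u = np.array([sum(xb[j] - xb[i] for j in neighbors[i])
                      for i in range(len(x))])
        x += gain * u
        # trigger condition: broadcast when the local error grows too large
        for i in range(len(x)):
            if abs(x[i] - xb[i]) > threshold:
                xb[i] = x[i]
                events += 1
    return x, events

# Four agents on a ring graph
x, events = event_triggered_consensus(
    [0.0, 1.0, 2.0, 3.0],
    {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]})
```

The agents settle into a neighborhood of the average (1.5) whose size is set by the trigger threshold, using far fewer transmissions than periodic broadcasting would.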

  6. Planar maneuvering control of underwater snake robots using virtual holonomic constraints.

    PubMed

    Kohl, Anna M; Kelasidi, Eleni; Mohammadi, Alireza; Maggiore, Manfredi; Pettersen, Kristin Y

    2016-11-24

    This paper investigates the problem of planar maneuvering control for bio-inspired underwater snake robots that are exposed to unknown ocean currents. The control objective is to make a neutrally buoyant snake robot which is subject to hydrodynamic forces and ocean currents converge to a desired planar path and traverse the path with a desired velocity. The proposed feedback control strategy enforces virtual constraints which encode biologically inspired gaits on the snake robot configuration. The virtual constraints, parametrized by states of dynamic compensators, are used to regulate the orientation and forward speed of the snake robot. A two-state ocean current observer based on relative velocity sensors is proposed. It enables the robot to follow the path in the presence of unknown constant ocean currents. The efficacy of the proposed control algorithm for several biologically inspired gaits is verified both in simulations for different path geometries and in experiments.
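The virtual-constraint idea can be sketched as a serpenoid joint-angle reference plus joint-space feedback (assumed gait parameters; the paper's controller additionally handles the hydrodynamics, the current observer, and the dynamic compensators that parametrize the constraints): phi_i(t) = alpha*sin(omega*t + i*delta) + phi0, where phi0 acts as the steering offset.

```python
import math

def gait_reference(t, n_joints=5, alpha=0.5, omega=2.0, delta=0.8, phi0=0.0):
    """Reference joint angles enforcing the virtual constraint at time t."""
    return [alpha * math.sin(omega * t + i * delta) + phi0
            for i in range(n_joints)]

def joint_torques(phi, t, kp=5.0, kd=1.0, phi_dot=None, n_joints=5, **gait_kw):
    """PD feedback driving the joints onto the constraint manifold."""
    ref = gait_reference(t, n_joints=n_joints, **gait_kw)
    phi_dot = phi_dot or [0.0] * n_joints
    return [kp * (r - p) - kd * v for r, p, v in zip(ref, phi, phi_dot)]
```

Regulating phi0 steers the heading, while omega scales the forward speed; in the paper these are outputs of dynamic compensators rather than constants.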

  7. Can a Soft Robotic Probe Use Stiffness Control Like a Human Finger to Improve Efficacy of Haptic Perception?

    PubMed

    Sornkarn, Nantachai; Nanayakkara, Thrishantha

    2017-01-01

    When humans are asked to palpate a soft tissue to locate a hard nodule, they regulate the stiffness, speed, and force of the finger during examination. If we understand the relationship between these behavioral variables and haptic information gain (transfer entropy) during manual probing, we can improve the efficacy of soft robotic probes for soft tissue palpation, such as in tumor localization in minimally invasive surgery. Here, we recorded the muscle co-contraction activity of the finger using EMG sensors to address the question as to whether joint stiffness control during manual palpation plays an important role in the haptic information gain. To address this question, we used a soft robotic probe with a controllable stiffness joint and a force sensor mounted at the base to represent the function of the tendon in a biological finger. Then, we trained a Markov chain using muscle co-contraction patterns of human subjects, and used it to control the stiffness of the soft robotic probe in the same soft tissue palpation task. The soft robotic experiments showed that haptic information gain about the depth of the hard nodule can be maximized by varying the internal stiffness of the soft probe.
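The Markov-chain stiffness schedule can be sketched as follows (assumed two-level discretization and toy training sequences; the authors train on EMG-derived co-contraction patterns): estimate a transition matrix from discretized stiffness sequences, then sample a stiffness schedule for the probe's controllable joint.

```python
import random

def fit_markov(sequences, n_states):
    """Maximum-likelihood transition matrix from discretized state sequences."""
    counts = [[0] * n_states for _ in range(n_states)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 1.0 / n_states for c in row])
    return P

def sample_schedule(P, start, length, rng=random.Random(0)):
    """Sample a stiffness-level schedule by walking the chain."""
    state, out = start, [start]
    for _ in range(length - 1):
        r, acc = rng.random(), 0.0
        for s, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = s
                break
        out.append(state)
    return out
```

Each sampled state would be mapped to a commanded joint stiffness of the soft probe during a palpation sweep.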

  8. Multi-Axis Force Sensor for Human-Robot Interaction Sensing in a Rehabilitation Robotic Device.

    PubMed

    Grosu, Victor; Grosu, Svetlana; Vanderborght, Bram; Lefeber, Dirk; Rodriguez-Guerrero, Carlos

    2017-06-05

    Human-robot interaction sensing is a compulsory feature in modern robotic systems where direct contact or close collaboration is desired. Rehabilitation and assistive robotics are fields where interaction forces are required for both safety and increased control performance of the device with a more comfortable experience for the user. In order to provide an efficient interaction feedback between the user and rehabilitation device, high performance sensing units are demanded. This work introduces a novel design of a multi-axis force sensor dedicated for measuring pelvis interaction forces in a rehabilitation exoskeleton device. The sensor is conceived such that it has different sensitivity characteristics for the three axes of interest having also movable parts in order to allow free rotations and limit crosstalk errors. Integrated sensor electronics make it easy to acquire and process data for a real-time distributed system architecture. Two of the developed sensors are integrated and tested in a complex gait rehabilitation device for safe and compliant control.

  9. Task directed sensing

    NASA Technical Reports Server (NTRS)

    Firby, R. James

    1990-01-01

    High-level robot control research must confront the limitations imposed by real sensors if robots are to be controlled effectively in the real world. In particular, sensor limitations make it impossible to maintain a complete, detailed world model of the situation surrounding the robot. To address the problems involved in planning with the resulting incomplete and uncertain world models, traditional robot control architectures must be altered significantly. Task-directed sensing and control is suggested as a way of coping with world model limitations by focusing sensing and analysis resources on only those parts of the world relevant to the robot's active goals. The RAP adaptive execution system is used as an example of a control architecture designed to deploy sensing resources in this way to accomplish both action and knowledge goals.

  10. Knowledge assistant for robotic environmental characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feddema, J.; Rivera, J.; Tucker, S.

    1996-08-01

A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of the sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post-analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g., estimated dimensions, weight, material composition, etc.) are displayed in the world model. This report highlights the major components of this system.

  11. Infrared-Proximity-Sensor Modules For Robot

    NASA Technical Reports Server (NTRS)

    Parton, William; Wegerif, Daniel; Rosinski, Douglas

    1995-01-01

A collision-avoidance system for articulated robot manipulators uses infrared proximity sensors grouped together in an array of sensor modules. The sensor modules, called "sensorCells," are distributed-processing, board-level products for acquiring data from proximity sensors strategically mounted on the robot manipulators. Each sensorCell is self-contained and consists of multiple sensing elements, discrete electronics, a microcontroller, and communications components. The modules are connected to a central control computer by a redundant serial digital communication subsystem including both serial links and a multi-drop bus. The system detects objects made of various materials at distances of up to 50 cm. For some materials, such as thermal protection system tiles, the detection range is reduced to approximately 20 cm.

  12. Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors.

    PubMed

    Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen

    2016-12-20

Improved ranging accuracy is obtained through a novel ultrasonic sensor ranging algorithm which, unlike the conventional ranging algorithm, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and for the localization of the ultrasonic sensor relative to the inclined plate reference frame. The scanning method is then leveraged for the omni-directional localization of a mobile robot: ultrasonic sensors installed on the robot follow its spin, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the front position, up to ±6.21 mm for the lateral position, and up to ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method.

  13. Principles of control for robotic excavation

    NASA Astrophysics Data System (ADS)

    Bernold, Leonhard E.

    The issues of automatic planning and control systems for robotic excavation are addressed. Attention is given to an approach to understanding the principles of path and motion control which is based on scaled modeling and experimentation with different soil types and soil conditions. Control concepts for the independent control of a bucket are discussed, and ways in which force sensors could provide the necessary data are demonstrated. Results of experiments with lunar simulant showed that explosive loosening has a substantial impact on the energy needed during excavation. It is argued that through further laboratory and field research, 'pattern languages' for different excavators and soil conditions could be established and employed for robotic excavation.

  14. Localization of Mobile Robots Using Odometry and an External Vision Sensor

    PubMed Central

    Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina

    2010-01-01

    This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318
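The odometry-plus-external-camera fusion can be sketched as a Kalman-style filter (assumed unicycle model and noise values; not the paper's full structure-from-motion pipeline): odometry drives the prediction step, and the external camera supplies a position-only measurement update.

```python
import numpy as np

def predict(x, P, v, w, dt, Q):
    """Odometry prediction for state x = [px, py, theta] (unicycle model)."""
    px, py, th = x
    x_new = np.array([px + v * dt * np.cos(th),
                      py + v * dt * np.sin(th),
                      th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],   # Jacobian of the motion model
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z, R):
    """External camera measures position only: z = [px, py] + noise."""
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(3) - K @ H) @ P
```

One predict/update cycle pulls the pose estimate toward the camera measurement in proportion to the relative uncertainties, which is also how the filter rides out camera occlusions: prediction alone carries the estimate until measurements return.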

  16. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    DTIC Science & Technology

    2015-08-14

Theoretical research on the mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems, spanning independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices and their operational constraints.

  17. Force/torque and tactile sensors for sensor-based manipulator control

    NASA Technical Reports Server (NTRS)

    Vanbrussel, H.; Belieen, H.; Bao, Chao-Ying

    1989-01-01

    The autonomy of manipulators, in space and in industrial environments, can be dramatically enhanced by the use of force/torque and tactile sensors. The development and future use of a six-component force/torque sensor for the Hermes Robot Arm (HERA) Basic End-Effector (BEE) is discussed. Then a multifunctional gripper system based on tactile sensors is described. The basic transducing element of the sensor is a sheet of pressure-sensitive polymer. Tactile image processing algorithms for slip detection, object position estimation, and object recognition are described.

  18. Thermal Image Sensing Model for Robotic Planning and Search.

    PubMed

    Castro Jiménez, Lídice E; Martínez-García, Edgar A

    2016-08-08

This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made IR passive visual sensor. The sensor's capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and by a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce an exact nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation controller in reaching the goal and avoiding collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission: a sine function produces attractive accelerations toward the IR source, and a cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
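The sensor models and course-control terms can be sketched with made-up calibration constants (the paper fits its own models experimentally; all coefficients below are assumptions): an exponential intensity-to-distance model, a polynomial intensity-to-temperature model, and sine/cosine attractive/repulsive acceleration terms.

```python
import math

# Assumed calibration constants, for illustration only.
A, B = 3.0, 0.004               # d = A * exp(-B * I): brighter means closer
TEMP_POLY = (20.0, 0.05, 1e-5)  # T = c0 + c1*I + c2*I^2

def distance_from_intensity(I):
    """Estimated distance (m) to the IR source from image intensity."""
    return A * math.exp(-B * I)

def temperature_from_intensity(I):
    """Estimated source temperature from image intensity."""
    c0, c1, c2 = TEMP_POLY
    return c0 + c1 * I + c2 * I * I

def course_acceleration(bearing_to_goal, bearing_to_obstacle, ka=1.0, kr=0.5):
    """Sine term attracts toward the IR source; cosine term repels from the
    obstacle bearing observed by the RGB-D sensor."""
    return ka * math.sin(bearing_to_goal) - kr * math.cos(bearing_to_obstacle)
```

The two intensity models are what get combined into the distance-temperature relation; the acceleration term is the shape of the attractive/repulsive control, not the paper's exact PDE formulation.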

  19. Multisensory architectures for action-oriented perception

    NASA Astrophysics Data System (ADS)

    Alba, L.; Arena, P.; De Fiore, S.; Listán, J.; Patané, L.; Salem, A.; Scordino, G.; Webb, B.

    2007-05-01

    In order to solve the navigation problem of a mobile robot in an unstructured environment a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors, while having systems that can operate in real time. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the required data exchange with the motor control layer. In particular, as concerns the visual system, we use the Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore a Field Programmable Gate Array (FPGA) based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel allowing the sensors to be driven simultaneously. With this approach the multi-sensory architecture proposed can achieve real time capabilities.

  20. Integrated High-Speed Torque Control System for a Robotic Joint

    NASA Technical Reports Server (NTRS)

    Davis, Donald R. (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Valvo, Michael C. (Inventor); Askew, R. Scott (Inventor)

    2013-01-01

    A control system for achieving high-speed torque for a joint of a robot includes a printed circuit board assembly (PCBA) having a collocated joint processor and high-speed communication bus. The PCBA may also include a power inverter module (PIM) and local sensor conditioning electronics (SCE) for processing sensor data from one or more motor position sensors. Torque control of a motor of the joint is provided via the PCBA as a high-speed torque loop. Each joint processor may be embedded within or collocated with the robotic joint being controlled. Collocation of the joint processor, PIM, and high-speed bus may increase noise immunity of the control system, and the localized processing of sensor data from the joint motor at the joint level may minimize bus cabling to and from each control node. The joint processor may include a field programmable gate array (FPGA).

  1. Method for Reading Sensors and Controlling Actuators Using Audio Interfaces of Mobile Devices

    PubMed Central

    Aroca, Rafael V.; Burlamaqui, Aquiles F.; Gonçalves, Luiz M. G.

    2012-01-01

    This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks. PMID:22438726
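The audio-channel idea can be sketched with an assumed tone encoding (a simple frequency-shift mapping; not the authors' exact protocol): a sensor value is mapped to a tone frequency, and the receiving device recovers it from the dominant FFT bin of the sampled audio.

```python
import numpy as np

FS = 8000                 # sample rate (Hz)
F0, STEP = 400.0, 10.0    # assumed mapping: value v -> tone at F0 + v*STEP Hz

def encode(value, duration=0.5):
    """Render an integer sensor value as an audio tone."""
    t = np.arange(int(FS * duration)) / FS
    return np.sin(2 * np.pi * (F0 + value * STEP) * t)

def decode(signal):
    """Recover the value from the strongest frequency in the signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freq = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    tone = freq[np.argmax(spectrum)]
    return round((tone - F0) / STEP)
```

In the article's architecture the same principle runs over a phone's headphone/microphone jack, so any device with an audio interface can close the loop with sensors and actuators.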

  3. Explorer-II: Wireless Self-Powered Visual and NDE Robotic Inspection System for Live Gas Distribution Mains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carnegie Mellon University

    2008-09-30

Carnegie Mellon University (CMU), under contract from the Department of Energy/National Energy Technology Laboratory (DoE/NETL) and with co-funding from the Northeast Gas Association (NGA), has completed the overall system design, field-trial and Magnetic Flux Leakage (MFL) sensor evaluation program for the next-generation Explorer-II (X-II) live gas main Non-destructive Evaluation (NDE) and visual inspection robot platform. The design is based on the Explorer-I prototype, which was built and field-tested under a prior (also DoE- and NGA co-funded) program and served as validation that self-powered robots under wireless control could access and navigate live natural gas distribution mains. The X-II system design (approx. 8 ft. and 66 lbs.) was heavily based on the X-I design, yet was substantially expanded to allow the addition of NDE sensor systems (while retaining its visual inspection capability), making it a modular system and expanding its ability to operate at pressures up to 750 psig (high-pressure and unpiggable steel-pipe distribution mains). A new electronics architecture and on-board software kernel were added to further improve system performance. A locating sonde system was integrated to allow for absolute position-referencing during inspection (coupled with external differential GPS) and emergency locating. The power system was upgraded to utilize lithium-based battery cells for an increase in mission time. The resulting robot-train system is shown with CAD renderings of the individual modules. The system architecture now relies on a dual set of end camera-modules to house the 32-bit processors (Single-Board Computer or SBC) as well as the imaging, wireless (off-board) and CAN-based (on-board) communication hardware and software systems (as well as the sonde-coil and -electronics). The drive-modules (2 ea.) are still responsible for bracing (and centering) and for driving the robot train in push/pull fashion into and through the pipes and obstacles.
The steering modules and their arrangement still allow the robot to configure itself to perform any-angle (up to 90 deg.) turns in any orientation (incl. vertical), and enable live launching and recovery of the system using custom fittings and a (to be developed) launch-chamber/-tube. The battery modules power the system by providing power to the robot's bus. The support modules perform the functions of centration for the rest of the train as well as odometry pickups using incremental encoding schemes. The electronics architecture is based on a distributed (8-bit) microprocessor architecture (at least one in each module) communicating with one of two 32-bit SBCs, which manages all video-processing, posture and motion control as well as CAN and wireless communications. The operator controls the entire system from an off-board (laptop) controller, which is in constant wireless communication with the robot train in the pipe. The sensor modules collect data and forward it to the robot operator's computer (via the CAN-wireless communications chain), which then transfers it to a dedicated NDE data-storage and post-processing computer for further (real-time or off-line) analysis. The prototype robot system was built and tested indoors and outdoors, outfitted with a Remote-Field Eddy Current (RFEC) sensor integrated as its main NDE sensor modality. An angled launcher, allowing for live launching and retrieval, was also built to suit custom angled launch-fittings from TDW. The prototype vehicle and launcher systems are shown. The complete system, including the in-pipe robot train, launcher, integrated NDE sensor, real-time video and control console, and NDE data collection, processing and real-time display, was demonstrated to all sponsors prior to proceeding into final field-trials; the individual components and setting for said acceptance demonstration are shown.
The launcher-tube was also used to verify that the vehicle system is capable of operating in high-pressure environments, and is safely deployable using proper evacuating/purging techniques for operation in the potentially explosive natural gas environment. The test setting and environment for safety-certification of the X-II robot platform and the launch and recovery procedures are shown. Field-trials were successfully carried out in a live steel pipeline in Northwestern Pennsylvania. The robot was launched and recovered multiple times, travelling thousands of feet and communicating in real time with video and command-and-control (C&C) data under remote operator control from a laptop, with NDE sensor data streaming to a second computer for storage, display and post-processing. Representative images of the activities and systems used in the week-long field-trial are shown. CMU also evaluated the ability of the X-II design to integrate an MFL sensor by adding additional drive, battery, steering and support modules to extend the X-II train.

  4. Petri net controllers for distributed robotic systems

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, George N.

    1992-01-01

    Petri nets are a well established modelling technique for analyzing parallel systems. When coupled with an event-driven operating system, Petri nets can provide an effective means for integrating and controlling the functions of distributed robotic applications. Recent work has shown that Petri net graphs can also serve as remarkably intuitive operator interfaces. In this paper, the advantages of using Petri nets as high-level controllers to coordinate robotic functions are outlined, the considerations for designing Petri net controllers are discussed, and simple Petri net structures for implementing an interface for operator supervision are presented. A detailed example is presented which illustrates these concepts for a sensor-based assembly application.
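A minimal Petri net sketch (illustrative, not the paper's controller structures): places hold tokens, and a transition fires when every input place is marked, moving tokens to its output places. Gating a robot action on a sensor event is exactly this pattern.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        """Fire a transition if enabled: consume input tokens, produce outputs."""
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Hypothetical assembly step: requires a part and an idle robot,
# produces an assembled part and returns the robot to idle.
net = PetriNet({"part_ready": 1, "robot_idle": 1, "assembled": 0})
net.add_transition("assemble",
                   ["part_ready", "robot_idle"],
                   ["assembled", "robot_idle"])
```

The same marking graph that drives execution can be rendered for the operator, which is the intuitive-interface property the paper highlights.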

  5. System Wide Joint Position Sensor Fault Tolerance in Robot Systems Using Cartesian Accelerometers

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

Joint position sensors are necessary for most robot control systems. A single position-sensor failure in a normal robot system can greatly degrade performance. This paper presents a method to obtain position information from Cartesian accelerometers without integration. Depending on the number and location of the accelerometers, the proposed system can tolerate the loss of multiple position sensors. A solution technique suitable for real-time implementation is presented. Simulations were conducted using five triaxial accelerometers to recover from the loss of up to four joint position sensors on a 7-degree-of-freedom robot moving in general three-dimensional space. The simulations show good estimation performance using non-ideal accelerometer measurements.
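A simplified illustration of the integration-free idea (one joint, static case; the paper solves the general 7-DOF dynamic problem): a triaxial accelerometer on a link measures the gravity vector in link coordinates, from which the joint angle follows directly with no integration of the signal.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def joint_angle_from_accel(ax, az):
    """For a link rotating about a horizontal y-axis, gravity appears in the
    link frame as (G*sin(q), 0, -G*cos(q)); recover q directly with atan2."""
    return math.atan2(ax, -az)
```

Because the angle comes from an instantaneous algebraic relation rather than double-integrating acceleration, the estimate carries no drift, which is the property that makes accelerometers viable as position-sensor backups.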

  6. Research and development of service robot platform based on artificial psychology

    NASA Astrophysics Data System (ADS)

    Zhang, Xueyuan; Wang, Zhiliang; Wang, Fenhua; Nagai, Masatake

    2007-12-01

Related work on the control architecture of robot systems is briefly summarized. Based on this discussion, this paper proposes a control architecture for service robots based on artificial psychology. In this architecture, the robot obtains a cognition of its environment through sensors; this input is then handled by an intelligent model, an affective model, and a learning model, and the robot finally expresses its reaction to outside stimulation through its behavior. For a better understanding of the architecture, its hierarchical structure is also discussed. The control system of the robot can be divided into five layers, namely the physical layer, the drives layer, the information-processing and behavior-programming layer, the application layer, and the system inspection and control layer. This paper shows how to achieve system integration across hardware modules, software interfaces, and fault diagnosis. The embedded system GENE-8310 is selected as the PC platform of the robot APROS-I, and its primary storage medium is a CF card. The arms and body of the robot are constituted by 13 motors and connecting fittings. In addition, the robot has a head with emotional facial expressions, and the head has 13 DOFs. The emotional and intelligent model is one of the most important parts of human-machine interaction. In order to better simulate human emotion, an emotional interaction model for the robot is proposed according to Maslow's theory of need levels and Simonov's informational theory of emotion. This architecture has already been used in our intelligent service robot.

  7. Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot.

    PubMed

    Mafrica, Stefano; Servel, Alain; Ruffier, Franck

    2016-11-10

Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to the various visual patterns encountered, thanks to its M2APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackerman steering model. Indoor and outdoor experiments were carried out in which the robot was driven in closed-loop mode based on the velocity and steering-angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light-level range. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor speed sensor.
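The general cross-correlation OF principle can be sketched as follows (assumed signals and pixel geometry; not the sensor's on-chip implementation): two neighboring pixels see the same contrast with a time shift, and the shift maximizing their cross-correlation gives the optic flow as inter-pixel angle divided by delay.

```python
import numpy as np

def of_from_delay(sig_a, sig_b, dt, pitch_deg):
    """Optic flow (deg/s) from two equally long pixel time series sampled
    every dt seconds; pitch_deg is the angular pitch between the pixels."""
    n = len(sig_a)
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (n - 1)   # samples by which sig_b trails sig_a
    delay = lag * dt
    return pitch_deg / delay if delay != 0 else float("inf")

# Hypothetical contrast pulse crossing pixel A, then pixel B 10 ms later
t = np.arange(200) * 0.001
pixel_a = np.exp(-((t - 0.05) / 0.005) ** 2)
pixel_b = np.exp(-((t - 0.06) / 0.005) ** 2)
```

With a 1.5 deg. pixel pitch and a 10 ms delay, the recovered flow is 150 deg/s; the sensor performs this delay estimation continuously at high rate.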

  8. Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal

    NASA Astrophysics Data System (ADS)

    Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir

    2014-06-01

    This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities, including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control, and constrained motion, to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching the device at any time.

  9. The magic glove: a gesture-based remote controller for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark

    2012-01-01

    This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful for moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system consists of a microcontroller and sensors worn by the operator as a glove, and is capable of recognizing hand signals, which are then transmitted to the robot through wireless communication. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first demonstrated successfully in a Player/Stage simulation environment. The gesture-based functionality was then also verified on an actual robot and demonstrated to judges at the 2010 IGVC.
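    The core mapping from accelerometer tilt to a drive command can be sketched as follows. The thresholds, axis conventions, and command names are illustrative assumptions; the paper does not specify them.

```python
def gesture_to_command(ax, ay, az, tilt=0.5):
    """Map a 3-axis accelerometer reading (in g) to a drive command.
    Thresholds and command names are illustrative, not from the paper."""
    if ax > tilt:
        return "FORWARD"     # hand pitched down
    if ax < -tilt:
        return "REVERSE"     # hand pitched up
    if ay > tilt:
        return "RIGHT"       # hand rolled right
    if ay < -tilt:
        return "LEFT"        # hand rolled left
    return "STOP"            # hand level: gravity mostly on the z axis

print(gesture_to_command(0.0, 0.0, 1.0))   # level hand -> STOP
print(gesture_to_command(0.8, 0.1, 0.6))   # pitched forward -> FORWARD
```

    On the actual glove this decision would run on the microcontroller, with the resulting command string sent to the vehicle over the Bluetooth link.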

  10. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Meanwhile, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
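    The time-delay-estimation step used by the hearing robots can be sketched with a standard cross-correlation: the lag of the correlation peak between two microphone signals gives the inter-microphone delay, which a far-field model converts to a bearing. The sampling rate, microphone spacing, and signal shapes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the delay of sig_a relative to sig_b (seconds)
    via the peak of their full cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-((t - 0.02) ** 2) / 1e-5)   # synthetic click at 20 ms
delayed = np.roll(pulse, 16)                # same click 16 samples later

tau = estimate_delay(delayed, pulse, fs)    # expected: 16 / 8000 = 2 ms
# Far-field bearing from delay, mic spacing d, and speed of sound c:
d, c = 0.8, 343.0
bearing = np.degrees(np.arcsin(np.clip(tau * c / d, -1, 1)))
print(tau, bearing)
```

    A real implementation would additionally window the signals and use generalized cross-correlation (e.g. with PHAT weighting) to sharpen the peak in reverberant rooms.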

  11. Multi-layer robot skin with embedded sensors and muscles

    NASA Astrophysics Data System (ADS)

    Tomar, Ankit; Tadesse, Yonas

    2016-04-01

    Soft artificial skin with embedded sensors and actuators is proposed for a crosscutting study of cognitive science on a facially expressive humanoid platform. This paper focuses on artificial muscles suitable for humanoid robots and prosthetic devices for safe human-robot interactions. A novel composite artificial skin consisting of sensors and twisted-polymer actuators is proposed. The artificial skin is conformable to intricate geometries and includes protective layers, sensor layers, and actuation layers. Fluidic channels are included in the elastomeric skin so that fluids can be injected to control actuator response time. The skin can be used to develop facially expressive humanoid robots or other soft robots. Such humanoid robots can be used by computer scientists and behavioral scientists to test various algorithms and to develop more capable humanoids with facial expression capability. The small-scale humanoid robots can also assist ongoing research on therapeutic treatment of autistic children. The multilayer skin can be applied to many soft robots, enabling them to detect both temperature and pressure while actuating the entire structure.

  12. Robotic Transnasal Endoscopic Skull Base Surgery: Systematic Review of the Literature and Report of a Novel Prototype for a Hybrid System (Brescia Endoscope Assistant Robotic Holder).

    PubMed

    Bolzoni Villaret, Andrea; Doglietto, Francesco; Carobbio, Andrea; Schreiber, Alberto; Panni, Camilla; Piantoni, Enrico; Guida, Giovanni; Fontanella, Marco Maria; Nicolai, Piero; Cassinis, Riccardo

    2017-09-01

    Although robotics has already been applied to several surgical fields, available systems are not designed for endoscopic skull base surgery (ESBS). Novel prototypes for ESBS have recently been described. The aim of this study was to provide a systematic literature review of robotics for ESBS and to describe a novel prototype developed at the University of Brescia. PubMed and Scopus databases were searched using a combination of terms, including Robotics OR Robot and Surgery OR Otolaryngology OR Skull Base OR Holder. The retrieved papers were analyzed, recording the following features: interface, tools under robotic control, force feedback, safety systems, setup time, and operative time. A novel hybrid robotic system was developed and tested in a preclinical setting at the University of Brescia, using an industrial manipulator and readily available off-the-shelf components. A total of 11 robotic prototypes for ESBS were identified. For almost all prototypes, difficult emergency management is one of the main limitations. The Brescia Endoscope Assistant Robotic Holder has proven the feasibility of intuitive robotic movement driven by the surgeon's head position: a 6-degree-of-freedom sensor was used, and 2 light sources were added to glasses so that they could be recognized by a commercially available sensor. Robotic system prototypes designed for ESBS and reported in the literature still present significant technical limitations. Hybrid robot assistance has huge potential and might soon be feasible in ESBS. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps of permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot, in real-time detection of objects using their color, and in processing the robot's range and vision sensor data for navigation.

  14. Reflexive obstacle avoidance for kinematically-redundant manipulators

    NASA Technical Reports Server (NTRS)

    Karlen, James P.; Thompson, Jack M., Jr.; Farrell, James D.; Vold, Havard I.

    1989-01-01

    Dexterous telerobots incorporating 17 or more degrees of freedom operating under coordinated, sensor-driven computer control will play important roles in future space operations. They will also be used on Earth in assignments like fire fighting, construction and battlefield support. A real time, reflexive obstacle avoidance system, seen as a functional requirement for such massively redundant manipulators, was developed using arm-mounted proximity sensors to control manipulator pose. The project involved a review and analysis of alternative proximity sensor technologies for space applications, the development of a general-purpose algorithm for synthesizing sensor inputs, and the implementation of a prototypical system for demonstration and testing. A 7 degree of freedom Robotics Research K-2107HR manipulator was outfitted with ultrasonic proximity sensors as a testbed, and Robotics Research's standard redundant motion control algorithm was modified such that an object detected by sensor arrays located at the elbow effectively applies a force to the manipulator elbow, normal to the axis. The arm is repelled by objects detected by the sensors, causing the robot to steer around objects in the workspace automatically while continuing to move its tool along the commanded path without interruption. The mathematical approach formulated for synthesizing sensor inputs can be employed for redundant robots of any kinematic configuration.
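    The sensor-to-force mapping described above can be sketched as a potential-field-style repulsion: an object detected inside an influence radius exerts a virtual force on the elbow, directed away from the object and growing as the distance shrinks, which the redundant-motion controller then resolves in the arm's null space. Gains, radii, and names below are illustrative assumptions, not values from the report.

```python
import numpy as np

def repulsive_force(obstacle_dir, distance, d_influence=0.5, gain=10.0):
    """Virtual force pushing the elbow away from a detected object.
    Zero beyond the influence radius; grows as the object nears.
    Gains and radii are illustrative, not taken from the report."""
    if distance >= d_influence or distance <= 0.0:
        return np.zeros(3)
    magnitude = gain * (1.0 / distance - 1.0 / d_influence)
    direction = -np.asarray(obstacle_dir, dtype=float)   # away from object
    return magnitude * direction / np.linalg.norm(direction)

f = repulsive_force(obstacle_dir=[1.0, 0.0, 0.0], distance=0.2)
print(f)   # pushes along -x, away from the object
```

    In the K-2107HR testbed, a force like this would bias only the elbow's self-motion, leaving the commanded tool trajectory untouched.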

  15. Using arm and hand gestures to command robots during stealth operations

    NASA Astrophysics Data System (ADS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-06-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  16. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  17. Tactile Robotic Topographical Mapping Without Force or Contact Sensors

    NASA Technical Reports Server (NTRS)

    Burke, Kevin; Melko, Joseph; Krajewski, Joel; Cady, Ian

    2008-01-01

    A method of topographical mapping of a local solid surface within the range of motion of a robot arm is based on detection of contact between the surface and the end effector (the fixture or tool at the tip of the robot arm). The method was conceived to enable mapping of local terrain by an exploratory robot on a remote planet, without need to incorporate delicate contact switches, force sensors, a vision system, or other additional, costly hardware. The method could also be used on Earth for determining the size and shape of an unknown surface in the vicinity of a robot, perhaps in an unanticipated situation in which other means of mapping (e.g., stereoscopic imaging or laser scanning with triangulation) are not available. The method uses control software modified to utilize the inherent capability of the robotic control system to measure the joint positions, the rates of change of the joint positions, and the electrical current demanded by the robotic arm joint actuators. The system utilizes these coordinate data and the known robot-arm kinematics to compute the position and velocity of the end effector, move the end effector along a specified trajectory, place the end effector at a specified location, and measure the electrical currents in the joint actuators. Since the joint actuator current is approximately proportional to the actuator forces and torques, a sudden rise in joint current, combined with a slowing of the joint, is a possible indication of actuator stall and surface contact. Hence, even though the robotic arm is not equipped with contact sensors, it is possible to sense contact (albeit with reduced sensitivity) as the end effector becomes stalled against a surface that one seeks to measure.
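    The contact test at the heart of the method, a sudden current rise combined with a stalling joint, can be sketched as a simple two-condition detector; the thresholds and sample values below are illustrative assumptions, not values from the report.

```python
def contact_detected(current, velocity, current_thresh, vel_thresh):
    """Flag probable end-effector contact: actuator current above its
    stall threshold while the joint has nearly stopped moving.
    Thresholds are illustrative, not from the NASA report."""
    return current > current_thresh and abs(velocity) < vel_thresh

# Simulated descent: current climbs and the joint stalls on contact.
samples = [(0.8, 0.10), (0.9, 0.09), (1.1, 0.08), (2.5, 0.01)]
hits = [contact_detected(i, v, current_thresh=2.0, vel_thresh=0.02)
        for i, v in samples]
print(hits)   # only the final sample trips the detector
```

    A mapping pass would sweep the end effector along a grid of approach trajectories, recording the joint positions at each detected contact and converting them to surface points via the arm's forward kinematics.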

  18. An EMG-based robot control scheme robust to time-varying EMG signal features.

    PubMed

    Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J

    2010-05-01

    Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots in everyday life, especially in providing services to people with special needs (i.e., elderly, people with impairments, or people with disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, making the user's upper limb free of bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control in real time an anthropomorphic robot arm in 3-D space, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes with respect to time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in the 3-D space with variable hand speed profiles.

  19. [Exoskeleton robot system based on real-time gait analysis for walking assist].

    PubMed

    Xie, Zheng; Wang, Mingjiang; Huang, Wulong; Yong, Shanshan; Wang, Xin'an

    2017-04-01

    This paper presents a wearable exoskeleton robot system that provides a walking-assist function, oriented toward patients or elderly people with mild impairment of leg movement function due to illness or natural aging. It reduces the loads on the hip, knee, ankle and leg muscles during walking by means of weight support. In consideration of users' psychological demands and the characteristics of the disease, and unlike the weight-support mechanisms of fixed or following rehabilitation robots, the structure of the proposed exoskeleton robot is elegant, lightweight and portable. The exoskeleton system analyzes the user's gait in real time using plantar pressure sensors to divide it into gait phases, and applies a different control strategy in each phase. Pressure sensors in the seat of the exoskeleton provide real-time monitoring of the support effort, and the drive control uses proportional-integral-derivative (PID) control for torque control. The total weight of the robot system is about 12.5 kg. The average auxiliary support is about 10 kg during standing and about 3 kg during walking. In experiments, the system showed a clear weight-support effect, reducing the pressure on the lower limbs while walking and standing.
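    The PID torque loop mentioned above follows the textbook form. Below is a minimal sketch with illustrative gains and a crude first-order plant standing in for the drive; none of these values are taken from the paper.

```python
class PID:
    """Textbook PID loop of the kind used for joint torque control;
    the gains here are illustrative, not taken from the paper."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple integrating torque plant toward a 5 N*m setpoint.
pid = PID(kp=2.0, ki=5.0, kd=0.01, dt=0.01)
torque = 0.0
for _ in range(1000):
    torque += pid.step(5.0, torque) * 0.01   # crude plant integration
print(round(torque, 3))   # converges to the setpoint
```

    In the exoskeleton, the setpoint for each joint would come from the gait-phase logic driven by the plantar pressure sensors, with a different target torque profile per phase.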

  20. Mobile robotic sensors for perimeter detection and tracking.

    PubMed

    Clark, Justin; Fierro, Rafael

    2007-02-01

    Mobile robot/sensor networks have emerged as tools for environmental monitoring, search and rescue, exploration and mapping, evaluation of civil infrastructure, and military operations. These networks consist of many sensors each equipped with embedded processors, wireless communication, and motion capabilities. This paper describes a cooperative mobile robot network capable of detecting and tracking a perimeter defined by a certain substance (e.g., a chemical spill) in the environment. Specifically, the contributions of this paper are twofold: (i) a library of simple reactive motion control algorithms and (ii) a coordination mechanism for effectively carrying out perimeter-sensing missions. The decentralized nature of the methodology implemented could potentially allow the network to scale to many sensors and to reconfigure when adding/deleting sensors. Extensive simulation results and experiments verify the validity of the proposed cooperative control scheme.
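    A minimal instance of such a reactive perimeter-sensing rule is bang-bang steering on a binary substance sensor: turn one way while the sensor reads "inside" the substance and the other way while it reads "outside", so the robot chatters along the boundary. All values below are illustrative assumptions, not the paper's algorithms.

```python
import math

def inside_spill(x, y, radius=2.0):
    """Binary substance sensor: True inside a circular 'spill' at the origin."""
    return x * x + y * y <= radius * radius

# Bang-bang perimeter tracking: steer outward while inside the
# substance, inward while outside. Gains and rates are illustrative.
x, y, heading = 2.2, 0.0, math.pi / 2
v, turn, dt = 0.2, 1.5, 0.05
for _ in range(2000):
    heading += (-turn if inside_spill(x, y) else turn) * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt

dist = math.hypot(x, y)
print(round(dist, 2))   # chatters near the radius-2.0 perimeter
```

    A cooperative version would add the paper's coordination layer on top, spacing multiple robots around the boundary as they circulate.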

  1. Multiple sensor smart robot hand with force control

    NASA Technical Reports Server (NTRS)

    Killion, Richard R.; Robinson, Lee R.; Bejczy, Antal

    1987-01-01

    A smart robot hand developed at JPL for the Protoflight Manipulator Arm (PFMA) is described. The development of this smart hand was based on an integrated design and subsystem architecture, considering mechanism, electronics, sensing, control, display, and operator interface together. The mechanical details of this smart hand and the overall subsystem are described elsewhere. Here, the sensing and electronics components of the JPL/PFMA smart hand are summarized, and its control capabilities are described in some detail.

  2. Inertial sensor self-calibration in a visually-aided navigation approach for a micro-AUV.

    PubMed

    Bonin-Font, Francisco; Massot-Campos, Miquel; Negre-Carrasco, Pep Lluis; Oliver-Codina, Gabriel; Beltran, Joan P

    2015-01-16

    This paper presents a new solution for underwater observation, image recording, mapping and 3D reconstruction in shallow waters. The platform, designed as a research and testing tool, is based on a small underwater robot equipped with a MEMS-based IMU, two stereo cameras and a pressure sensor. The data given by the sensors are fused, adjusted and corrected in a multiplicative error state Kalman filter (MESKF), which returns a single vector with the pose and twist of the vehicle and the biases of the inertial sensors (the accelerometer and the gyroscope). The inclusion of these biases in the state vector permits their self-calibration and stabilization, improving the estimates of the robot orientation. Experiments in controlled underwater scenarios and in the sea have demonstrated a satisfactory performance and the capacity of the vehicle to operate in real environments and in real time.

  3. Inertial Sensor Self-Calibration in a Visually-Aided Navigation Approach for a Micro-AUV

    PubMed Central

    Bonin-Font, Francisco; Massot-Campos, Miquel; Negre-Carrasco, Pep Lluis; Oliver-Codina, Gabriel; Beltran, Joan P.

    2015-01-01

    This paper presents a new solution for underwater observation, image recording, mapping and 3D reconstruction in shallow waters. The platform, designed as a research and testing tool, is based on a small underwater robot equipped with a MEMS-based IMU, two stereo cameras and a pressure sensor. The data given by the sensors are fused, adjusted and corrected in a multiplicative error state Kalman filter (MESKF), which returns a single vector with the pose and twist of the vehicle and the biases of the inertial sensors (the accelerometer and the gyroscope). The inclusion of these biases in the state vector permits their self-calibration and stabilization, improving the estimates of the robot orientation. Experiments in controlled underwater scenarios and in the sea have demonstrated a satisfactory performance and the capacity of the vehicle to operate in real environments and in real time. PMID:25602263

  4. On the Fracture Toughness and Crack Growth Resistance of Bio-Inspired Thermal Spray Hybrid Composites

    NASA Astrophysics Data System (ADS)

    Resnick, Michael Murray

    Surface exploration of the Moon and asteroids can provide important information to scientists regarding the origins of the solar system and life. Small robots and sensor modules can enable low-cost surface exploration, and in the near future they will be the main machines providing these answers. Advances in electronics, sensors and actuators enable ever smaller platforms without compromising functionality. However, similar advances have not taken place for power supplies and thermal control systems. The lunar south pole has temperatures in the range of -100 to -150 °C; similarly, asteroid surfaces can encounter temperatures of -150 °C, while most electronics and batteries do not work below -40 °C. An effective thermal control system is therefore critical to making small robots and sensor modules for extreme environments feasible. In this work, the feasibility of using thermochemical storage materials as a possible thermal control solution is analyzed for small robots and sensor modules in lunar and asteroid surface environments. The presented technology focuses on using resources that are readily generated as waste products aboard a spacecraft or are available off-world through In-Situ Resource Utilization (ISRU). In this work, a sensor module for extreme environments has been designed and prototyped. Our intention is to have a network of tens or hundreds of sensor modules that can communicate and interact with each other while also gathering science data. The design contains environmental sensors, such as temperature sensors and an IMU (containing an accelerometer, gyroscope and magnetometer), to gather data. The sensor module would nominally contain an electrical heater and insulation. The heating effect provided by this active heater is compared with that of the proposed technology utilizing thermochemical storage chemicals. Our results show that a thermochemical storage-based thermal control system is feasible for use at extreme temperatures. A performance increase of 80% is predicted for sensor modules on the asteroid Eros using a thermochemical storage-based system. At the laboratory level, a performance increase of 8 to 9% is observed at ambient temperatures of -32 °C and -40 °C.

  5. Method for neural network control of motion using real-time environmental feedback

    NASA Technical Reports Server (NTRS)

    Buckley, Theresa M. (Inventor)

    1997-01-01

    A method of motion control for robotics and other automatically controlled machinery using a neural network controller with real-time environmental feedback. The method is illustrated with a two-finger robotic hand having proximity sensors and force sensors that provide environmental feedback signals. The neural network controller is taught to control the robotic hand through training sets using back-propagation methods. The training sets are created by recording the control signals and the feedback signal as the robotic hand or a simulation of the robotic hand is moved through a representative grasping motion. The data recorded is divided into discrete increments of time and the feedback data is shifted out of phase with the control signal data so that the feedback signal data lag one time increment behind the control signal data. The modified data is presented to the neural network controller as a training set. The time lag introduced into the data allows the neural network controller to account for the temporal component of the robotic motion. Thus trained, the neural network controlled robotic hand is able to grasp a wide variety of different objects by generalizing from the training sets.
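    The phase-shifting of the training data can be sketched directly: each control sample is paired with the feedback sample from the previous time increment, so the network sees feedback that lags the control signal by one step. The arrays and the input layout below are illustrative, not taken from the patent.

```python
import numpy as np

# Recorded control commands and sensor feedback over one grasp motion
# (illustrative values, not from the patent).
control  = np.array([0.0, 0.2, 0.5, 0.9, 1.0])
feedback = np.array([0.0, 0.1, 0.3, 0.7, 1.0])

# Pair control[t] with feedback[t-1]: feedback lags by one increment.
lagged_feedback = feedback[:-1]
training_inputs = np.column_stack([control[1:], lagged_feedback])
print(training_inputs)
```

    Each row of `training_inputs` is then one back-propagation example, letting the network learn the temporal dependency between what was commanded and what the sensors reported a moment earlier.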

  6. Eclipse of the Floating Orbs: Controlling Robots on the International Space Station

    NASA Technical Reports Server (NTRS)

    Wheeler, D. W.

    2017-01-01

    I will describe the Control Station for a free-flying robot called Astrobee. Astrobee will serve as a mobile camera, sensor platform, and research testbed when it is launched to the International Space Station (ISS) in 2017. Astronauts on the ISS as well as ground-based users will control Astrobee using the Eclipse-based Astrobee Control Station. Designing the Control Station for use in space presented unique challenges, such as allowing the intuitive input of 3D information without a mouse or trackpad. Come to this talk to learn how Eclipse is used in an environment few humans have the chance to visit.

  7. The Structure, Design, and Closed-Loop Motion Control of a Differential Drive Soft Robot.

    PubMed

    Wu, Pang; Jiangbei, Wang; Yanqiong, Fei

    2018-02-01

    This article presents the structure, design, and motion control of an inchworm-inspired pneumatic soft robot that can perform differential movement. The robot mainly consists of two columns of pneumatic multi-airbag actuators, one sensor, one baseboard, front feet, and rear feet. Depending on the different inflation times of the left and right actuators, the robot can perform both linear and turning movements. The actuators are composed of multiple airbags, and the design of the airbags is analyzed. To deal with the nonlinear behavior of the soft robot, we use radial basis function neural networks to train its turning ability on three different surfaces and create a mathematical model relating the coefficient of friction, deflection angle, and inflation time. We then establish a closed-loop automatic control model using a three-axis electronic compass sensor. Finally, the automatic control model is verified by linear and turning movement experiments, in which the robot completes both movements under the closed-loop control system.
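    The differential-inflation steering idea can be sketched as a simple mapping from compass heading error to left/right inflation times; the base time, gain, and function name below are illustrative assumptions, not values from the paper.

```python
def inflation_times(heading_err_deg, base_time=0.5, gain=0.01):
    """Split actuation between left and right airbag columns based on
    the compass heading error. Times are in seconds; the base time and
    gain are illustrative, not taken from the paper."""
    delta = max(-base_time, min(base_time, gain * heading_err_deg))
    left = base_time + delta    # inflate one column longer to steer
    right = base_time - delta
    return left, right

print(inflation_times(0.0))    # on course: symmetric inflation
print(inflation_times(20.0))   # 20 deg off: bias toward one column
```

    Closing the loop then amounts to reading the compass each cycle, computing the heading error against the target bearing, and commanding the two valve timers with the returned pair.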

  8. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve the product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented to the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in the real-time image processing vision-based capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. 
The ARS currently has 18 degrees of freedom, made up of two articulated arms, a movable robot head with two charge-coupled device (CCD) cameras producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.

  9. Integration of advanced teleoperation technologies for control of space robots

    NASA Technical Reports Server (NTRS)

    Stagnaro, Michael J.

    1993-01-01

    Teleoperated robots require one or more humans to control actuators, mechanisms, and other robot equipment given feedback from onboard sensors. To accomplish this task, the human or humans require some form of control station. Desirable features of such a control station include operation by a single human, comfort, and natural human interfaces (visual, audio, motion, tactile, etc.). These interfaces should work to maximize performance of the human/robot system by streamlining the link between human brain and robot equipment. This paper describes development of a control station testbed with the characteristics described above. Initially, this testbed will be used to control two teleoperated robots. Features of the robots include anthropomorphic mechanisms, slaving to the testbed, and delivery of sensory feedback to the testbed. The testbed will make use of technologies such as helmet-mounted displays, voice recognition, and exoskeleton masters. It will allow for integration and testing of emerging telepresence technologies along with techniques for coping with control link time delays. Systems developed from this testbed could be applied to ground control of space-based robots. During man-tended operations, the Space Station Freedom may benefit from ground control of IVA or EVA robots performing science or maintenance tasks. Planetary exploration may also find advanced teleoperation systems to be very useful.

  10. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S.

    2014-01-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber optic force sensor utilizing the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into an MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render the force associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid-actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, with the slave robot residing in the scanner bore while the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot are presented to validate the tracking performance of the integrated system: the position tracking error is 0.318 mm and the sine-wave force tracking error is 2.227 N. PMID:25126446
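
    The force-position architecture described above can be illustrated with a minimal sketch. This is not the authors' implementation; the 1-DOF decomposition, gains, and the simple impedance law are illustrative assumptions: the slave tracks the scaled master position while the master renders the slave-side insertion force back to the operator through an impedance loop closed on the load cell.

    ```python
    # Minimal sketch of a bilateral force-position teleoperation law.
    # Gains and the 1-DOF impedance model are illustrative assumptions,
    # not values from the paper.

    def slave_position_command(master_pos_mm, scale=1.0):
        """Map the master's displacement onto the slave (needle) axis."""
        return scale * master_pos_mm

    def reflected_force(needle_force_n, reflection_gain=1.0):
        """Force the master should render, taken from the needle force sensor."""
        return reflection_gain * needle_force_n

    def master_impedance_force(pos_err_mm, vel_err_mm_s, k=0.5, b=0.05):
        """Impedance law F = K*e + B*de closing the master loop on its load cell."""
        return k * pos_err_mm + b * vel_err_mm_s
    ```

    In a full loop, `reflected_force` would be summed with the impedance term and compared against the master's load cell reading to drive the pneumatic actuator.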

  11. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback.

    PubMed

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S

    2013-01-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber optic force sensor utilizing the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into an MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render the force associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid-actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, with the slave robot residing in the scanner bore while the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot are presented to validate the tracking performance of the integrated system: the position tracking error is 0.318 mm and the sine-wave force tracking error is 2.227 N.

  12. Multi-Axis Force/Torque Sensor Based on Simply-Supported Beam and Optoelectronics.

    PubMed

    Noh, Yohan; Bimbo, Joao; Sareh, Sina; Wurdemann, Helge; Fraś, Jan; Chathuranga, Damith Suresh; Liu, Hongbin; Housden, James; Althoefer, Kaspar; Rhode, Kawal

    2016-11-17

    This paper presents a multi-axis force/torque sensor based on simply-supported beam and optoelectronic technology. The sensor's main advantages are: (1) Low power consumption; (2) low-level noise in comparison with conventional methods of force sensing (e.g., using strain gauges); (3) the ability to be embedded into different mechanical structures; (4) miniaturisation; (5) simple manufacture and customisation to fit a wide-range of robot systems; and (6) low-cost fabrication and assembly of sensor structure. For these reasons, the proposed multi-axis force/torque sensor can be used in a wide range of application areas including medical robotics, manufacturing, and areas involving human-robot interaction. This paper shows the application of our concept of a force/torque sensor to flexible continuum manipulators: A cylindrical MIS (Minimally Invasive Surgery) robot, and includes its design, fabrication, and evaluation tests.

  13. Small Business Innovation Research (SBIR) Program. FY 1991 Program Solicitation 91.2

    DTIC Science & Technology

    1991-07-01

    Based Robotic Control Systems Technology A91-034 Passive Sensor Self-Interference Cancellation A91-035 High Performance Propelling Charges A91-036...laboratory tests. A91-034 TITLE: Passive Sensor Self-Interference Cancellation CATEGORY: Exploratory Development OBJECTIVE: Develop practical and effective...acoustic sensor to detect, classify, identify, and locate targets is degraded by own-platform noise and local interference. Elementary

  14. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor, together with navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the elderly. This paper presents a recognition algorithm which finds the line of a series of Braille blocks, and the boundary line between a sidewalk and a roadway where a difference in level exists, from binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas in the path of the person accompanied by the guide robot.

  15. A Pneumatic Tactile Sensor for Co-Operative Robots

    PubMed Central

    He, Rui; Yu, Jianjun; Zuo, Guoyu

    2017-01-01

    Tactile sensors with comprehensive functions are urgently needed for advanced robots to co-exist and co-operate with human beings. Pneumatic tactile sensors based on an air bladder possess some noticeable advantages for human-robot interaction applications. In this paper, we construct a pneumatic tactile sensor and apply it to the fingertip of a robot hand to realize the sensing of force, vibration and slippage via the change in the pressure of the air bladder, and we utilize the sensor to perceive object features such as softness and roughness. The pneumatic tactile sensor has good linearity, repeatability and low hysteresis, and both its size and sensing range can be customized by using different materials and different thicknesses of the air bladder. It is also simple and cheap to fabricate. The pneumatic tactile sensor is therefore suitable for co-operative robots and can be widely utilized to improve the performance of service robots. Applied to the fingertip, it endows the robotic hand with the ability to co-operate with humans and handle fragile objects because of the inherent compliance of the air bladder. PMID:29125565
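
    A hedged sketch of how such a bladder-based sensor might be read out (the linear gain and the slip threshold are hypothetical calibration constants, not values from the paper): force follows the calibrated pressure rise, while slippage is flagged from high-frequency pressure oscillation.

    ```python
    def pressure_to_force(pressure_kpa, baseline_kpa, gain_n_per_kpa=0.8):
        """Linear force estimate from the bladder pressure rise.
        The paper reports good linearity; the gain here is a hypothetical
        calibration constant."""
        return max(0.0, (pressure_kpa - baseline_kpa) * gain_n_per_kpa)

    def detect_slip(pressure_samples_kpa, threshold_kpa=0.05):
        """Flag slippage when sample-to-sample pressure variation exceeds a
        threshold, i.e. when the signal shows high-frequency oscillation."""
        diffs = [abs(b - a) for a, b in
                 zip(pressure_samples_kpa, pressure_samples_kpa[1:])]
        return max(diffs, default=0.0) > threshold_kpa
    ```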

  16. A study of an assisting robot for mandible plastic surgery based on augmented reality.

    PubMed

    Shi, Yunyong; Lin, Li; Zhou, Chaozheng; Zhu, Ming; Xie, Le; Chai, Gang

    2017-02-01

    Mandible plastic surgery plays an important role in conventional plastic surgery; however, its success depends on the experience of the surgeon. In order to improve the effectiveness of the surgery and relieve the burden on surgeons, a mandible plastic surgery assisting robot based on an augmented reality technique was developed. Augmented reality assists surgeons with positioning. Fuzzy control theory was used for the control of the motor. During bone drilling, both the drill bit position and the force were measured by a force sensor, which was used to estimate the progress of the drilling procedure. An animal experiment was performed to verify the effectiveness of the robotic system. The position error was 1.07 ± 0.27 mm and the angle error was 5.59 ± 3.15°. The results show that the system provides sufficient accuracy for a precise drilling procedure. In addition, under the supervisory feedback of the sensor, an adequate safety level can be achieved for the robotic system. The system realizes accurate positioning and automatic drilling to solve the problems encountered in the drilling procedure, providing a method for future plastic surgery.
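
    The abstract does not give the rule base, so as a minimal sketch of the kind of fuzzy feed control that could govern drilling from measured force: triangular memberships classify the force as low, medium or high, and a weighted-average defuzzification yields the feed rate. Membership ranges, rule outputs, and the defuzzification choice are all illustrative assumptions, not the authors' controller.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_feed_rate(force_n):
        """Map drilling force (N) to a feed rate (mm/s) via a tiny rule base:
        LOW force -> fast feed, MEDIUM -> nominal, HIGH -> slow."""
        low = tri(force_n, -1.0, 0.0, 4.0)
        med = tri(force_n, 2.0, 5.0, 8.0)
        high = tri(force_n, 6.0, 10.0, 14.0)
        num = low * 2.0 + med * 1.0 + high * 0.2   # rule consequents (mm/s)
        den = low + med + high
        return 1.0 if den == 0 else num / den      # weighted-average defuzzification
    ```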

  17. A Review of Artificial Lateral Line in Sensor Fabrication and Bionic Applications for Robot Fish.

    PubMed

    Liu, Guijie; Wang, Anyi; Wang, Xinbao; Liu, Peng

    2016-01-01

    The lateral line is a system of sense organs that helps fish maneuver in dark environments. An artificial lateral line (ALL) imitates the structure of the lateral line in fish and provides an invaluable means for underwater sensing technology and robot fish control. This paper reviews ALL research, including sensor fabrication and applications to robot fish. The biophysics of the lateral line are first introduced to aid understanding of its structure and function. The design and fabrication of ALL sensors on the basis of various sensing principles are then presented. ALL systems are collections of sensors together with their carrier and control circuitry; their structure and hydrodynamic detection capabilities are reviewed. Finally, further research trends and open problems of ALL are discussed.

  18. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2012-01-01

    The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and to examine the concept of directed Emergence among a group of limited artificially intelligent agents. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound each individual termite does not carry a blueprint of the home in a global sense, but their interactions, based strictly on local desires, create a complex superstructure. Leveraging this Emergence concept in a simulation of cooperative agents (robots) will allow an examination of how well a non-directed group strategy achieves specific results. Specifically, the simulation will be a testbed to evaluate population-based robotic exploration and cooperative strategies, leveraging an evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents will act locally with non-local results. The rules by which the simulated robots interact will be reduced to the simplest set that produces the desired result, leveraging Emergence. Sensor malfunction and line-of-sight issues will be incorporated into the simulation. This approach falls under Swarm Robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm Robotics often draws on biologically inspired approaches; its research comes from observation of social insects, but also from data on herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration constitute the method under consideration for further study.

  19. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    NASA Astrophysics Data System (ADS)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprised of semi-autonomous ground and aerial robots for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure obstacles are avoided and that operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps users pinpoint source information and supports the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions were tested for source detection tasks. Results show that the novel augmented reality multi-robot control modes (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also reported.

  20. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  1. Controlling free flight of a robotic fly using an onboard vision sensor inspired by insect ocelli

    PubMed Central

    Fuller, Sawyer B.; Karpelson, Michael; Censi, Andrea; Ma, Kevin Y.; Wood, Robert J.

    2014-01-01

    Scaling a flying robot down to the size of a fly or bee requires advances in manufacturing, sensing and control, and will provide insights into mechanisms used by their biological counterparts. Controlled flight at this scale has previously required external cameras to provide the feedback to regulate the continuous corrective manoeuvres necessary to keep the unstable robot from tumbling. One stabilization mechanism used by flying insects may be to sense the horizon or Sun using the ocelli, a set of three light sensors distinct from the compound eyes. Here, we present an ocelli-inspired visual sensor and use it to stabilize a fly-sized robot. We propose a feedback controller that applies torque in proportion to the angular velocity of the source of light estimated by the ocelli. We demonstrate theoretically and empirically that this is sufficient to stabilize the robot's upright orientation. This constitutes the first known use of onboard sensors at this scale. Dipteran flies use halteres to provide gyroscopic velocity feedback, but it is unknown how other insects such as honeybees stabilize flight without these sensory organs. Our results, using a vehicle of similar size and dynamics to the honeybee, suggest how the ocelli could serve this role. PMID:24942846
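
    The feedback law described above can be sketched in a few lines. The intensity-difference velocity proxy and the torque gain below are illustrative assumptions; the estimator in the paper works from the actual ocelli geometry.

    ```python
    def light_velocity_estimate(intensity_prev, intensity_curr, dt):
        """Proxy for the angular velocity of the light source about one axis:
        finite difference of the imbalance between two opposing ocelli
        photodiode readings (illustrative, not the paper's estimator)."""
        imbalance_prev = intensity_prev[0] - intensity_prev[1]
        imbalance_curr = intensity_curr[0] - intensity_curr[1]
        return (imbalance_curr - imbalance_prev) / dt

    def stabilizing_torque(omega_est, k_damp=2e-7):
        """Torque commanded in proportion (and opposition) to the estimated
        angular velocity of the light source; k_damp is a hypothetical gain."""
        return -k_damp * omega_est
    ```

    The controller acts as pure damping on the estimated rotation of the light source, which is the proportional-to-angular-velocity law the abstract describes.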

  2. Intelligent robot trends for factory automation

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    1997-09-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent economic and technical trends. The robotics industry now has a billion-dollar market in the U.S. and is growing. Feasibility studies are presented which also show unaudited healthy rates of return for a variety of robotic applications. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. However, the road from inspiration to successful application is still long and difficult, often taking decades to achieve a new product. More cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit both industry and society.

  3. The Design and Implementation of a Semi-Autonomous Surf-Zone Robot Using Advanced Sensors and a Common Robot Operating System

    DTIC Science & Technology

    2011-06-01

    effective way-point navigation algorithm that interfaced with a Java-based graphical user interface (GUI), written by Uzun, for a robot named Bender [2...the angular acceleration, θ̈, or angular rate, θ̇. When considering a joint driven by an electric motor, the inertia and friction can be divided into...interactive simulations that can receive input from user controls, scripts, and other applications, such as Excel and MATLAB. One drawback is that the

  4. Bio-inspired vision based robot control using featureless estimations of time-to-contact.

    PubMed

    Zhang, Haijie; Zhao, Jianguo

    2017-01-31

    Marvelous vision-based dynamic behaviors of insects and birds, such as perching, landing, and obstacle avoidance, have inspired scientists to propose the idea of time-to-contact, defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since time-to-contact can be estimated directly from consecutive images using only a vision sensor, it is widely used by a variety of robots to fulfill tasks such as obstacle avoidance, docking, chasing, perching and landing. However, most existing methods for estimating time-to-contact need to extract and track features during the control process, which is time-consuming and cannot be applied to robots with limited computation power. In this paper, we adopt a featureless estimation method, extend it to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error-based controller with a gain-scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control are conducted using a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot being controlled to successfully dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.
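
    The core quantities can be illustrated with a toy sketch. Assumptions not in the abstract: time-to-contact is approximated here from the growth of an apparent-size measurement, tau ≈ s / (ds/dt), and the Kalman smoothing is reduced to a scalar random-walk filter with made-up noise parameters.

    ```python
    def time_to_contact(size_prev, size_curr, dt):
        """tau ≈ s / (ds/dt): the apparent size s of a surface grows as the
        camera approaches; if it is not growing, contact is never reached."""
        ds_dt = (size_curr - size_prev) / dt
        return float('inf') if ds_dt <= 0 else size_curr / ds_dt

    class ScalarKalman:
        """1-D Kalman filter (random-walk model) to smooth noisy tau
        estimates; q and r are illustrative tuning values."""
        def __init__(self, x0, p0=1.0, q=0.01, r=0.5):
            self.x, self.p, self.q, self.r = x0, p0, q, r

        def update(self, z):
            self.p += self.q                  # predict: uncertainty grows
            k = self.p / (self.p + self.r)    # Kalman gain
            self.x += k * (z - self.x)        # correct toward measurement
            self.p *= (1.0 - k)
            return self.x
    ```

    For an object approached at constant speed, apparent size scales as 1/distance, so the finite-difference estimate recovers the true time-to-contact up to discretization error.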

  5. A Raman spectroscopy bio-sensor for tissue discrimination in surgical robotics.

    PubMed

    Ashok, Praveen C; Giardini, Mario E; Dholakia, Kishan; Sibbett, Wilson

    2014-01-01

    We report the development of a fiber-based Raman sensor to be used in tumour margin identification during endoluminal robotic surgery. Although this is a generic platform, the sensor we describe was adapted for the ARAKNES (Array of Robots Augmenting the KiNematics of Endoluminal Surgery) robotic platform. On such a platform, the Raman sensor is intended to identify ambiguous tissue margins during robot-assisted surgeries. To maintain sterility of the probe during surgical intervention, a disposable sleeve was specially designed. A straightforward user-compatible interface was implemented where a supervised multivariate classification algorithm was used to classify different tissue types based on specific Raman fingerprints so that it could be used without prior knowledge of spectroscopic data analysis. The protocol avoids inter-patient variability in data and the sensor system is not restricted for use in the classification of a particular tissue type. Representative tissue classification assessments were performed using this system on excised tissue. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Fire Extinguisher Robot Using Ultrasonic Camera and Wi-Fi Network Controlled with Android Smartphone

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Purba, H. A.; Efendi, S.; Fahmi, F.

    2017-03-01

    Fire disasters can occur at any time and result in high losses. Firefighters often cannot access the source of a fire due to building damage and very high temperatures, or even due to the presence of explosive materials. With such constraints and the high risk involved in handling a fire, a technological breakthrough that can help fight fires is needed. Our paper proposes the use of robots, controlled from a safe distance, to extinguish the fire and reduce that risk. A fire extinguisher robot was assembled to extinguish the fire using a water pump as the actuator. The robot's movement was controlled with an Android smartphone via a Wi-Fi network, utilizing the Wi-Fi module contained in the robot. User commands were sent to the microcontroller on the robot and then translated into robot movement. We used an ATmega8 as the main microcontroller. The robot was equipped with a camera and ultrasonic sensors. The camera provided feedback to the user and helped locate the source of the fire; the ultrasonic sensors were used to avoid collisions during movement. The camera feedback was displayed on the smartphone screen. In the lab testing environment, the robot moved following user commands such as turn right, turn left, forward and backward. The ultrasonic sensors worked well, stopping the robot at a distance of less than 15 cm. In the fire test, the robot performed the fire-extinguishing task properly.

  7. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots

    PubMed Central

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to traditional methods. The event is defined to reflect the need for global information: it fires when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorms NXT and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. Robustness and stability are observed during long walk tests in both indoor and outdoor environments. PMID:24152933
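
    The event-triggered fusion can be sketched in one dimension (a minimal illustration, not the authors' implementation; the noise parameters, covariance limit, and random-walk model are assumptions): dead reckoning runs every step, while the global sensor is fused only when the error covariance crosses the limit.

    ```python
    class EventBasedFusion:
        """1-D sketch of event-based Kalman fusion: integrate IMU/odometry
        increments every step, but correct with the global sensor only when
        the error covariance exceeds p_limit (the 'event')."""
        def __init__(self, x0=0.0, p0=0.0, q=0.05, r=0.01, p_limit=0.2):
            self.x, self.p = x0, p0
            self.q, self.r, self.p_limit = q, r, p_limit
            self.global_updates = 0   # counts how often the global sensor was used

        def predict(self, dx):
            self.x += dx              # dead-reckoned increment from the IMU
            self.p += self.q          # uncertainty grows without global data

        def maybe_correct(self, z_global):
            if self.p > self.p_limit:             # event: covariance too large
                k = self.p / (self.p + self.r)
                self.x += k * (z_global - self.x)
                self.p *= (1.0 - k)
                self.global_updates += 1
    ```

    Because corrections only fire on the event, the global sensor (and its bandwidth and processing cost) is used far less often than once per cycle, which is the resource saving the paper targets.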

  8. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.

    PubMed

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-10-21

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to traditional methods. The event is defined to reflect the need for global information: it fires when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorms NXT and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. Robustness and stability are observed during long walk tests in both indoor and outdoor environments.

  9. Peg-in-Hole Assembly Based on Two-phase Scheme and F/T Sensor for Dual-arm Robot

    PubMed Central

    Zhang, Xianmin; Zheng, Yanglong; Ota, Jun; Huang, Yanjiang

    2017-01-01

    This paper focuses on peg-in-hole assembly based on a two-phase scheme and a force/torque (F/T) sensor for a compliant dual-arm robot, the Baxter robot. The coordinated operations of human beings in assembly applications are applied to the behaviors of the robot. A two-phase assembly scheme is proposed to overcome the inaccurate positioning of the compliant dual-arm robot. The position and orientation of the assembly pieces are adjusted in turn in an active compliant manner according to the forces and torques measured by a six degrees-of-freedom (6-DOF) F/T sensor. Experiments are conducted to verify the effectiveness and efficiency of the proposed assembly scheme. The performance of the dual-arm robot is consistent with that of human beings in the peg-in-hole assembly process. Pegs and holes with 0.5 mm clearance, both round and square, can be assembled successfully. PMID:28862691
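
    A single correction step of such a two-phase, F/T-guided adjustment can be sketched as follows. The admittance-style gains and dead-bands are illustrative assumptions, not the authors' values: phase 1 translates the peg against the lateral contact force, phase 2 rotates it against the contact torque.

    ```python
    def compliance_step(force_xy, torque_xy, phase,
                        kp=0.001, ko=0.01, f_dead=0.5, t_dead=0.05):
        """One admittance-style correction from F/T readings.
        phase 1: position adjustment opposing lateral forces (N -> m);
        phase 2: orientation adjustment opposing torques (N*m -> rad).
        Dead-bands ignore sensor noise below the contact threshold."""
        if phase == 1:
            dpos = tuple(-kp * f if abs(f) > f_dead else 0.0 for f in force_xy)
            drot = (0.0, 0.0)
        else:
            dpos = (0.0, 0.0)
            drot = tuple(-ko * t if abs(t) > t_dead else 0.0 for t in torque_xy)
        return dpos, drot
    ```

    Iterating phase 1 until the lateral forces fall inside the dead-band, then phase 2 until the torques do, mirrors the position-then-orientation ordering described in the abstract.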

  10. Peg-in-Hole Assembly Based on Two-phase Scheme and F/T Sensor for Dual-arm Robot.

    PubMed

    Zhang, Xianmin; Zheng, Yanglong; Ota, Jun; Huang, Yanjiang

    2017-09-01

    This paper focuses on peg-in-hole assembly based on a two-phase scheme and a force/torque (F/T) sensor for a compliant dual-arm robot, the Baxter robot. The coordinated operations of human beings in assembly applications are applied to the behaviors of the robot. A two-phase assembly scheme is proposed to overcome the inaccurate positioning of the compliant dual-arm robot. The position and orientation of the assembly pieces are adjusted in turn in an active compliant manner according to the forces and torques measured by a six degrees-of-freedom (6-DOF) F/T sensor. Experiments are conducted to verify the effectiveness and efficiency of the proposed assembly scheme. The performance of the dual-arm robot is consistent with that of human beings in the peg-in-hole assembly process. Pegs and holes with 0.5 mm clearance, both round and square, can be assembled successfully.

  11. Smart Hand For Manipulators

    NASA Astrophysics Data System (ADS)

    Fiorini, Paolo

    1987-10-01

    Sensor-based, computer-controlled end effectors for mechanical arms are receiving more and more attention in the robotics industry, because commonly available grippers are only adequate for simple pick-and-place tasks. This paper describes the current status of the research at JPL on a smart hand for a Puma 560 robot arm. The hand is a self-contained, autonomous system, capable of executing high-level commands from a supervisory computer. The mechanism consists of parallel fingers powered by a DC motor and controlled by a microprocessor embedded in the hand housing. Special sensors are integrated into the hand for measuring the grasp force of the fingers, and for measuring forces and torques applied between the arm and the surrounding environment. Fingers can be exercised under position, velocity and force control modes. The single-chip microcomputer in the hand executes the tasks of communication, data acquisition and sensor-based motor control, with a sample cycle of 2 ms and a transmission rate of 9600 baud. The smart hand described in this paper represents a new development in the area of end effector design because of its multi-functionality and autonomy. It will also be a versatile testbed for experimenting with advanced control schemes for dexterous manipulation.

  12. Adaptive Trajectory Tracking of Nonholonomic Mobile Robots Using Vision-Based Position and Velocity Estimation.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Jiang, Tianjiao; Wang, Kai; Fang, Mu

    2018-02-01

    Despite tremendous efforts made over the years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem. The major reason is the difficulty of localizing the robot using its onboard sensors only. In this paper, a newly designed adaptive trajectory TC method is proposed for the NMR without measurements of its position, orientation, or velocity. The controller is designed on the basis of a novel algorithm that estimates the position and velocity of the robot online from the visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm causes the TC errors to converge asymptotically to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.

  13. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    NASA Technical Reports Server (NTRS)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package "Argon" is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  14. Bio-inspired grasp control in a robotic hand with massive sensorial input.

    PubMed

    Ascari, Luca; Bertocchi, Ulisse; Corradi, Paolo; Laschi, Cecilia; Dario, Paolo

    2009-02-01

    The capability of grasping and lifting an object in a suitable, stable and controlled way is an outstanding feature for a robot and, thus far, one of the major problems to be solved in robotics. No robotic tools able to control a grasp as skilfully as, for instance, the human hand does have been demonstrated to date. Because of its central importance in science and in many applications, from biomedicine to manufacturing, the issue has been the subject of deep scientific investigation in both neurophysiology and robotics. While the former contributes a profound understanding of the real-time control of slippage and grasp force in the human hand, the latter increasingly tries to reproduce, or take inspiration from, nature's approach by means of hardware and software technology. In this regard, one of the major constraints robotics has to overcome is the real-time processing of the large amount of data generated by tactile sensors while grasping, which poses serious problems for the available computational power. In this paper a bio-inspired approach to tactile data processing is followed in order to design and test a hardware-software robotic architecture based on the parallel processing of a large number of tactile sensing signals. The working principle of the architecture is based on the cellular nonlinear/neural network (CNN) paradigm, using both hand shape and spatio-temporal features obtained from an array of microfabricated force sensors to control the sensory-motor coordination of the robotic system. Prototypical grasping tasks were selected to measure the system performance on a computer-interfaced robotic hand. Successful grasps of several objects completely unknown to the robot, e.g. soft and deformable objects like plastic bottles, soft balls, and Japanese tofu, have been demonstrated.

  15. System Design and Locomotion of Superball, an Untethered Tensegrity Robot

    NASA Technical Reports Server (NTRS)

    Sabelhaus, Andrew P.; Bruce, Jonathan; Caluwaerts, Ken; Manovi, Pavlo; Firoozi, Roya Fallah; Dobi, Sarah; Agogino, Alice M.; Sunspiral, Vytas

    2015-01-01

    The Spherical Underactuated Planetary Exploration Robot ball (SUPERball) is an ongoing project within NASA Ames Research Center's Intelligent Robotics Group and the Dynamic Tensegrity Robotics Lab (DTRL). The current SUPERball is the first full prototype of this tensegrity robot platform, eventually destined for space exploration missions. This work, building on prior published discussions of individual components, presents the fully-constructed robot. Various design improvements are discussed, as well as testing results of the sensors and actuators that illustrate system performance. Basic low-level motor position controls are implemented and validated against sensor data, which show SUPERball to be uniquely suited for highly dynamic state trajectory tracking. Finally, SUPERball is shown in a simple example of locomotion. This implementation of a basic motion primitive shows SUPERball in untethered control.

  16. Robotic System For Greenhouse Or Nursery

    NASA Technical Reports Server (NTRS)

    Gill, Paul; Montgomery, Jim; Silver, John; Heffelfinger, Neil; Simonton, Ward; Pease, Jim

    1993-01-01

    Report presents additional information about robotic system described in "Robotic Gripper With Force Control And Optical Sensors" (MFS-28537). "Flexible Agricultural Robotics Manipulator System" (FARMS) serves as prototype of robotic systems intended to enhance productivities of agricultural assembly-line-type facilities in large commercial greenhouses and nurseries.

  17. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    PubMed Central

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and thanks to closed-loop control, the absolute positioning accuracy was reduced to that of the navigation camera, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  18. Speed control for a mobile robot

    NASA Astrophysics Data System (ADS)

    Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. Speed control of the traction motor is essential for safe operation of a mobile robot: autonomous operation of a vehicle demands safe, runaway-free and collision-free behavior. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller, and the traction motor is driven via an EV-1 speed control. Testing of the system was done both in the lab and on an outdoor course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.

  19. Learning for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed, but in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits point-to-point and controlled-path operation in a changing environment. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing, a problem that can be addressed with learning control. Learning the parameters of a proportional, integral and derivative (PID) controller with an artificial neural network provides an adaptive and robust control. Learning may also be used for path following: simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles, and if a situation is performed repetitively, learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed, in which a critic provides a grade to the controller of an action module such as a robot. A creative control process that goes "beyond the adaptive critic" is used, and a mathematical model of the creative control process is presented that illustrates its use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
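
    As an illustration of the kind of learning control discussed above, the following sketch tunes the gains of a PID controller by a simple greedy coordinate search against a hypothetical first-order motor model. The plant, step sizes, and cost function are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: greedy tuning of PID gains against an assumed first-order
# motor model. Plant, time step, and search step are illustrative only.

def simulate(kp, ki, kd, setpoint=1.0, dt=0.01, steps=500, tau=0.5):
    """Run a PID loop against a first-order plant; return integrated squared error."""
    y, integral, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        y += dt * (-y + u) / tau                    # first-order plant dynamics
        prev_err = err
        cost += err * err * dt
    return cost

def tune(gains=(1.0, 0.0, 0.0), step=0.2, iters=40):
    """Coordinate search over (kp, ki, kd), keeping only cost-reducing moves."""
    gains, best = list(gains), simulate(*gains)
    for _ in range(iters):
        for i in range(3):
            for delta in (step, -step):
                trial = gains[:]
                trial[i] = max(0.0, trial[i] + delta)
                c = simulate(*trial)
                if c < best:            # accept only improving moves
                    gains, best = trial, c
    return gains, best

tuned_gains, tuned_cost = tune()
```

    Accepting only cost-reducing moves guarantees monotone improvement and rejects destabilizing gain combinations, whose simulated cost diverges.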

  20. Robotics research projects report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsia, T.C.

    The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)

  1. A Review of Artificial Lateral Line in Sensor Fabrication and Bionic Applications for Robot Fish

    PubMed Central

    Wang, Anyi; Wang, Xinbao; Liu, Peng

    2016-01-01

    The lateral line is a system of sense organs that helps fish maneuver in dark environments. The artificial lateral line (ALL) imitates the structure of the lateral line in fish and provides an invaluable means for underwater sensing technology and robot fish control. This paper reviews the ALL, including sensor fabrication and applications to robot fish. The biophysics of the lateral line is first introduced to enhance the understanding of its structure and function. The design and fabrication of ALL sensors on the basis of various sensing principles are then presented. ALL systems, collections of such sensors together with a carrier and control circuitry, are reviewed with respect to their structure and hydrodynamic detection. Finally, further research trends and open problems of the ALL are discussed. PMID:28115825

  2. Conference on Space and Military Applications of Automation and Robotics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.

  3. Multi-Axis Force/Torque Sensor Based on Simply-Supported Beam and Optoelectronics

    PubMed Central

    Noh, Yohan; Bimbo, Joao; Sareh, Sina; Wurdemann, Helge; Fraś, Jan; Chathuranga, Damith Suresh; Liu, Hongbin; Housden, James; Althoefer, Kaspar; Rhode, Kawal

    2016-01-01

    This paper presents a multi-axis force/torque sensor based on simply-supported beam and optoelectronic technology. The sensor’s main advantages are: (1) Low power consumption; (2) low-level noise in comparison with conventional methods of force sensing (e.g., using strain gauges); (3) the ability to be embedded into different mechanical structures; (4) miniaturisation; (5) simple manufacture and customisation to fit a wide-range of robot systems; and (6) low-cost fabrication and assembly of sensor structure. For these reasons, the proposed multi-axis force/torque sensor can be used in a wide range of application areas including medical robotics, manufacturing, and areas involving human–robot interaction. This paper shows the application of our concept of a force/torque sensor to flexible continuum manipulators: A cylindrical MIS (Minimally Invasive Surgery) robot, and includes its design, fabrication, and evaluation tests. PMID:27869689

  4. Urban search mobile platform modeling in hindered access conditions

    NASA Astrophysics Data System (ADS)

    Barankova, I. I.; Mikhailova, U. V.; Kalugina, O. B.; Barankov, V. V.

    2018-05-01

    The article explores control-system simulation and the design of an experimental model of a rescue robot mobile platform. The functional interface, a structural-functional diagram of the mobile platform control unit, and a functional control scheme for the mobile platform of the rescue robot were modeled. The task of designing a mobile platform for urban search in hindered access conditions is realized through the use of a mechanical base with a chassis and crawler drive, a warning device, human-heat sensors and a microcontroller based on the Arduino platform.

  5. A PC-Based Controller for Dextrous Arms

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Seraji, Homayoun; Long, Mark

    1996-01-01

    This paper describes the architecture and performance of a PC-based controller for 7-DOF dextrous manipulators. The computing platform is a 486-based personal computer equipped with a bus extender to access the robot Multibus controller, together with a single board computer as the graphical engine, and with a parallel I/O board to interface with a force-torque sensor mounted on the manipulator wrist.

  6. Intelligent robot trends for 1998

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    1998-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent technical and economic trends. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has a 1.1-billion-dollar market in the U.S. and is growing. Feasibility study results are presented which also show decreasing costs for robots and unaudited but healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society.

  7. A Novel Tactile Sensor with Electromagnetic Induction and Its Application on Stick-Slip Interaction Detection

    PubMed Central

    Liu, Yanjie; Han, Haijun; Liu, Tao; Yi, Jingang; Li, Qingguo; Inoue, Yoshio

    2016-01-01

    Real-time detection of contact states, such as stick-slip interaction between a robot and an object on its end effector, is crucial for the robot to grasp and manipulate the object steadily. This paper presents a novel tactile sensor based on electromagnetic induction and its application to stick-slip interaction detection. An equivalent cantilever-beam model of the tactile sensor was built, capable of relating the sensor output to the friction applied on the sensor. With the tactile sensor, a new method to detect stick-slip interaction on the contact surface between the object and the sensor is proposed, based on the characteristics of friction change. Furthermore, a prototype was developed for a typical application, stable wafer transfer on a wafer transfer robot, by considering the spatial magnetic field distribution and the sensor size according to the requirements of wafer transfer. The experimental results validate the sensing mechanism of the tactile sensor and verify its feasibility for detecting stick-slip on the contact surface between the wafer and the sensor. The sensing mechanism also provides a new approach to detecting the contact state on soft-rigid surfaces in other robot-environment interaction systems. PMID:27023545
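
    The idea of detecting slip from characteristic friction changes can be sketched minimally, under the simplifying assumption that a sufficiently fast change in the measured friction signal marks slip onset. The sampling period, threshold, and signal below are invented, not taken from the sensor described above.

```python
# Hedged sketch: flag slip onset when the measured friction changes faster
# than a threshold. Sampling period, threshold, and signal are all invented.

def detect_slip(friction, dt=0.001, threshold=100.0):
    """Return per-sample flags: True where |d(friction)/dt| exceeds threshold."""
    return [abs(cur - prev) / dt > threshold
            for prev, cur in zip(friction, friction[1:])]

# A steady grasp followed by a sudden friction drop (a simulated slip event):
signal = [0.20] * 5 + [0.05] * 5
slips = detect_slip(signal)
```

    A real detector would combine such a rate test with filtering to reject measurement noise.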

  8. Assisted Perception, Planning and Control for Remote Mobility and Dexterous Manipulation

    DTIC Science & Technology

    2017-04-01

    on unmanned aerial vehicles (UAVs). The underlying algorithm is based on an Extended Kalman Filter (EKF) that simultaneously estimates robot state...and sensor biases. The filter developed provided a probabilistic fusion of sensor data from many modalities to produce a single consistent position...estimation for a walking humanoid. Given a prior map using a Gaussian particle filter, the LIDAR-based system is able to provide a drift-free

  9. Fused smart sensor network for multi-axis forward kinematics estimation in industrial robots.

    PubMed

    Rodriguez-Donate, Carlos; Osornio-Rios, Roque Alfredo; Rivera-Guillen, Jesus Rooney; Romero-Troncoso, Rene de Jesus

    2011-01-01

    Flexible manipulator robots have wide industrial applications. Robot performance requires adequately sensing the robot's position and orientation, known as the forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, but these cannot detect some mechanical deformations that decrease the accuracy of the robot's position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents online measurement of the joints' angular positions and online forward kinematics estimation. The contribution of this work is a fused smart-sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. To evaluate the smart sensor network's performance, three real-operation-oriented paths are executed and monitored on a 6-degree-of-freedom robot.
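
    The Kalman-filter fusion idea can be sketched with a scalar filter for one joint: the prediction integrates the angular rate and the correction uses noisy encoder angles. The one-dimensional state and the noise covariances q and r are illustrative assumptions, not the smart processor's actual design.

```python
# Hedged sketch (not the paper's actual filter): a scalar Kalman filter that
# fuses integrated angular rate (prediction) with encoder angle readings
# (correction) for a single joint. q and r are assumed noise levels.

def kalman_joint_angle(rates, encoder, dt=0.01, q=1e-4, r=1e-2):
    """rates: angular velocity per step; encoder: angle measurement per step."""
    theta, p = encoder[0], 1.0        # initial angle estimate and covariance
    estimates = []
    for w, z in zip(rates, encoder):
        theta += w * dt               # predict: integrate the angular rate
        p += q                        # inflate covariance by process noise
        k = p / (p + r)               # Kalman gain
        theta += k * (z - theta)      # correct with the encoder reading
        p *= 1.0 - k
        estimates.append(theta)
    return estimates
```

    With a consistent rate signal the estimate locks onto the true trajectory; with a noisy encoder the correction step averages the noise out over time.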

  10. Cooperative system and method using mobile robots for testing a cooperative search controller

    DOEpatents

    Byrne, Raymond H.; Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.

    2002-01-01

    A test system for testing a controller provides a way to use large numbers of miniature mobile robots to test a cooperative search controller in a test area, where each mobile robot has a sensor, a communication device, a processor, and a memory. A method of using a test system provides a way for testing a cooperative search controller using multiple robots sharing information and communicating over a communication network.

  11. A Soft Parallel Kinematic Mechanism.

    PubMed

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

    In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled shape-memory alloy actuators. State observation and feedback are accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that the control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.

  12. Design of a Soft Robot with Multiple Motion Patterns Using Soft Pneumatic Actuators

    NASA Astrophysics Data System (ADS)

    Miao, Yu; Dong, Wei; Du, Zhijiang

    2017-11-01

    Soft robots are made of soft materials and have good flexibility and infinite degrees of freedom in theory. These properties enable soft robots to work in narrow space and adapt to external environment. In this paper, a 2-DOF soft pneumatic actuator is introduced, with two chambers symmetrically distributed on both sides and a jamming cylinder along the axis. Fibers are used to constrain the expansion of the soft actuator. Experiments are carried out to test the performance of the soft actuator, including bending and elongation characteristics. A soft robot is designed and fabricated by connecting four soft pneumatic actuators to a 3D-printed board. The soft robotic system is then established. The pneumatic circuit is built by pumps and solenoid valves. The control system is based on the control board Arduino Mega 2560. Relay modules are used to control valves and pressure sensors are used to measure pressure in the pneumatic circuit. Experiments are conducted to test the performance of the proposed soft robot.

  13. Research of the master-slave robot surgical system with the function of force feedback.

    PubMed

    Shi, Yunyong; Zhou, Chaozheng; Xie, Le; Chen, Yongjun; Jiang, Jun; Zhang, Zhenfeng; Deng, Ze

    2017-12-01

    Surgical robots lack force feedback, which may lead to operation errors. In order to improve surgical outcomes, this research developed a new master-slave surgical robot designed with an integrated force sensor. The new master-slave structure employs a force feedback mechanism, with a six-dimensional force sensor mounted on the tip of the slave robot's actuator. Sliding mode control was adopted to control the slave robot. Following the movement of the master system manipulated by the surgeon, the slave's movement and the force feedback function were validated: the motion was completed, the standard deviation was calculated, and the force data were recorded. Hence, force feedback was realized in the experiment. The surgical robot can help surgeons complete trajectory motions with haptic sensation. Copyright © 2017 John Wiley & Sons, Ltd.
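
    Sliding mode control of the kind mentioned above can be sketched for a single joint modeled, purely for illustration, as a double integrator. The sliding-surface slope, gain, and tanh boundary layer below are assumptions, not the paper's controller.

```python
import math

# Hedged sketch of sliding mode control for a joint modeled as a double
# integrator (x'' = u). lam, k, and the tanh boundary layer are assumed.

def smc(x, v, x_ref, lam=5.0, k=10.0, width=0.1):
    """u = -k * tanh(s / width), with sliding surface s = v + lam * (x - x_ref)."""
    s = v + lam * (x - x_ref)
    return -k * math.tanh(s / width)   # smoothed sign() to curb chattering

x, v, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    u = smc(x, v, x_ref=1.0)
    v += dt * u                        # double-integrator plant dynamics
    x += dt * v
```

    Once the state reaches the surface s = 0, it slides along v = -lam * (x - x_ref), so the position error decays at the rate set by lam regardless of plant disturbances within the gain's reach.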

  14. Utilizing Glove-Based Gestures and a Tactile Vest Display for Covert Communications and Robot Control

    DTIC Science & Technology

    2014-06-01

    transmitted from a controller mechanism that contains inertial measurement unit (IMU) sensors to sense rotation and acceleration of movement. Earlier...assets, and standard hand signal commands can be presented to human team members via a variety of modalities. IMU sensor technologies placed on the body...obstacle event (e.g., climbing, crawling, combat roll, running) and between obstacles (i.e., walking). The following analyses are for each task

  15. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, simply planning and following a path may not guarantee competent performance for dynamic obstacle avoidance, because the robot acquires only limited information from the environment through its local vision sensor. It is therefore essential to update its local map as frequently as possible through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for each criterion, a fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, the partial evaluation functions for self-localization error and surrounding obstacles are also used to generate virtual dynamic obstacles for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with a conventional weighted-sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
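
    The fuzzy-integral evaluation of candidate gaze directions can be sketched with a discrete Choquet integral. The criterion names, scores, and the additive fuzzy measure below are illustrative stand-ins for the paper's actual measure of degrees of consideration.

```python
# Hedged sketch of fuzzy-integral gaze evaluation using a discrete Choquet
# integral. Criteria, scores, and the additive measure are invented examples.

def choquet(scores, measure):
    """Choquet integral of criterion scores w.r.t. a fuzzy measure.

    scores: dict criterion -> partial evaluation in [0, 1]
    measure: callable, frozenset of criteria -> weight in [0, 1]
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending scores
    total, prev = 0.0, 0.0
    for i, (_, s) in enumerate(items):
        upper = frozenset(name for name, _ in items[i:])   # criteria scoring >= s
        total += (s - prev) * measure(upper)
        prev = s
    return total

weights = {'map': 0.4, 'waypoint': 0.3, 'localization': 0.2, 'obstacle': 0.1}
measure = lambda subset: sum(weights[c] for c in subset)   # additive example measure

candidates = {   # hypothetical partial evaluations for two gaze directions
    'left':  {'map': 0.2, 'waypoint': 0.9, 'localization': 0.5, 'obstacle': 0.7},
    'ahead': {'map': 0.6, 'waypoint': 0.4, 'localization': 0.6, 'obstacle': 0.2},
}
best = max(candidates, key=lambda d: choquet(candidates[d], measure))
```

    With an additive measure the Choquet integral reduces to a plain weighted sum; the point of a non-additive measure is that it can model criteria that reinforce each other or are partly redundant.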

  16. Dynamics, control and sensor issues pertinent to robotic hands for the EVA retriever system

    NASA Technical Reports Server (NTRS)

    Mclauchlan, Robert A.

    1987-01-01

    Basic dynamics, sensor, control, and related artificial intelligence issues pertinent to smart robotic hands for the Extra Vehicular Activity (EVA) Retriever system are summarized and discussed. These smart hands are to be used as end effectors on arms attached to manned maneuvering units (MMU). The Retriever robotic systems comprised of MMU, arm and smart hands, are being developed to aid crewmen in the performance of routine EVA tasks including tool and object retrieval. The ultimate goal is to enhance the effectiveness of EVA crewmen.

  17. Hierarchical Shared Control of Cane-Type Walking-Aid Robot

    PubMed Central

    Tao, Chunjing

    2017-01-01

    A hierarchical shared-control method for a walking-aid robot, combining human motion intention recognition and an obstacle emergency-avoidance method based on an artificial potential field (APF), is proposed in this paper. The human motion intention is obtained from the interaction force measurements of the sensory system, composed of four force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-facing laser range finder (LRF) is applied to detect obstacles and guide the operator based on the repulsive force calculated with the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies for the different states of obstacles or emergency cases is also proposed. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method based on the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method. PMID:29093805
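
    The APF repulsion used for guidance can be sketched with the classic potential-field gradient; the gain eta and influence distance d0 below are assumed values, not those of the walking-aid robot.

```python
import math

# Hedged sketch of classic APF repulsion: obstacles within an influence
# distance d0 push the robot away; farther obstacles exert no force.

def repulsive_force(robot, obstacles, eta=1.0, d0=1.5):
    """Sum the repulsion vectors from all obstacles acting on the robot."""
    fx = fy = 0.0
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:
            mag = eta * (1.0 / d - 1.0 / d0) / (d * d)   # gradient magnitude
            fx += mag * dx / d                           # direction: away from obstacle
            fy += mag * dy / d
    return fx, fy
```

    The magnitude grows without bound as the distance shrinks, which is what lets the repulsion override the attractive guidance near an obstacle.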

  18. Modeling Analysis of a Six-axis Force/Tactile Sensor for Robot Fingers and a Method for Decreasing Error

    NASA Astrophysics Data System (ADS)

    Luo, Minghua; Shimizu, Etsuro; Zhang, Feifei; Ito, Masanori

    This paper describes a six-axis force/tactile sensor for robot fingers and proposes a mathematical model of the sensor. With this model, the grasping force, its moments, and the touching position of a robot finger holding an object can be calculated. A new sensor is fabricated based on this model, with the elastic sensing unit made of a brass plate, and a new compensation method for decreasing error is proposed. The performance of this sensor is then examined: the test results show approximate agreement between the theoretical and measured input-output relationship of the sensor, and the compensated sensor clearly performs better than the sensor without compensation.

  19. A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment

    PubMed Central

    Gao, Junyao; Zhao, Fangzhou; Liu, Yi

    2017-01-01

    This paper introduces a search-and-rescue robot system used for remote sensing of the underground coal mine environment, composed of an operating control unit and two mobile robots with explosion-proof and waterproof construction. The robot system is designed to observe and collect information about the coal mine environment through remote control, so the system can be regarded as a multifunction sensor that realizes remote sensing. When the robot system detects danger, it sends out signals to warn rescuers to keep away. Each robot carries two gas sensors, two cameras, a two-way audio system, a 1 km-long fiber-optic cable for communication and a mechanical explosion-proof manipulator. In particular, the manipulator is a novel explosion-proof manipulator for clearing obstacles, which has 3 degrees of freedom but is driven by only two motors. Furthermore, the two robots can communicate in series over 2 km with the operating control unit. The development of the robot system may provide a reference for developing future search-and-rescue systems. PMID:29065560

  20. Evolutionary online behaviour learning and adaptation in real robots.

    PubMed

    Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne

    2017-07-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.

  1. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.
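
    Diffusive coupling of the kind used in the neuronal network controller can be sketched with simple linear nodes. The node dynamics and fixed coupling weights below are illustrative assumptions; the paper adapts the weights and uses electronic neurons rather than this toy model.

```python
# Hedged sketch: diffusively coupled linear nodes, x_i' = -x_i + sum_j w_ij (x_j - x_i),
# synchronize when the coupling is strong enough. Model and weights are invented.

def step(states, weights, dt=0.01):
    """One Euler step of the diffusively coupled network."""
    n = len(states)
    return [states[i] + dt * (-states[i]
            + sum(weights[i][j] * (states[j] - states[i]) for j in range(n)))
            for i in range(n)]

states = [1.0, -0.5, 0.3]                  # nonidentical initial conditions
w = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]      # all-to-all diffusive coupling
for _ in range(2000):
    states = step(states, w)
spread = max(states) - min(states)         # synchronization error across nodes
```

    The coupling term vanishes once all states agree, which is why diffusive coupling enforces synchrony without changing the synchronized trajectory itself.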

  2. Design of a Vision-Based Sensor for Autonomous Pig House Cleaning

    NASA Astrophysics Data System (ADS)

    Braithwaite, Ian; Blanke, Mogens; Zhang, Guo-Qiang; Carstensen, Jens Michael

    2005-12-01

    Current pig house cleaning procedures are hazardous to the health of farm workers, and yet necessary if the spread of disease between batches of animals is to be satisfactorily controlled. Autonomous cleaning using robot technology offers salient benefits. This paper addresses the feasibility of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning.
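    The Bayesian discriminator described above can be sketched with a single spectral-reflectance feature and Gaussian class-conditional densities; the means, variances and prior below are illustrative placeholders, not the measured pig-house values:

```python
import math

def gaussian(x, mu, var):
    """Gaussian probability density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(reflectance, p_dirty=0.3):
    """Bayes rule on one spectral-reflectance feature: pick the class
    with the larger (unnormalised) posterior."""
    mu_c, var_c = 0.7, 0.01   # clean surfaces: brighter, illustrative
    mu_d, var_d = 0.3, 0.02   # dirty surfaces: darker, illustrative
    post_d = gaussian(reflectance, mu_d, var_d) * p_dirty
    post_c = gaussian(reflectance, mu_c, var_c) * (1 - p_dirty)
    return "dirty" if post_d > post_c else "clean"
```

Designing the illumination so that the two class densities overlap as little as possible is exactly what drives the low misclassification probability reported in the paper.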

  3. Software development to support sensor control of robot arc welding

    NASA Technical Reports Server (NTRS)

    Silas, F. R., Jr.

    1986-01-01

    The development of software for a Digital Equipment Corporation MINC-23 laboratory computer to provide the functions of a workcell host computer for Space Shuttle Main Engine (SSME) robotic welding is documented. Routines were written to transfer robot programs between the MINC and an Advanced Robotic Cyro 750 welding robot. Other routines provide advanced program editing features, while additional software allows communication with a remote computer-aided design system. Access to special robot functions was provided to allow advanced control of weld seam tracking and process control for future development programs.

  4. Tool actuation and force feedback on robot-assisted microsurgery system

    NASA Technical Reports Server (NTRS)

    Das, Hari (Inventor); Ohm, Tim R. (Inventor); Boswell, Curtis D. (Inventor); Steele, Robert D. (Inventor)

    2002-01-01

    An input control device with force sensors is configured to sense hand movements of a surgeon performing a robot-assisted microsurgery. The sensed hand movements actuate a mechanically decoupled robot manipulator. A microsurgical manipulator, attached to the robot manipulator, is activated to move small objects and perform microsurgical tasks. A force-feedback element coupled to the robot manipulator and the input control device provides the input control device with an amplified sense of touch in the microsurgical manipulator.

  5. Application of historical mobility testing to sensor-based robotic performance

    NASA Astrophysics Data System (ADS)

    Willoughby, William E.; Jones, Randolph A.; Mason, George L.; Shoop, Sally A.; Lever, James H.

    2006-05-01

    The U.S. Army Engineer Research and Development Center (ERDC) has conducted on-/off-road experimental field testing with full-sized and scale-model military vehicles for more than fifty years. Some 4000 acres of local terrain are available for tailored field evaluations or verification/validation of future robotic designs in a variety of climatic regimes. Field testing and data collection procedures, as well as techniques for quantifying terrain in engineering terms, have been developed and refined into algorithms and models for predicting vehicle-terrain interactions and the resulting forces or speeds of military-sized vehicles. Based on recent experiments with Matilda, Talon, and Pacbot, these predictive capabilities appear to be relevant to most robotic systems currently in development. Utilization of current testing capabilities with sensor-based vehicle drivers, or use of the procedures for terrain quantification from sensor data, would immediately apply some fifty years of historical knowledge to the development, refinement, and implementation of future robotic systems. Additionally, translation of sensor-collected terrain data into engineering terms would allow assessment of robotic performance prior to deployment of the actual system and ensure maximum system performance in the theater of operation.

  6. Methods and Apparatus for Autonomous Robotic Control

    NASA Technical Reports Server (NTRS)

    Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)

    2017-01-01

    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.

  7. Robotic Thumb Assembly

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Goza, S. Michael (Inventor)

    2013-01-01

    An improved robotic thumb for a robotic hand assembly is provided. According to one aspect of the disclosure, improved tendon routing in the robotic thumb provides control of four degrees of freedom with only five tendons. According to another aspect of the disclosure, one of the five degrees of freedom of a human thumb is replaced in the robotic thumb with a permanent twist in the shape of a phalange. According to yet another aspect of the disclosure, a position sensor includes a magnet having two portions shaped as circle segments with different center points. The magnet provides a linearized output from a Hall effect sensor.

  8. Mathematical Modeling Of The Terrain Around A Robot

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1992-01-01

    In a conceptual system for modeling the terrain around an autonomous mobile robot, the representation of terrain used for control is separated from the representation provided by the sensors. The concept takes the motion-planning system out from under the constraints imposed by the discrete spatial intervals of a square terrain grid. The separation allows the sensing and motion-controlling systems to operate asynchronously, facilitating the integration of new map and sensor data into the planning of motions.

  9. A learning controller for nonrepetitive robotic operation

    NASA Technical Reports Server (NTRS)

    Miller, W. T., III

    1987-01-01

    A practical learning control system is described which is applicable to complex robotic and telerobotic systems involving multiple feedback sensors and multiple command variables. In the controller, the learning algorithm is used to learn to reproduce the nonlinear relationship between the sensor outputs and the system command variables over particular regions of the system state space, rather than learning the actuator commands required to perform a specific task. The learned information is used to predict the command signals required to produce desired changes in the sensor outputs. The desired sensor output changes may result from automatic trajectory planning or may be derived from interactive input from a human operator. The learning controller requires no a priori knowledge of the relationships between the sensor outputs and the command variables. The algorithm is well suited for real time implementation, requiring only fixed point addition and logical operations. The results of learning experiments using a General Electric P-5 manipulator interfaced to a VAX-11/730 computer are presented. These experiments involved interactive operator control, via joysticks, of the position and orientation of an object in the field of view of a video camera mounted on the end of the robot arm.
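    The controller above learns a sensor-to-command mapping over regions of state space using table-based arithmetic. A minimal sketch in the spirit of a CMAC-style tiled table (a floating-point 1-D toy, not the fixed-point implementation used on the P-5 system) is:

```python
def make_cmac(n_tilings=4, tiles=16, lo=0.0, hi=1.0):
    """Tiny CMAC-style function approximator: several offset tilings of
    a 1-D state; prediction is the sum of one weight per tiling."""
    weights = [[0.0] * (tiles + 1) for _ in range(n_tilings)]
    span = (hi - lo) / tiles

    def active(x):
        """Index of the active tile in each (offset) tiling."""
        return [min(tiles, int((x - lo + t * span / n_tilings) / span))
                for t in range(n_tilings)]

    def predict(x):
        return sum(weights[t][i] for t, i in enumerate(active(x)))

    def train(x, target, lr=0.3):
        """LMS update: spread the prediction error over the active tiles."""
        err = target - predict(x)
        for t, i in enumerate(active(x)):
            weights[t][i] += lr * err / n_tilings

    return predict, train
```

After repeated presentations of sensor-output/command pairs, the table reproduces the mapping in the trained region without any a priori model, which is the property the abstract emphasizes.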

  10. EMG-Torque correction on Human Upper extremity using Evolutionary Computation

    NASA Astrophysics Data System (ADS)

    JL, Veronica; Parasuraman, S.; Khan, M. K. A. Ahamed; Jeba DSingh, Kingsly

    2016-09-01

    There have been many studies indicating that the control system of a rehabilitative robot plays an important role in determining the outcome of the therapy process. Existing works have predicted the feedback signal in the controller based on the kinematics parameters and EMG readings of the upper limb's skeletal system. A kinematics- and kinetics-based control signal system is developed by reading the output of sensors such as a position sensor, an orientation sensor and an F/T (force/torque) sensor, and these readings are compared with the preceding measurement to decide on the amount of assistive force. Other works have incorporated the kinematics parameters to calculate the kinetics parameters via formulation and pre-defined assumptions. Nevertheless, these types of control signals analyze the movement of the upper limb only based on the movement of the upper joints; they do not anticipate the possibility of muscle plasticity. The focus of this paper is to use the kinematics parameters and EMG readings of the skeletal system to predict the individual torques of the upper extremity's joints. The surface EMG signals are fed into different mathematical models so that these data can be trained through a Genetic Algorithm (GA) to find the best correlation between EMG signals and the torques acting on the upper limb's joints. The estimated torque obtained from the mathematical models is called the simulated output. The simulated output is then compared with the actual individual joint torque, which is calculated from the real-time kinematics parameters of the movement of the skeleton when the muscle cells are activated. The findings from this contribution are extended into the development of an active-control-signal-based controller for a rehabilitation robot.
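    How a GA can search for an EMG-to-torque correlation can be sketched with a deliberately tiny model: a linear gain/offset map fitted to synthetic data. The paper's actual mathematical models and muscle features are richer than this; everything below is illustrative:

```python
import random

def fit_emg_torque(emg, torque, pop_size=30, generations=60, seed=2):
    """Toy GA: evolve (gain, offset) of a linear EMG-to-torque model to
    minimise squared error against recorded joint torques."""
    rng = random.Random(seed)

    def error(ind):
        g, b = ind
        return sum((g * e + b - t) ** 2 for e, t in zip(emg, torque))

    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        parents = pop[: pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            g1, b1 = rng.choice(parents)
            g2, b2 = rng.choice(parents)
            children.append(((g1 + g2) / 2 + rng.gauss(0, 0.1),  # crossover
                             (b1 + b2) / 2 + rng.gauss(0, 0.1))) # + mutation
        pop = parents + children
    return min(pop, key=error)
```

The same loop applies unchanged to richer nonlinear models: only the individual's parameter vector and the `error` function change.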

  11. Mapping planetary caves with an autonomous, heterogeneous robot team

    NASA Astrophysics Data System (ADS)

    Husain, Ammar; Jones, Heather; Kannan, Balajee; Wong, Uland; Pimentel, Tiago; Tang, Sarah; Daftry, Shreyansh; Huber, Steven; Whittaker, William L.

    Caves on other planetary bodies offer sheltered habitat for future human explorers and numerous clues to a planet's past for scientists. While recent orbital imagery provides exciting new details about cave entrances on the Moon and Mars, the interiors of these caves are still unknown and not observable from orbit. Multi-robot teams offer unique solutions for exploring and modeling subsurface voids during precursor missions. Robot teams that are diverse in terms of size, mobility, sensing, and capability can provide great advantages, but this diversity, coupled with inherently distinct low-level behavior architectures, makes coordination a challenge. This paper presents a framework that consists of an autonomous frontier- and capability-based task generator, a distributed market-based strategy for coordinating and allocating tasks to the different team members, and a communication paradigm for seamless interaction between the different robots in the system. Robots have different sensors (in the representative robot team used for testing: 2D mapping sensors, 3D modeling sensors, or no exteroceptive sensors) and varying levels of mobility. Tasks are generated to explore, model, and take science samples. Based on an individual robot's capability and associated cost for executing a generated task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high-resolution offline modeling. The coordination approach has been field tested at a mock cave site with highly unstructured natural terrain, as well as an outdoor patio area. Initial results are promising for applicability of the proposed multi-robot framework to exploration and modeling of planetary caves.
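    The capability- and cost-based allocation step can be sketched as a single-round auction in which each task is awarded to the cheapest capable bidder. The task names, capability tags and costs below are made up for illustration; the paper's market strategy is distributed and iterative rather than this one-shot version:

```python
def allocate(tasks, robots):
    """Single-round auction: each task (mapped to a required capability)
    goes to the capable robot bidding the lowest cost for it."""
    assignment = {}
    for task, needed in tasks.items():
        bids = [(costs[task], name)
                for name, (caps, costs) in robots.items()
                if needed in caps and task in costs]
        if bids:
            assignment[task] = min(bids)[1]   # lowest (cost, name) wins
    return assignment
```

A robot with no exteroceptive sensors simply never bids on mapping or modeling tasks, which is how heterogeneous capability is respected without central bookkeeping.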

  12. Open core control software for surgical robots.

    PubMed

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Patients and doctors in the operating room are now surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (for example, removing a tumor while checking its location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but they remain closed to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would become possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. Moreover, most academic software development for surgical robots is "home-made" within individual research institutions and not open to the public; open-source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from actuators, sensors and various internal devices, and so cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. The OpenIGTLink protocol is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation under asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software. A virtual fixture is a well-known technique that acts as a "force guide", supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not itself part of the Open Core Control system; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. The surgical console then generates a reflection force when the operator tries to leave the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the robot with a 3D position sensor through the OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. This paper describes the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture. The software showed stable performance in high-level collaborative tasks and is intended to become a widely used platform for surgical robots. Safety is essential in the control software of such complex medical devices. It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304, and following these regulations requires a self-test environment. A test environment is therefore under development to test various sources of interference in the operating room, such as noise from an electric knife, while considering safety and test-environment regulations such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to contribute to it.
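    The reflection force of such a virtual fixture can be sketched as a spring that pushes the tool tip back whenever it leaves a pre-defined accessible region. The axis-aligned box and stiffness value below are illustrative; the real system defines the accessible area on the navigation system and transfers it to the robot:

```python
def fixture_force(tip, lo, hi, stiffness=200.0):
    """Axis-aligned virtual fixture: zero force while the tip is inside
    the accessible box [lo, hi]; outside, a spring-like reflection force
    pushes it back toward the nearest boundary."""
    force = []
    for p, l, h in zip(tip, lo, hi):
        if p < l:
            force.append(stiffness * (l - p))    # push back up/inward
        elif p > h:
            force.append(-stiffness * (p - h))   # push back down/inward
        else:
            force.append(0.0)                    # free motion inside
    return force
```

Rendering this force on the master console is what the operator feels as the "force guide" at the boundary of the pre-defined area.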

  13. Improving the position control of a two degrees of freedom robotic sensing antenna using fractional-order controllers

    NASA Astrophysics Data System (ADS)

    Feliu-Talegon, D.; Feliu-Batlle, V.

    2017-06-01

    Flexible links combined with force and torque sensors can be used to detect obstacles in mobile robotics, as well as for surface and object recognition. These devices, called sensing antennae, perform an active sensing strategy in which a servomotor system moves the link back and forth until it hits an object. At this instant, information about the motor angles, combined with the force and torque measurements, allows the positions of the hitting points to be calculated, which is valuable information about the object surface. In order to move the antenna fast and accurately, this article proposes a new closed-loop control for driving this flexible-link-based sensor. The control strategy is based on combining a feedforward term and a feedback phase-lag compensator of fractional order. We demonstrate that some drawbacks of the control of these sensing devices, such as the appearance of spillover effects when very fast positioning of the antenna tip is desired, and actuator saturation caused by high-frequency sensor noise, can be significantly reduced by using our newly proposed fractional-order controllers. We have applied these controllers to the position control of a prototype sensing antenna, and experiments have shown the improvements attained with this technique in the accurate and vibration-free motion of its tip (the fractional-order controller reduced the residual vibration to one tenth of that obtained with the integer-order controller).
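    The fractional-order phase-lag idea can be illustrated numerically: raising a first-order lag term to a power 0 < α < 1 scales its phase lag by α, which is the extra tuning freedom the designer gains. The compensator form, corner frequencies and gain below are arbitrary illustration values, not the antenna controller's actual design:

```python
import cmath
import math

def frac_lag(w, k=1.0, wb=1.0, wh=100.0, alpha=0.5):
    """Frequency response at angular frequency w of an illustrative
    fractional-order phase-lag compensator
        C(s) = k * ((1 + s/wh) / (1 + s/wb)) ** alpha,  wb < wh."""
    s = 1j * w
    return k * ((1 + s / wh) / (1 + s / wb)) ** alpha
```

Evaluating the response between the two corner frequencies shows both the attenuation of a lag network and the α-scaled phase, the two properties traded off when shaping the loop to avoid spillover and noise-induced saturation.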

  14. INL Generic Robot Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).

  15. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match is made 75% of the time during the experiment.
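    The ITD-based localization step reduces to finding the inter-channel lag that best aligns the two cochlea outputs. A minimal cross-correlation sketch (plain sample lists standing in for the spike-derived signals, and none of the paper's adaptive lag-to-azimuth learning) is:

```python
def best_lag(left, right, max_lag):
    """Return the delay d (in samples) such that right ~ left delayed by
    d, chosen by maximising the cross-correlation over candidate lags."""
    def corr(d):
        return sum(left[i - d] * right[i]
                   for i in range(len(right)) if 0 <= i - d < len(left))
    return max(range(-max_lag, max_lag + 1), key=corr)
```

The interaural time difference in seconds is then `best_lag(...) / sample_rate`, and azimuth follows from a head model; the adaptive part of the paper's algorithm learns that mapping from visual feedback during self-motion.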

  16. Tele-Supervised Adaptive Ocean Sensor Fleet

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Podnar, Gregg W.; Dolan, John M.; Hosler, Jeffrey C.; Ames, Troy J.

    2009-01-01

    The Tele-supervised Adaptive Ocean Sensor Fleet (TAOSF) is a multi-robot science exploration architecture and system that uses a group of robotic boats (the Ocean-Atmosphere Sensor Integration System, or OASIS) to enable in-situ study of ocean surface and subsurface characteristics and the dynamics of such ocean phenomena as coastal pollutants, oil spills, hurricanes, or harmful algal blooms (HABs). The OASIS boats are extended-deployment, autonomous ocean surface vehicles. The TAOSF architecture provides an integrated approach to multi-vehicle coordination and sliding human-vehicle autonomy. One feature of TAOSF is the adaptive re-planning of the activities of the OASIS vessels based on sensor input (smart sensing) and sensorial coordination among multiple assets. The architecture also incorporates Web-based communications that permit control of the assets over long distances and the sharing of data with remote experts. Autonomous hazard and assistance detection allows the automatic identification of hazards that require human intervention to ensure the safety and integrity of the robotic vehicles, or of science data that require human interpretation and response. Also, the architecture is designed for science analysis of acquired data in order to perform an initial onboard assessment of the presence of specific science signatures of immediate interest. TAOSF integrates and extends five subsystems developed by the participating institutions: Emergent Space Technologies, Wallops Flight Facility, NASA's Goddard Space Flight Center (GSFC), Carnegie Mellon University, and the Jet Propulsion Laboratory (JPL). The OASIS Autonomous Surface Vehicle (ASV) system, which includes the vessels as well as the land-based control and communications infrastructure developed for them, controls the hardware of each platform (sensors, actuators, etc.), and also provides a low-level waypoint navigation capability.
The Multi-Platform Simulation Environment from GSFC is a surrogate for the OASIS ASV system and allows for independent development and testing of higher-level software components. The Platform Communicator acts as a proxy for both actual and simulated platforms. It translates platform-independent messages from the higher control systems to the device-dependent communication protocols. This enables the higher-level control systems to interact identically with heterogeneous actual or simulated platforms.

  17. Navigation system for autonomous mapper robots

    NASA Astrophysics Data System (ADS)

    Halbach, Marc; Baudoin, Yvan

    1993-05-01

    This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within a probabilistic obstacle/free-space classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is treated. The ultrasonic sensors are controlled by the path-planning module. The third part concerns path planning, and a simulation of a wheeled robot is also presented.
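    Probabilistic obstacle/free-space grids of this kind are typically maintained with per-cell Bayesian updates. A minimal log-odds sketch for a single cell is shown below; the inverse-sensor-model probabilities are illustrative, not the paper's calibrated ultrasonic model:

```python
import math

def update_cell(prob, hit, p_hit=0.7, p_miss=0.4):
    """Log-odds Bayes update of one occupancy cell from one sonar
    reading: 'hit' means the cell fell in the echo region, otherwise
    the beam passed through the cell freely."""
    l = math.log(prob / (1 - prob))                  # to log-odds
    l += (math.log(p_hit / (1 - p_hit)) if hit
          else math.log(p_miss / (1 - p_miss)))      # evidence term
    return 1 / (1 + math.exp(-l))                    # back to probability
```

Because updates are additive in log-odds, readings arriving asynchronously from several sensors fuse into the same grid without any ordering constraints, which suits the spatial and temporal fusion described above.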

  18. The MITy micro-rover: Sensing, control, and operation

    NASA Technical Reports Server (NTRS)

    Malafeew, Eric; Kaliardos, William

    1994-01-01

    The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.

  19. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints or the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
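    At its core, the kind of minimum-variance combination performed by a KF or MOIFA stage weights each sensor inversely to its error variance. A scalar sketch (one coordinate, two independent unbiased estimates; the paper's filters operate on full pose states) is:

```python
def fuse(z1, var1, z2, var2):
    """Minimum-variance fusion of two independent scalar estimates:
    weight each inversely to its variance; the fused variance is
    smaller than either input variance."""
    w1 = var2 / (var1 + var2)
    fused = w1 * z1 + (1 - w1) * z2
    fused_var = var1 * var2 / (var1 + var2)
    return fused, fused_var
```

This is why combining the visual sensor's position with the angle sensor's orientation can beat either sensor alone: the fused variance is always below the smaller of the two input variances.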

  20. A situated reasoning architecture for space-based repair and replace tasks

    NASA Technical Reports Server (NTRS)

    Bloom, Ben; Mcgrath, Debra; Sanborn, Jim

    1989-01-01

    Space-based robots need low-level control for collision detection and avoidance, short-term load management, fine-grained motion, and other physical tasks. In addition, higher-level control is required to focus strategic decision making as missions are assigned and carried out. Reasoning and control must be responsive to ongoing changes in the environment. Research aimed at bridging the gap between high-level artificial intelligence (AI) planning techniques and task-level robot programming for telerobotic systems is described. Situated reasoning is incorporated into AI and robotics systems in order to coordinate a robot's activity within its environment. An integrated system under development in a component maintenance domain is described. It is geared towards replacing worn and/or failed Orbital Replacement Units (ORUs) designed for use aboard NASA's Space Station Freedom, based on the collection of components available at a given time. High-level control reasons in component space in order to maximize the number of operational component-cells over time, while the task level controls sensors and effectors, detects collisions, and carries out pick-and-place tasks in physical space. Situated reasoning is used throughout the system to cope with component failures, imperfect information, and unexpected events.

  1. An MRI-Guided Telesurgery System Using a Fabry-Perot Interferometry Force Sensor and a Pneumatic Haptic Device.

    PubMed

    Su, Hao; Shang, Weijian; Li, Gang; Patel, Niravkumar; Fischer, Gregory S

    2017-08-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The slave robot consists of a piezoelectrically actuated 6-degree-of-freedom (DOF) robot for needle placement with an integrated fiber optic force sensor (1-DOF axial force measurement) using the Fabry-Perot interferometry (FPI) sensing principle; it is configured to operate inside the bore of the MRI scanner during imaging. By leveraging the advantages of pneumatic and piezoelectric actuation in force and position control respectively, we have designed a pneumatically actuated master robot (haptic device) with strain gauge based force sensing that is configured to operate the slave from within the scanner room during imaging. The slave robot follows the insertion motion of the haptic device while the haptic device displays the needle insertion force as measured by the FPI sensor. Image interference evaluation demonstrates that the telesurgery system presents a signal to noise ratio reduction of less than 17% and less than 1% geometric distortion during simultaneous robot motion and imaging. Teleoperated needle insertion and rotation experiments were performed to reach 10 targets in a soft tissue-mimicking phantom with 0.70 ± 0.35 mm Cartesian space error.

  2. Cognitive patterns: giving autonomy some context

    NASA Astrophysics Data System (ADS)

    Dumond, Danielle; Stacy, Webb; Geyer, Alexandra; Rousseau, Jeffrey; Therrien, Mike

    2013-05-01

    Today's robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low, detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience and, therefore, do not become better with time or apply previous knowledge to new situations. With multi-robot teams, human operators, in addition to managing the low-level details of navigation and sensor management while operating single robots, are also required to manage inter-robot interactions. To make the most use of robots in combat environments, it will be necessary to have the capability to assign them new missions (including providing them context information), and to have them report information about the environment they encounter as they proceed with their mission. The Cognitive Patterns Knowledge Generation system (CPKG) has the ability to connect to various knowledge-based models, multiple sensors, and to a human operator. The CPKG system comprises three major internal components: Pattern Generation, Perception/Action, and Adaptation, enabling it to create situationally-relevant abstract patterns, match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the mechanisms used for visual perception in the brain, and generate new abstract patterns. The CPKG allows the operator to focus on things other than the operation of the robot(s).

  3. Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia.

    PubMed

    Andreasen Struijk, Lotte N S; Egsgaard, Line Lindhardt; Lontis, Romulus; Gaihede, Michael; Bentsen, Bo

    2017-11-06

    For an individual with tetraplegia, assistive robotic arms provide a potentially invaluable opportunity for rehabilitation. However, there is a lack of available control methods to allow these individuals to fully control the assistive arms. Here we show that it is possible for an individual with tetraplegia to use the tongue to fully control all 14 movements of an assistive robotic arm in a three-dimensional space using a wireless intraoral control system, thus allowing for numerous activities of daily living. We developed a tongue-based robotic control method incorporating a multi-sensor inductive tongue interface. One able-bodied individual and one individual with tetraplegia performed a proof-of-concept study by controlling the robot with their tongue using direct actuator control and endpoint control, respectively. After 30 min of training, the able-bodied participant tongue-controlled the assistive robot to pick up a roll of tape in 80% of the attempts. Further, the individual with tetraplegia succeeded in fully tongue-controlling the assistive robot to reach for and touch a roll of tape in 100% of the attempts and to pick up the roll in 50% of the attempts. Furthermore, she controlled the robot to grasp a bottle of water and pour its contents into a cup: her first functional action in 19 years. To our knowledge, this is the first time that an individual with tetraplegia has been able to fully control an assistive robotic arm using a wireless intraoral tongue interface. The tongue interface used to control the robot is currently available for control of computers and of powered wheelchairs, and the robot employed in this study is also commercially available. Therefore, the presented results may translate into available solutions within a reasonable time.

  4. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensor-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes and compares several competing sonar-based obstacle avoidance techniques. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class triangulates using fixed, artificial landmarks. A third class builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
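The landmark-triangulation class of approaches admits a compact sketch. The following is a generic illustration, not any particular system from the survey: given absolute bearings to two fixed artificial landmarks at known positions, the robot's position is the intersection of the two bearing lines. All coordinates and bearings below are invented for the example.

```python
import math

def triangulate(l1, b1, l2, b2):
    """Locate a robot from absolute bearings b1, b2 (radians, world frame)
    measured from the robot toward two known landmarks l1, l2 = (x, y).

    The robot lies on the line through each landmark along the bearing
    direction: robot = l_i - t_i * (cos b_i, sin b_i). Solving the
    resulting 2x2 linear system (Cramer's rule) intersects the lines.
    """
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve l1 - t1*d1 = l2 - t2*d2 for t1.
    a11, a12 = d1[0], -d2[0]
    a21, a22 = d1[1], -d2[1]
    r1, r2 = l1[0] - l2[0], l1[1] - l2[1]
    det = a11 * a22 - a12 * a21  # zero iff the bearings are parallel
    t1 = (r1 * a22 - r2 * a12) / det
    return (l1[0] - t1 * d1[0], l1[1] - t1 * d1[1])
```

With a third landmark the pose becomes over-determined, which allows a least-squares estimate and rejection of a bad bearing measurement.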

  5. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    PubMed

    Bourbakis, N G

    1997-01-01

This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic, known or unknown navigation space. In previous work by X. Grossman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they move anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability to detect other moving objects in the same free navigation space and to determine the perceived size, velocity, and direction of those objects. Under these assumptions, each robot needs a traffic priority language enabling it to make decisions during navigation and to avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic priority alphabet and a set of rules which compose patterns of corridors for the application of the traffic priority rules.

  6. Application of ultrasonic sensor for measuring distances in robotics

    NASA Astrophysics Data System (ADS)

    Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.

    2018-05-01

Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects that is an alternative to technical vision. Humanoid robots, like robots of other types, are first equipped with sensory systems similar to the human senses. However, this approach is not enough. All possible types and kinds of sensors should be used, including those similar to the senses of other animals (in particular, echolocation in dolphins and bats), as well as sensors that have no analogue in the wild. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder based on the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison, along with a subroutine for working with the sensor.
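The HC-SR04 reports range through the width of an echo pulse: the module emits a 40 kHz burst and holds its echo pin high for the round-trip time of flight. The conversion is simple arithmetic; the sketch below illustrates it in Python (the firmware in the paper targets the STM32VLDISCOVERY in C, and the speed-of-sound constant here is a nominal room-temperature value).

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def echo_to_distance_m(echo_pulse_s: float) -> float:
    """Convert an HC-SR04 echo pulse width (seconds) to distance (metres).

    The pulse spans the trip to the obstacle and back, so the one-way
    distance is half the time of flight times the speed of sound.
    """
    return echo_pulse_s * SPEED_OF_SOUND_M_S / 2.0
```

For example, an echo pulse of about 5.83 ms corresponds to roughly one metre; readings beyond the module's rated range (a few metres) should be discarded as timeouts.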

  7. Flexible mobile robot system for smart optical pipe inspection

    NASA Astrophysics Data System (ADS)

    Kampfer, Wolfram; Bartzke, Ralf; Ziehl, Wolfgang

    1998-03-01

Pipe damage can be inspected and graded with TV technology available on the market: remotely controlled vehicles carry a TV camera through pipes. With this approach, however, diagnostic failures cannot be avoided, depending on the experience and capability of the operator. The classification of damage requires knowledge of its exact geometrical dimensions, such as the width and depth of cracks, fractures and defective connections. Within the framework of a joint R&D project, a sensor-based pipe inspection system named RODIAS has been developed by two partners from industry and a research institute. It consists of a remotely controlled mobile robot which carries intelligent sensors for on-line sewer inspection. The sensor unit combines a 3D optical sensor and a laser distance sensor. The laser distance sensor is integrated into the optical system of the camera and can measure the distance between camera and object. The angle of view can be determined from the position of the pan-and-tilt unit. With coordinate transformations it is possible to calculate the spatial coordinates of every point of the video image, so the geometry of an object can be described exactly. The company Optimess has developed TriScan32, a special software package for pipe condition classification. The user can start complex measurements of profiles, pipe displacements or crack widths simply by pressing a push-button. The measuring results are stored in a database together with other data such as verbal damage descriptions and digitized images.
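The coordinate transformation described above, from a laser range plus the pan/tilt angles to spatial coordinates, amounts to a spherical-to-Cartesian conversion. The sketch below uses an assumed axis convention for illustration; it is not taken from the RODIAS system itself.

```python
import math

def camera_point_to_xyz(distance_m, pan_deg, tilt_deg):
    """Map a (range, pan, tilt) measurement to Cartesian camera coordinates.

    Convention (an assumption for this sketch): pan rotates about the
    vertical axis, tilt is elevation above the horizontal plane, and
    (pan, tilt) = (0, 0) looks along the +x axis.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = distance_m * math.cos(tilt) * math.cos(pan)
    y = distance_m * math.cos(tilt) * math.sin(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z
```

Differencing two such points (e.g. the two edges of a crack) then yields a metric width regardless of where the crack sits in the image.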

  8. Fused Smart Sensor Network for Multi-Axis Forward Kinematics Estimation in Industrial Robots

    PubMed Central

    Rodriguez-Donate, Carlos; Osornio-Rios, Roque Alfredo; Rivera-Guillen, Jesus Rooney; de Jesus Romero-Troncoso, Rene

    2011-01-01

Flexible manipulator robots have wide industrial applications. Robot performance requires adequate sensing of position and orientation, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, but these cannot detect some mechanical deformations that decrease the accuracy of the robot's position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents online measurement of the joints' angular positions and online forward kinematics estimation. The contribution of this work is a fused smart sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. To evaluate the smart sensor network's performance, three real-operation-oriented paths are executed and monitored in a 6-degree-of-freedom robot. PMID:22163850
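As a toy illustration of the encoder/accelerometer fusion idea (not the paper's FPGA implementation), a scalar Kalman filter can fuse two measurement streams of one joint angle. The noise variances below are assumed values; a precise encoder is trusted more than a noisy accelerometer-derived angle.

```python
def fuse_joint_angle(encoder_meas, accel_meas,
                     r_enc=1e-4, r_acc=1e-2, q=1e-5):
    """Scalar Kalman filter fusing encoder and accelerometer-derived
    measurements of one joint angle (a sketch with assumed variances).

    Each step predicts with a static joint model (process noise q), then
    applies two sequential measurement updates, one per sensor.
    """
    x, p = encoder_meas[0], 1.0           # initial state and covariance
    estimates = []
    for z_e, z_a in zip(encoder_meas, accel_meas):
        p += q                            # predict
        for z, r in ((z_e, r_enc), (z_a, r_acc)):
            k = p / (p + r)               # Kalman gain
            x += k * (z - x)              # measurement update
            p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

Because r_enc is two orders of magnitude smaller than r_acc, the fused estimate stays close to the encoder while still absorbing low-frequency information (e.g. deformation) visible only to the accelerometer.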

  9. Human-like Compliance for Dexterous Robot Hands

    NASA Technical Reports Server (NTRS)

    Jau, Bruno M.

    1995-01-01

This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the secondary function of human muscle, which is to control joint stiffness: AEC is implemented by servo-controlling the stiffness of the joint drive train. The control strategy for compliant joints in teleoperation is described. It enables automatic hybrid position and force control by utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable to autonomous robot control as well. Active compliance enables dual-arm manipulation and human-like soft grasping by the robot hand, and opens the way to many new robotics applications.

  10. A magneto-sensitive skin for robots in space

    NASA Technical Reports Server (NTRS)

    Chauhan, D. S.; Dehoff, P. H.

    1991-01-01

The development of a robot arm proximity-sensing skin that can sense intruding objects is described. The purpose of the sensor is to prevent the robot from colliding with objects in space, including human beings. Eventually a tri-mode system is envisioned, including proximity, tactile, and thermal sensing. To date, the primary emphasis has been on the proximity sensor, which evolved from a design based on magneto-inductive principles to the current design based on a capacitive-reflector system. The capacitive sensing element, backed by a reflector driven at the same voltage and in phase with the sensor, is used to reflect field lines away from the grounded robot toward the intruding object. This results in an increased sensing range of up to 12 in. with the reflector on, compared with only 1 in. with it off. It is believed that this design advances the state of the art in capacitive sensor performance.

  11. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Han, Baoling

    2016-11-01

The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance using a binocular stereo vision sensor and a self-controlled 3D lidar, integrated with modified ant colony optimization path planning, to realize reconstruction of the environmental map. Because the working conditions of a mobile robot are complex, the result of 3D reconstruction with a single binocular sensor is unsatisfactory when feature points are few and lighting is poor. Therefore, this system integrates the Bumblebee2 stereo vision sensor and the lidar sensor to detect the 3D point cloud of environmental obstacles. This paper proposes sensor information fusion to rebuild the environment map: first, obstacles are detected from the lidar data and the visual data separately; then the two detections are fused to obtain a more complete and more accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm. It analyzes the advantages and disadvantages of ant colony optimization and their underlying causes in depth, and then improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed, and poor search results. The experiment processes images and programs the motor drive under Matlab and Visual Studio and establishes a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed on ROS and a Linux simulation platform.
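The pheromone feedback that ant colony planners rely on can be shown with a deliberately tiny example: two candidate routes rather than a grid. This is an illustration of the mechanism only, not the modified planner from the paper, and all parameters are invented.

```python
import random

def aco_two_paths(len_a=1.0, len_b=3.0, n_ants=30, n_iters=50,
                  evaporation=0.5, seed=0):
    """Minimal ant colony optimization over a two-path choice.

    Ants pick a path with probability proportional to its pheromone
    level; each ant deposits pheromone inversely proportional to its
    path length, so shorter routes are reinforced. Evaporation keeps
    old pheromone from dominating. Returns the winning path label.
    """
    random.seed(seed)
    tau = {"A": 1.0, "B": 1.0}            # initial pheromone levels
    for _ in range(n_iters):
        deposits = {"A": 0.0, "B": 0.0}
        for _ in range(n_ants):
            p_a = tau["A"] / (tau["A"] + tau["B"])
            path = "A" if random.random() < p_a else "B"
            length = len_a if path == "A" else len_b
            deposits[path] += 1.0 / length  # shorter path, bigger deposit
        for k in tau:                       # evaporate, then deposit
            tau[k] = (1.0 - evaporation) * tau[k] + deposits[k]
    return max(tau, key=tau.get)
```

The paper's improvements target exactly the weaknesses visible even here: without tuning, the colony can lock onto an early favourite (a local optimum) and convergence can be slow.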

  12. Robotic and artificial intelligence for keyhole neurosurgery: the ROBOCAST project, a multi-modal autonomous path planner.

    PubMed

    De Momi, E; Ferrigno, G

    2010-01-01

The robot and sensors integration for computer-assisted surgery and therapy (ROBOCAST) project (FP7-ICT-2007-215190) is co-funded by the European Union within the Seventh Framework Programme in the field of information and communication technologies. The ROBOCAST project focuses on robot- and artificial-intelligence-assisted keyhole neurosurgery (tumour biopsy and local drug delivery along straight or turning paths). The goal of this project is to assist surgeons with a robotic system controlled by an intelligent high-level controller (HLC) able to gather and integrate information from the surgeon, from diagnostic images, and from an array of on-field sensors. The HLC integrates pre-operative and intra-operative diagnostic data and measurements, intelligence augmentation, multiple-robot dexterity, and multiple sensory inputs in a closed-loop cooperating scheme, including a smart interface for improved haptic immersion and integration. This paper, after describing the overall architecture, focuses on the intelligent trajectory planner based on risk estimation and human criticism. The current status of development is reported, and first tests on the planner are shown using a real image stack and a risk-descriptor phantom. The advantage of a fuzzy risk description is that the knowledge can be updated in the field without the intervention of a knowledge engineer.

  13. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications †

    PubMed Central

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori

    2017-01-01

    Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954

  15. Insect-Inspired Optical-Flow Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven

    2005-01-01

Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: The concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
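The motion cue a mouse chip extracts can be approximated by brute-force block matching: slide one frame over the other and keep the displacement with the lowest mismatch. Below is a one-dimensional toy version of that idea, an illustration rather than the chip's actual correlation hardware.

```python
def estimate_shift(frame0, frame1, max_shift=3):
    """Estimate the integer displacement between two 1-D intensity
    profiles by minimising the mean absolute difference over candidate
    shifts (a crude stand-in for an optical-flow computation).
    """
    best_shift, best_err = 0, float("inf")
    n = len(frame0)
    for s in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:                 # compare only the overlap
                err += abs(frame0[i] - frame1[j])
                count += 1
        err /= count
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

Dividing the recovered shift by the frame interval gives an image velocity; in the robots above, such velocities (scaled by height) drive speed and altitude control.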

  16. Cooperative crossing of traffic intersections in a distributed robot system

    NASA Astrophysics Data System (ADS)

    Rausch, Alexander; Oswald, Norbert; Levi, Paul

    1995-09-01

In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, as a shared resource, forces the robot to coordinate its actions with those of other robots, e.g. by means of negotiations. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario which combines aspects of active vision and cooperation illustrates our approach: two vision-guided vehicles are faced with line following, intersection recognition and negotiation.

  17. Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1989-09-01

The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor produced errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental data and the test of coordinated operation of the equipment provide an understanding of the system's characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. Calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.

  18. Electromagnetic Sensor Arrays for Nondestructive Evaluation and Robot Control.

    DTIC Science & Technology

    1985-10-31

[Indexed excerpt; full abstract not available.] The sensor does not rely on flux change for its sensitivity; instead, it measures the magnetic field itself by using the magnetoresistive effect in a thin film of permalloy (NiFe). A survey of inductive sensor arrays covered devices employing high-permeability magnetic films as well as those based on magnetoresistance, including thin-film magnetic heads and thin-film magnetoresistive heads.

  19. Adaptive and predictive control of a simulated robot arm.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo

    2013-06-01

In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop which avoids dealing with the motor error, or distal error, problem. The presented approach learns motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process, using an automatic incremental learning model, the locally weighted projection regression (LWPR) algorithm. LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees precise, compliant, and stable control during the manipulation of objects. This approach therefore efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales for robot arms with a high number of degrees of freedom (DOFs) using a simulated model of a robot arm of the new generation of lightweight robots (LWRs).

  20. Path planning in GPS-denied environments via collective intelligence of distributed sensor networks

    NASA Astrophysics Data System (ADS)

    Jha, Devesh K.; Chattopadhyay, Pritthi; Sarkar, Soumik; Ray, Asok

    2016-05-01

This paper proposes a framework for reactive goal-directed navigation without global positioning facilities in unknown dynamic environments. A mobile sensor network is used for localising regions of interest for path planning of an autonomous mobile robot. The underlying theory is an extension of a generalised gossip algorithm that has recently been developed in a language-measure-theoretic setting. The algorithm propagates local decisions of target detection over a mobile sensor network and thus generates a belief map for the detected target over the network. In this setting, an autonomous mobile robot may communicate only with a few mobile sensing nodes in its own neighbourhood and localise itself relative to the communicating nodes with bounded uncertainties. The robot uses the mobile sensors' beliefs to generate a sequence of way-points leading to a possible goal. The estimated way-points are used by a sampling-based motion planning algorithm to generate feasible trajectories for the robot. The proposed concept has been validated by numerical simulation on a mobile sensor network test-bed and a Dubins car-like robot.

  1. A Mobile Sensor Network System for Monitoring of Unfriendly Environments.

    PubMed

    Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo

    2008-11-14

Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are too dangerous or inaccessible to humans or large robots, and there are many challenges in deploying and maintaining wireless sensor networks in those unfriendly environments. This paper presents a mobile sensor network system to address this problem. The system architecture, the mobile node design, the basic behaviors and the advanced network capabilities are investigated in turn. A wheel-based robotic node architecture is proposed that can add controlled mobility to wireless sensor networks. A testbed including several prototype nodes has also been created to validate the basic functions of the proposed mobile sensor network system. Motion performance tests have been carried out to obtain the positioning errors and a power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can be distributed evenly in previously unknown environments. The system provides powerful support for network deployment and maintenance and can ensure that the sensor network works properly in unfriendly environments.

  2. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

[Indexed excerpt; full abstract not available.] The indexed fragments list vehicle control functions (engine starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning, etc.) and sensor payloads (sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger, and forward/stereo/rear video camera selection) relevant to robotic vehicle communications interoperability.

  3. Evolutionary online behaviour learning and adaptation in real robots

    PubMed Central

    Correia, Luís; Christensen, Anders Lyhne

    2017-01-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130

  4. Parallel robot for micro assembly with integrated innovative optical 3D-sensor

    NASA Astrophysics Data System (ADS)

    Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer

    2002-10-01

Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High-accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase the accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using a single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of the workpiece and gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.

  5. Integrated multi-sensor package (IMSP) for unmanned vehicle operations

    NASA Astrophysics Data System (ADS)

    Crow, Eddie C.; Reichard, Karl; Rogan, Chris; Callen, Jeff; Seifert, Elwood

    2007-10-01

This paper describes recent efforts to develop integrated multi-sensor payloads for small robotic platforms, for improved operator situational awareness and ultimately for greater robot autonomy. The focus is on enhancements to perception through integration of electro-optic, acoustic, and other sensors for navigation and inspection. The goals are to provide easier control and operation of the robot through fusion of multiple sensor outputs, to improve interoperability of the sensor payload package across multiple platforms through the use of open standards and architectures, and to reduce integration costs by embedding sensor data processing and fusion within the sensor payload package. The solutions investigated in this project include: improved capture, processing and display of data from multiple, non-commensurate sensors; an extensible architecture to support plug-and-play of integrated sensor packages; built-in health, power and system status monitoring using embedded diagnostics/prognostics; sensor payload integration into standard product forms for optimized size, weight and power; and the use of the open Joint Architecture for Unmanned Systems (JAUS)/Society of Automotive Engineers (SAE) AS-4 interoperability standard. This project is in the first of its three years. This paper discusses the applicability of each of the solutions in terms of its projected impact on reducing operational time for the robot and teleoperator.

  6. A close inspection and vibration sensing aerial robot for steel structures with an EPM-based landing device

    NASA Astrophysics Data System (ADS)

    Takeuchi, Kazuya; Masuda, Arata; Akahori, Shunsuke; Higashi, Yoshiyuki; Miura, Nanako

    2017-04-01

This paper proposes an aerial robot that can land on and cling to a steel structure using electric permanent magnets, to behave as a vibration sensor probe for use in vibration-based structural health monitoring. In the last decade, structural health monitoring techniques have been studied intensively to tackle the serious social issue that much of the infrastructure in advanced countries is deteriorating. In the typical concept of structural health monitoring, vibration sensors such as accelerometers are installed in the structure to continuously collect the dynamic response of the operating structure and detect symptoms of structural damage. It is unreasonable, however, to permanently deploy sensors on numerous structures, because most structures, except those of primary importance, do not need continuous measurement and evaluation. In this study, the aerial robot plays the role of a mobile, detachable sensor unit. Design guidelines for the aerial robot that performs the vibration measurement are derived from an analysis model of the robot. Experiments evaluate the frequency response function between the acceleration measured by the robot and the acceleration at the point where the robot adheres; the results show that the prototype robot can measure the acceleration of the host structure accurately up to 150 Hz.

  7. Thermal Image Sensing Model for Robotic Planning and Search

    PubMed Central

    Castro Jiménez, Lídice E.; Martínez-García, Edgar A.

    2016-01-01

    This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made passive IR visual sensor. The sensor's capability to detect radiation spectra was experimentally characterized. The sensor data were modeled with an exponential model that estimates distance as a function of IR image intensity, and a polynomial model that estimates temperature as a function of IR intensity. Both models are combined to deduce an exact nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation controller in reaching the goal while avoiding collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission: a sine function produces attractive accelerations toward the IR source, and a cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor scenarios illustrate the convenience and efficacy of the proposed approach. PMID:27509510
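
The attraction/repulsion scheme described in this record can be sketched as a potential-field style steering law. This is an illustrative interpretation, not the authors' exact formulation; the gains and the repulsion cutoff are assumed values:

```python
import math

def steering_acceleration(heading, goal_bearing, obstacle_bearing,
                          k_att=1.0, k_rep=0.5):
    """Toy potential-field steering: a sine of the bearing error to the
    heat source attracts, a cosine term centred on a detected obstacle
    repels. Returns a lateral acceleration command."""
    # Attractive term: zero when pointing at the goal, maximal at 90 deg.
    a_att = k_att * math.sin(goal_bearing - heading)
    # Repulsive term: strongest when the obstacle lies dead ahead,
    # ignored once the obstacle is behind the robot.
    err = obstacle_bearing - heading
    a_rep = -k_rep * math.cos(err) if abs(err) < math.pi / 2 else 0.0
    return a_att + a_rep
```

With the goal straight ahead and the obstacle behind, both terms vanish; with the obstacle dead ahead, the cosine term pushes the robot off its current course.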

  8. Observability-Based Guidance and Sensor Placement

    NASA Astrophysics Data System (ADS)

    Hinson, Brian T.

    Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.
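
One standard way to turn "measures of observability" into a computable design metric, in the spirit of the record above, is the empirical observability Gramian: perturb each initial state, simulate the measured outputs, and accumulate the output differences. A minimal sketch for a generic discrete-time model; the specific system and sensor functions are user-supplied placeholders, not the thesis's models:

```python
def empirical_gramian(step, measure, x0, eps=1e-4, horizon=50):
    """Empirical observability Gramian of a discrete-time system.
    step(x) -> next state; measure(x) -> list of sensor outputs.
    A larger, better-conditioned W means the initial state is easier
    to reconstruct from the measurement history."""
    n = len(x0)

    def output_traj(x):
        ys = []
        for _ in range(horizon):
            ys.append(measure(x))
            x = step(x)
        return ys

    # Output trajectories for +/- perturbations of each initial state.
    plus, minus = [], []
    for i in range(n):
        xp, xm = list(x0), list(x0)
        xp[i] += eps
        xm[i] -= eps
        plus.append(output_traj(xp))
        minus.append(output_traj(xm))

    # W[i][j] = sum over time and outputs of dy_i * dy_j / (2*eps)^2.
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(horizon):
                for m in range(len(plus[i][k])):
                    s += ((plus[i][k][m] - minus[i][k][m])
                          * (plus[j][k][m] - minus[j][k][m]))
            W[i][j] = s / (2 * eps) ** 2
    return W
```

A sensor placement search would then score each candidate configuration by, for example, the smallest eigenvalue or the condition number of W.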

  9. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  10. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877
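
In its simplest scalar form, the minimum-variance combination of two independent estimates mentioned in the two records above is inverse-variance weighting. A sketch; the sensor variances in the example are assumed values, not the paper's:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance fusion of two independent scalar estimates
    (e.g. head yaw from an inertial sensor and from vision).
    Weights are inversely proportional to each sensor's variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # never larger than either input variance
    return fused, fused_var

# Example: IMU says 10 deg (variance 4), vision says 14 deg (variance 1).
angle, var = fuse(10.0, 4.0, 14.0, 1.0)  # vision dominates: 13.2 deg, var 0.8
```

The fused variance is smaller than both input variances, which is why combining a noisy IMU with a noisy camera still improves on either alone.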

  11. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    PubMed Central

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot equipped with a particular sensor moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space in which it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930

  12. Estimation of visual maps with a robot network equipped with vision sensors.

    PubMed

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot equipped with a particular sensor moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space in which it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  13. Extending the Capability of Mars Umbilical Technology Demonstrator

    NASA Technical Reports Server (NTRS)

    Houshangi, Nasser

    2001-01-01

    The objective of this project is to expand the capabilities of the Mars Umbilical Technology Demonstrator (MUTD). The MUTD provides electrical power and fiber-optic data cable connections between two simulated Mars vehicles 1000 m apart. The wheeled mobile robot Omnibot is used as the mobile base for the system. The mating umbilical plate is mounted on a Cartesian robot, which is installed on the Omnibot mobile base. It is desirable to provide the operator controlling the Omnibot with the distance and direction to the target. In this report, an approach for finding the position and orientation of the mobile robot using inertial sensors and beacons is investigated. The first phase of the project considered the Omnibot on a flat surface; to deal with the uneven Mars environment, the orientation as well as the position needs to be controlled. During local positioning, information received from four ultrasonic sensors installed at the four corners of the mating plate is used to identify the position of the mating plate and mate the umbilical plates autonomously. The proposed work is a continuation of the principal investigator's research effort as a participant in the 1999 NASA/ASEE Summer Faculty Fellowship Program.

  14. Dexterous programmable robot and control system

    NASA Astrophysics Data System (ADS)

    Engler, Charles D., Jr.

    1995-09-01

    An anatomically correct, humanlike, mechanical arm and hand is provided that an operator can control to perform with the dexterity and compliance of a human hand. Being humanlike and robotic enhances the device's control and gripper dexterity. Control of the movement of the arm and hand is performed or guided by a 'teach glove' worn by the operator. As he or she performs some hand manipulation, a controller stores signals from sensors on the exoskeleton. The sensors monitor the operator's finger-joint movement positions. These values are later translated into actuator control signals for servomotors, eventually duplicating the operator's movement.

  15. Teaching and implementing autonomous robotic lab walkthroughs in a biotech laboratory through model-based visual tracking

    NASA Astrophysics Data System (ADS)

    Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois

    2010-01-01

    After utilizing robots for more than 30 years for classic industrial automation applications, service robots form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, has been shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products, located in the San Francisco bay area. The determined goal of the mobile manipulator is to support the off-shift staff to carry out completely autonomous or guided, remote controlled lab walkthroughs, which we implement utilizing a recent development of our computer vision group: OpenTL - an integrated framework for model-based visual tracking.

  16. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Warren

    2004-06-01

    There is significant motivation to provide robotic systems with improved autonomy as a means to significantly accelerate deactivation and decommissioning (D&D) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean-space through a perspective (range dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that targets compensating for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve predictable response by the robotic system operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). The developed methodology has been reported in numerous peer-reviewed publications and the performance and enabling capabilities of the resulting visual servo control modules have been demonstrated on mobile robot and robot manipulator platforms.

  17. A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2018-01-10

    In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low luminance and slippery environments is proposed, for conditions in which conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization; instead, the interior space structure from an image and the robot orientation were assessed. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and was implemented on a consumer robot. Experiments were conducted at a low illumination level of 0.1 lx in a carpeted environment, with the robot traversing a 1.5 × 2.0 m square trajectory 20 times. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error was 0.8 m and the maximum orientation error was within 1.0°.

  18. A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion

    PubMed Central

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il “Dan”

    2018-01-01

    In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low luminance and slippery environments is proposed, for conditions in which conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization; instead, the interior space structure from an image and the robot orientation were assessed. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and was implemented on a consumer robot. Experiments were conducted at a low illumination level of 0.1 lx in a carpeted environment, with the robot traversing a 1.5 × 2.0 m square trajectory 20 times. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error was 0.8 m and the maximum orientation error was within 1.0°. PMID:29320414
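
A dead-reckoning sketch of the fusion idea in these two records: translation from the encoder/optical-flow pair, heading from the gyroscope. The slip-detection logic and threshold here are simplified assumptions, not the paper's filter:

```python
import math

def update_pose(x, y, theta, d_encoder, d_aofs, d_gyro_theta,
                slip_threshold=0.02):
    """Advance a 2D pose. If the wheel encoder and the afocal optical
    flow sensor (AOFS) disagree by more than slip_threshold metres,
    assume wheel slip and trust the AOFS distance; otherwise average
    the two. Heading comes from integrating the gyroscope."""
    if abs(d_encoder - d_aofs) > slip_threshold:
        d = d_aofs                      # slip detected: encoders over-count
    else:
        d = 0.5 * (d_encoder + d_aofs)
    theta += d_gyro_theta
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    return x, y, theta

# Carpet slip: the encoder reports 0.10 m but the AOFS saw only 0.05 m.
pose = update_pose(0.0, 0.0, 0.0, 0.10, 0.05, 0.0)  # -> (0.05, 0.0, 0.0)
```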

  19. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. The approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained from the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  20. Cloud-Based Perception and Control of Sensor Nets and Robot Swarms

    DTIC Science & Technology

    2016-04-01

    distributed stream processing framework provides the necessary API and infrastructure to develop and execute such applications in a cluster of computation... streaming DDDAS applications based on challenges they present to the backend Cloud control system. (Figure 2: Parallel SLAM Application) Set of... the art deep learning-based object detectors can recognize among hundreds of object classes and this capability would be very useful for mobile

  1. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation

    PubMed Central

    Masmoudi, Mohamed Slim; Masmoudi, Mohamed

    2016-01-01

    This paper describes the design and implementation of a trajectory tracking controller using fuzzy logic for a mobile robot navigating indoor environments. Most previous works used two independent controllers, one for navigation and one for obstacle avoidance. The main contribution of this paper is that a single fuzzy controller handles both navigation and obstacle avoidance. The mobile robot is equipped with DC motors, nine infrared range (IR) sensors to measure distances to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories were simulated using MATLAB and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation times and travelled path. PMID:27688748
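
A single fuzzy controller covering both goal seeking and obstacle avoidance, as described above, can be sketched with triangular memberships and weighted-average defuzzification. The two-rule base, membership shapes, and output levels below are toy assumptions, far smaller than the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(goal_error, obstacle_dist):
    """One fuzzy controller for both navigation and obstacle avoidance.
    goal_error: bearing error to the goal in radians.
    obstacle_dist: nearest IR range reading in metres.
    Returns a turn-rate command in rad/s."""
    near = max(0.0, 1.0 - obstacle_dist / 0.5)   # degree of "obstacle near"
    on_course = tri(goal_error, -1.0, 0.0, 1.0)  # degree of "heading is fine"
    # Rule 1: IF obstacle near THEN turn hard away (+1.0 rad/s).
    # Rule 2: IF obstacle far AND off course THEN turn toward the goal.
    w1, u1 = near, 1.0
    w2 = (1.0 - near) * (1.0 - on_course)
    u2 = 0.8 if goal_error > 0 else -0.8
    total = w1 + w2
    return (w1 * u1 + w2 * u2) / total if total > 0 else 0.0
```

Because both concerns live in one rule base, the avoidance rule smoothly overrides goal seeking as an obstacle approaches, instead of switching between two separate controllers.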

  2. Hybrid position and orientation tracking for a passive rehabilitation table-top robot.

    PubMed

    Wojewoda, K K; Culmer, P R; Gallagher, J F; Jackson, A E; Levesley, M C

    2017-07-01

    This paper presents a real-time hybrid 2D position and orientation tracking system developed for an upper-limb rehabilitation robot. Designed to work on a table-top, the robot is intended to enable home-based upper-limb rehabilitative exercise for stroke patients. Estimates of the robot's position are computed by fusing data from two tracking systems, each utilizing a different sensor type: laser optical sensors and a webcam. Two laser optical sensors are mounted on the underside of the robot and track its motion relative to the surface on which it is placed. The webcam is positioned directly above the workspace, mounted on a fixed stand, and tracks the robot's position with respect to a fixed coordinate system. The optical sensors sample position data at a higher frequency than the webcam, and a position and orientation fusion scheme is proposed to fuse the data from the two tracking systems. The proposed fusion scheme is validated through an experimental set-up in which the rehabilitation robot is moved by a humanoid robotic arm replicating previously recorded movements of a stroke patient. The results show that the presented hybrid tracking system can track position and orientation with greater accuracy than the webcam or optical sensors alone, and confirm that the developed system is capable of tracking recovery trends during rehabilitation therapy.

  3. Bioinspired Intelligent Algorithm and Its Applications for Mobile Robot Control: A Survey.

    PubMed

    Ni, Jianjun; Wu, Liuying; Fan, Xinnan; Yang, Simon X

    2016-01-01

    Bioinspired intelligent algorithms (BIAs) are a class of intelligent computing methods whose working mechanisms are more lifelike and biological than those of other types. BIAs have made significant progress both in the understanding of neuroscience and biological systems and in applications to various fields. Mobile robot control is one of the main application fields of BIAs and has attracted more and more attention, because mobile robots can be used widely and because general artificial intelligence algorithms face a development bottleneck in this field, such as complex computation and dependence on high-precision sensors. This paper presents a survey of recent research in BIAs, focusing on the realization of various BIAs based on different working mechanisms and on their applications to mobile robot control, to help in understanding BIAs comprehensively and clearly. The survey has four primary parts: a classification of BIAs by biomimetic mechanism, a summary of several typical BIAs at different levels, an overview of current applications of BIAs in mobile robot control, and a description of some possible future research directions.

  4. Bioinspired Intelligent Algorithm and Its Applications for Mobile Robot Control: A Survey

    PubMed Central

    Ni, Jianjun; Wu, Liuying; Fan, Xinnan; Yang, Simon X.

    2016-01-01

    Bioinspired intelligent algorithms (BIAs) are a class of intelligent computing methods whose working mechanisms are more lifelike and biological than those of other types. BIAs have made significant progress both in the understanding of neuroscience and biological systems and in applications to various fields. Mobile robot control is one of the main application fields of BIAs and has attracted more and more attention, because mobile robots can be used widely and because general artificial intelligence algorithms face a development bottleneck in this field, such as complex computation and dependence on high-precision sensors. This paper presents a survey of recent research in BIAs, focusing on the realization of various BIAs based on different working mechanisms and on their applications to mobile robot control, to help in understanding BIAs comprehensively and clearly. The survey has four primary parts: a classification of BIAs by biomimetic mechanism, a summary of several typical BIAs at different levels, an overview of current applications of BIAs in mobile robot control, and a description of some possible future research directions. PMID:26819582

  5. Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators

    DTIC Science & Technology

    1993-05-01

    Fragments from the report's table of contents and introduction: Issues of Controller Design; Robot Behavior Control Philosophy; Overview of the... designed and built by our lab as an experimental platform to explore planetary micro-rover control issues (Angle 1991) (Figure 1.1: Hannibal). When designing the robot, careful consideration was given to mobility, sensing, and robustness issues. Much has been said concerning the advantages of

  6. External force/velocity control for an autonomous rehabilitation robot

    NASA Astrophysics Data System (ADS)

    Saekow, Peerayuth; Neranon, Paramin; Smithmaitrie, Pruittikorn

    2018-01-01

    Stroke is a primary cause of death and the leading cause of permanent disability in adults. Many stroke survivors live with various levels of disability and need daily rehabilitation activities. Several studies have reported that rehabilitation robotic devices produce better improvement outcomes in upper-limb stroke patients than conventional therapy, in which nurses or therapists actively help patients with exercise-based rehabilitation. This research focuses on the development of an autonomous robotic trainer designed to guide a stroke patient through an upper-limb rehabilitation task; the robotic device was designed and developed to automate the reaching exercise mentioned above. The robotic system consists of a four-wheel omni-directional mobile robot, an ATI Gamma multi-axis force/torque sensor used to measure contact force, and a microcontroller running a real-time operating system. Proportional plus integral (PI) control was adopted to govern the overall performance and stability of the autonomous assistive robot, and external force control was successfully implemented to establish the behavioral control strategy for the robot force and velocity control scheme. In summary, the experimental results indicated that the stability and performance of the robot force and velocity control can be considered acceptable. The gains for the PI velocity control algorithm were estimated using the Ziegler-Nichols method, yielding optimized proportional and integral gains of 0.45 and 0.11, respectively. The PI external force control gains were tuned experimentally by trial and error, based on a set of experiments in which a human participant moves the robot along a constrained circular path while attempting to minimize the radial force. The performance was analyzed based on the root mean square error (E_RMS) of the radial forces: the lower the variation in radial forces, the better the performance of the system. The best performance, as measured by the E_RMS of the radial force, was observed with proportional and integral gains of Kp = 0.7 and Ki = 0.75, respectively.
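
A discrete PI velocity loop of the kind tuned in this record can be sketched as follows. The plant here is a hypothetical first-order wheel model with an assumed time constant; only the quoted gains Kp = 0.45 and Ki = 0.11 come from the abstract:

```python
def pi_velocity_loop(setpoint, steps=5000, dt=0.01, kp=0.45, ki=0.11):
    """Simulate a PI velocity controller on a toy first-order plant
    v' = (u - v) / tau. Returns the final velocity, which should
    converge to the setpoint as the integral term removes the
    steady-state error."""
    tau = 0.05                              # assumed plant time constant (s)
    v, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - v
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        v += (u - v) / tau * dt             # plant update (Euler step)
    return v
```

With these modest gains the integral action is slow, so the simulated loop needs tens of seconds to close the last few percent of the error; this is the trade-off a Ziegler-Nichols starting point typically leaves for fine-tuning.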

  7. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axis pneumatic actuators, with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is first designed and analyzed to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse and forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg (D-H) notation, and the pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales measure the positions of the three pneumatic actuators, from which the 3D position of the end-effector is calculated by means of the kinematics. However, the calculated 3D position cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper: a stereo vision system, combining two CCD cameras, collaborates with the three position sensors of the pneumatic actuators to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position. Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig was set up. Simulations and experiments with different complex 3D motion profiles of the robot end-effector were successfully achieved, and the desired, actual, and calculated 3D positions of the end-effector can be compared in complex 3D motion control.
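
As a toy illustration of the kinematic mapping described above, consider a platform supported by three purely vertical prismatic actuators under a small-angle assumption. The attachment layout and the linearized model are assumptions for illustration, far simpler than the D-H analysis in the paper:

```python
# Attachment points of the three vertical actuators in the platform
# frame (metres); an equilateral layout centred on the origin is assumed.
ATTACH = [(0.2, 0.0), (-0.1, 0.1732), (-0.1, -0.1732)]

def inverse_kinematics(z, roll, pitch):
    """Small-angle inverse kinematics: each actuator height equals the
    platform height plus the tilt contribution at its attachment point."""
    return [z - pitch * x + roll * y for (x, y) in ATTACH]

def forward_kinematics(h):
    """Recover (z, roll, pitch) from three actuator heights by solving
    the same small-angle linear relations exactly."""
    z = sum(h) / 3.0                                     # centroid height
    roll = (h[1] - h[2]) / (ATTACH[1][1] - ATTACH[2][1])
    pitch = (z - h[0]) / ATTACH[0][0]
    return z, roll, pitch
```

Because the three equations are linear in (z, roll, pitch), forward and inverse kinematics are exact inverses of one another under this small-angle model; the paper's full D-H treatment handles the general nonlinear case.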

  8. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs, or as measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors, to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
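
The scale relationship that makes a single downward-pointing camera usable as an odometer: under a pinhole model at known height, pixel flow maps linearly to ground displacement. A sketch; the camera height and focal length in the example are assumed values:

```python
def flow_to_displacement(du_px, dv_px, height_m, focal_px):
    """Convert mean optical flow in pixels between two frames into
    ground displacement for a downward-pointing camera: one pixel on
    the sensor spans (height / focal_length) metres on the ground."""
    metres_per_pixel = height_m / focal_px
    return du_px * metres_per_pixel, dv_px * metres_per_pixel

# Camera 0.3 m above ground, 600 px focal length: 20 px of flow -> 0.01 m.
dx, dy = flow_to_displacement(20.0, 0.0, 0.3, 600.0)  # dx = 0.01 m
```

This linear scaling is also why ground-pointing visual odometry is sensitive to errors in camera height and tilt: both appear directly as scale error in the integrated position.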

  9. Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.

    PubMed

    Hongbo Wang; Kosuge, K

    2012-01-01

    Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from a force sensor and laser range finders, the robot is able to estimate the human leader's state using an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, an admittance controller with virtual force and an inverted-pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment; reasons for the failure are explained. The use of the latter controller is validated by the experimental results.
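
    As a heavily simplified sketch of the estimation idea: the paper uses an extended Kalman filter on an inverted-pendulum model, but the same predict/update cycle is easiest to see in a scalar (linear) Kalman filter tracking the leader's position from noisy range readings. The model and all noise values below are invented for illustration:

```python
# Scalar Kalman filter: random-walk process model, direct position measurement.

def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle for state estimate x with variance p."""
    p = p + q                    # predict: inflate variance by process noise
    k = p / (p + r)              # Kalman gain from predicted vs. sensor noise
    x = x + k * (z - x)          # update the estimate with measurement z
    p = (1 - k) * p              # shrink the variance after the update
    return x, p

x, p = 0.0, 1.0                  # uncertain initial guess
for z in (0.9, 1.1, 0.95):       # noisy range readings near the true position
    x, p = kalman_step(x, p, z)
print(round(x, 3), round(p, 3))  # estimate pulled toward ~0.9-1.0, variance shrunk
```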

  10. Continuum Reconfigurable Parallel Robots for Surgery: Shape Sensing and State Estimation with Uncertainty.

    PubMed

    Anderson, Patrick L; Mahoney, Arthur W; Webster, Robert J

    2017-07-01

    This paper examines shape sensing for a new class of surgical robot that consists of parallel flexible structures that can be reconfigured inside the human body. Known as CRISP robots, these devices provide access to the human body through needle-sized entry points, yet can be configured into truss-like structures capable of dexterous movement and large force application. They can also be reconfigured as needed during a surgical procedure. Since CRISP robots are elastic, they will deform when subjected to external forces or other perturbations. In this paper, we explore how to combine sensor information with mechanics-based models for CRISP robots to estimate their shapes under applied loads. The end result is a shape sensing framework for CRISP robots that will enable future research on control under applied loads, autonomous motion, force sensing, and other robot behaviors.

  11. Robotics and local fusion

    NASA Astrophysics Data System (ADS)

    Emmerman, Philip J.

    2005-05-01

    Teams of robots or mixed teams of warfighters and robots on reconnaissance and other missions can benefit greatly from a local fusion station. A local fusion station is defined here as a small mobile processor with interfaces to enable the ingestion of multiple heterogeneous sensor data and information streams, including blue force tracking data. These data streams are fused and integrated with contextual information (terrain features, weather, maps, dynamic background features, etc.), and displayed or processed to provide real time situational awareness to the robot controller or to the robots themselves. These blue and red force fusion applications remove redundancies, lessen ambiguities, correlate, aggregate, and integrate sensor information with context such as high resolution terrain. Applications such as safety, team behavior, asset control, training, pattern analysis, etc. can be generated or enhanced by these fusion stations. This local fusion station should also enable the interaction between these local units and a global information world.

  12. Development of a force-reflecting robotic platform for cardiac catheter navigation.

    PubMed

    Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung

    2010-11-01

    Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration having a 3-degree-of-freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback, using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions (forward and backward movement, rolling, and catheter-tip bending) are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements the master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implemented the motion control function and will undergo safety and performance evaluation by means of animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  13. GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force

    PubMed Central

    Yuan, Wenzhen; Dong, Siyuan; Adelson, Edward H.

    2017-01-01

    Tactile sensing is an important perception mode for robots, but existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use it, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry are important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike traditional tactile sensors which measure contact force, GelSight primarily measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer and directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. Contact force and slip can also be inferred from the sensor's deformation. In particular, we focus on the hardware and software that support GelSight's application on robot hands. This paper reviews the development of GelSight, with emphasis on the sensing principle and sensor design. We introduce the design of the sensor's optical system; the algorithms for shape, force and slip measurement; and the hardware design and fabrication of different sensor versions. We also present an experimental evaluation of GelSight's performance on geometry and force measurement. With its high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception and recognition, and in-hand localization for robot manipulation. PMID:29186053

  14. Flexible Strain Sensors Fabricated by Meniscus-Guided Printing of Carbon Nanotube-Polymer Composites.

    PubMed

    Wajahat, Muhammad; Lee, Sanghyeon; Kim, Jung Hyun; Chang, Won Suk; Pyo, Jaeyeon; Cho, Sung Ho; Seol, Seung Kwon

    2018-06-13

    Printed strain sensors have promising potential as a human-machine interface (HMI) for health-monitoring systems, human-friendly wearable interactive systems, and smart robotics. Herein, flexible strain sensors based on carbon nanotube (CNT)-polymer composites were fabricated by meniscus-guided printing using a CNT ink formulated from multiwall nanotubes (MWNTs) and polyvinylpyrrolidone (PVP); the ink was suitable for micropatterning on nonflat (or curved) substrates and even three-dimensional structures. The printed strain sensors exhibit a reproducible response to applied tensile and compressive strains, having gauge factors of 13.07 under tensile strain and 12.87 under compressive strain; they also exhibit high stability during ∼1500 bending cycles. Applied strains induce a contact rearrangement of the MWNTs and a change in the tunneling distance between them, resulting in a change in the resistance (ΔR/R0) of the sensor. Printed MWNT-PVP sensors were used in gloves for finger movement detection; these can be applied to human motion detection and remote control of robotic equipment. Our results demonstrate that meniscus-guided printing using CNT inks can produce highly flexible, sensitive, and inexpensive HMI devices.
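
    The gauge factor quoted in the abstract relates the relative resistance change of the trace to the applied strain, GF = (ΔR/R0)/ε. A tiny helper makes the relationship concrete; the resistance values below are illustrative, not measured data from the paper:

```python
# Gauge factor of a resistive strain sensor: relative resistance change
# divided by mechanical strain (dimensionless).

def gauge_factor(delta_r, r0, strain):
    return (delta_r / r0) / strain

# A sensor with GF ≈ 13 stretched by 1% strain changes resistance by ~13%.
print(round(gauge_factor(delta_r=13.07, r0=100.0, strain=0.01), 2))  # 13.07
```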

  15. Distributed flow sensing for closed-loop speed control of a flexible fish robot.

    PubMed

    Zhang, Feitian; Lagor, Francis D; Yeo, Derrick; Washington, Patrick; Paley, Derek A

    2015-10-23

    Flexibility plays an important role in fish behavior by enabling high maneuverability for predator avoidance and swimming in turbulent flow. This paper presents a novel flexible fish robot equipped with distributed pressure sensors for flow sensing. The body of the robot is molded from soft, hyperelastic material, which provides flexibility. Its Joukowski-foil shape is conducive to modeling the fluid analytically. A quasi-steady potential-flow model is adopted for real-time flow estimation, whereas a discrete-time vortex-shedding flow model is used for higher-fidelity simulation. The dynamics for the flexible fish robot yield a reduced model for one-dimensional swimming. A recursive Bayesian filter assimilates pressure measurements to estimate flow speed, angle of attack, and foil camber. The closed-loop speed-control strategy combines an inverse-mapping feedforward controller based on an average model derived for periodic actuation of angle-of-attack and a proportional-integral feedback controller utilizing the estimated flow information. Simulation and experimental results are presented to show the effectiveness of the estimation and control strategy. The paper provides a systematic approach to distributed flow sensing for closed-loop speed control of a flexible fish robot by regulating the flapping amplitude.
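
    The closed-loop structure described above (feedforward from an inverse map plus proportional-integral feedback on the estimated flow speed) can be sketched as follows; the gains and the inverse map are invented for illustration and stand in for the paper's average-model mapping:

```python
# Feedforward + PI speed controller: output is a flapping amplitude command.

def make_speed_controller(kp, ki, dt, inverse_map):
    integral = 0.0
    def step(speed_cmd, speed_est):
        nonlocal integral
        error = speed_cmd - speed_est
        integral += error * dt
        # feedforward from the (assumed) inverse map + PI feedback trim
        return inverse_map(speed_cmd) + kp * error + ki * integral
    return step

controller = make_speed_controller(kp=0.5, ki=0.1, dt=0.1,
                                   inverse_map=lambda v: 2.0 * v)
# Commanded 0.3 m/s, estimator (fed by the pressure sensors) reports 0.25 m/s.
amplitude = controller(speed_cmd=0.3, speed_est=0.25)
print(round(amplitude, 4))  # 0.6255
```

In the paper the speed estimate would come from the recursive Bayesian filter assimilating the distributed pressure measurements.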

  16. Biobotic insect swarm based sensor networks for search and rescue

    NASA Astrophysics Data System (ADS)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake a tremendous effort is spent over days to reach surviving victims, a task in which robotic swarms or other distributed robotic systems could play a great role in achieving faster results. However, current technology falls short of offering centimeter-scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies, enabling biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include the development of biobot electronic backpack technologies; the establishment of biobot tracking testbeds to evaluate locomotion control efficiency; the investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths; the establishment of a localization and communication infrastructure; modeling and controlling collective motion by learning deterministic and stochastic motion models; topological motion modeling based on these models; and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  17. System For Research On Multiple-Arm Robots

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad; Tso, Kam S.; Hayward, Vincent

    1991-01-01

    The Kali system of computer programs and equipment provides an environment for research on distributed programming and distributed control of coordinated multiple-arm robots. It is suitable for telerobotics research involving sensing and execution of low-level tasks. The software and hardware configuration are designed to be flexible, so the system is easily modified to test various concepts in the control and programming of robots, including multiple-arm control, redundant-arm control, shared control, traded control, force control, force/position hybrid control, design and integration of sensors, teleoperation, task-space description and control, methods of adaptive control, control of flexible arms, and human factors.

  18. Six component robotic force-torque sensor

    NASA Technical Reports Server (NTRS)

    Grahn, Allen R.; Hutchings, Brad L.; Johnston, David R.; Parsons, David C.; Wyatt, Roland F.

    1987-01-01

    The results of a two-phase contract studying the feasibility of a miniaturized six-component force-torque sensor, and the development of a working laboratory system, are described. The principle of operation is based upon using ultrasonic pulse-echo ranging to determine the position of ultrasonic reflectors attached to a metal or ceramic cover plate. Because of the small size of the sensor, this technology may have application in robotics, to sense forces and torques at the fingertip of a robotic end effector. Descriptions are included of laboratory experiments evaluating materials and techniques for sensor fabrication, and of the development of support electronics for data acquisition, computer interface, and operator display.
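
    The pulse-echo ranging principle above reduces to a time-of-flight computation: the reflector distance is half the round-trip time multiplied by the propagation speed. A minimal sketch, assuming a propagation speed typical of steel (the value is illustrative, not from the report):

```python
# Pulse-echo ranging: distance = speed * round_trip_time / 2.

SPEED_OF_SOUND = 5900.0  # m/s, roughly longitudinal speed in steel (assumed)

def echo_distance(round_trip_s, speed=SPEED_OF_SOUND):
    return speed * round_trip_s / 2.0

# A 2 microsecond round trip corresponds to a reflector ~5.9 mm away.
print(round(echo_distance(2e-6) * 1000, 3), "mm")  # 5.9 mm
```

Tracking several such reflector positions under load is what lets the cover plate's displacement be resolved into force and torque components.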

  19. Fault tolerant multi-sensor fusion based on the information gain

    NASA Astrophysics Data System (ADS)

    Hage, Joelle Al; El Najjar, Maan E.; Pomorski, Denis

    2017-01-01

    In the last decade, multi-robot systems have been used in several applications, for example by the military, in intervention areas presenting danger to human life, in the management of natural disasters, and in environmental monitoring, exploration and agriculture. The integrity of the robots' localization must be ensured in order for them to achieve their mission under the best conditions. The robots are equipped with proprioceptive (encoders, gyroscope) and exteroceptive (Kinect) sensors. However, these sensors can be affected by various fault types such as erroneous measurements, bias, outliers and drifts. In the absence of a sensor fault diagnosis step, the integrity and continuity of the localization are compromised. In this work, we present a multi-sensor fusion approach with Fault Detection and Exclusion (FDE) based on information theory. In this context, we are interested in the information gain given by an observation, which can be relevant when dealing with the fault-tolerance aspect. Moreover, threshold optimization based on the quantity of information given by a decision on the true hypothesis is highlighted.
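
    As one concrete illustration of the idea (not the paper's actual criterion): for Gaussian estimates, the information gain of an observation can be measured as the entropy reduction 0.5·ln(prior variance / posterior variance), and a sensor whose observations contribute too little gain can be flagged for exclusion. The function names, variances and threshold below are invented:

```python
import math

# Entropy reduction of a Gaussian estimate after assimilating an observation.
def information_gain(prior_var, posterior_var):
    return 0.5 * math.log(prior_var / posterior_var)

# Simple gating rule: keep only sensors whose observations are informative.
def keep_sensor(prior_var, posterior_var, threshold=0.1):
    return information_gain(prior_var, posterior_var) >= threshold

print(keep_sensor(prior_var=4.0, posterior_var=1.0))  # True  (gain ~0.69)
print(keep_sensor(prior_var=4.0, posterior_var=3.9))  # False (gain ~0.01)
```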

  20. FBG-based sensorized light pipe for robotic intraocular illumination facilitates bimanual retinal microsurgery.

    PubMed

    Horise, Yuki; He, Xingchi; Gehlbach, Peter; Taylor, Russell; Iordachita, Iulian

    2015-01-01

    In retinal surgery, microsurgical instruments such as micro forceps, scissors and picks are inserted through the eye wall via sclerotomies. A handheld intraocular light source is typically used to visualize the tools during the procedure. Retinal surgery requires precise and stable tool maneuvers as the surgical targets are micro scale, fragile and critical to function. Retinal surgeons typically control an active surgical tool with one hand and an illumination source with the other. In this paper, we present a "smart" light pipe that enables true bimanual surgery via utilization of an active, robot-assisted source of targeted illumination. The novel sensorized smart light pipe measures the contact force between the sclerotomy and its own shaft, thereby accommodating the motion of the patient's eye. Forces at the point of contact with the sclera are detected by fiber Bragg grating (FBG) sensors on the light pipe. Our calibration and validation results demonstrate reliable measurement of the contact force as well as location of the sclerotomy. Preliminary experiments have been conducted to functionally evaluate robotic intraocular illumination.

  1. Experimental validation of flexible robot arm modeling and control

    NASA Technical Reports Server (NTRS)

    Ulsoy, A. Galip

    1989-01-01

    Flexibility is important for high speed, high precision operation of lightweight manipulators. Accurate dynamic modeling of flexible robot arms is needed. Previous work has mostly been based on linear elasticity with prescribed rigid body motions (i.e., no effect of flexible motion on rigid body motion). Little or no experimental validation of dynamic models for flexible arms is available. Experimental results are also limited for flexible arm control. Researchers include the effects of prismatic as well as revolute joints. They investigate the effect of full coupling between the rigid and flexible motions, and of axial shortening, and consider the control of flexible arms using only additional sensors.

  2. Enhanced control & sensing for the REMOTEC ANDROS Mk VI robot. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spelt, P.F.; Harvey, H.W.

    1997-08-01

    This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in, and relationship to, its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.

  3. A Demonstrator Intelligent Scheduler For Sensor-Based Robots

    NASA Astrophysics Data System (ADS)

    Perrotta, Gabriella; Allen, Charles R.; Shepherd, Andrew J.

    1987-10-01

    The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile-sensing gripper system is described. The on-line module is supported by two off-line software modules which provide a procedural assembly-constraints language in which the assembly task is defined; this input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, allowing a low programming overhead for the designer, who must describe the assembly sequence. Components are selected for pick-and-place robot movement based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is multi-path scheduling as described by Fox. The system is seen to permit robot assembly in a less constrained parts-presentation environment, making full use of the sensory detail available on the robot.

  4. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  5. Decentralized sensor fusion for Ubiquitous Networking Robotics in Urban Areas.

    PubMed

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T J

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted.

  6. Force-sensed interface for control and training space robot

    NASA Astrophysics Data System (ADS)

    Moiseev, O. S.; Sarsadskikh, A. S.; Povalyaev, N. D.; Gorbunov, V. I.; Kulakov, F. M.; Vasilev, V. V.

    2018-05-01

    A method of positional and force-torque control of robots is proposed. Prototypes of the system and the master handle have been created. Algorithms for bias estimation and gravity compensation of the force-torque sensor, and for force-torque trajectory correction, are described.
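
    The bias-estimation and gravity-compensation step can be sketched as subtracting a constant sensor bias and the tool's weight (projected into the sensor frame) from the raw reading. The sketch below reduces the rotation to a single known tilt angle for brevity; all numbers and names are illustrative, not from the paper:

```python
import math

def compensate(raw_force_z, bias_z, tool_mass, tilt_rad, g=9.81):
    """Contact force along the sensor z axis after bias and gravity removal."""
    gravity_z = tool_mass * g * math.cos(tilt_rad)  # weight seen on z at this tilt
    return raw_force_z - bias_z - gravity_z

# With a 0.5 kg tool held level, a raw reading of 7.0 N and a 0.2 N bias
# leave roughly 1.9 N of actual contact force.
print(round(compensate(7.0, 0.2, 0.5, 0.0), 3))  # 1.895
```

A full implementation would estimate the bias and tool parameters from readings at several known orientations and apply a full rotation matrix.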

  7. Planning in subsumption architectures

    NASA Technical Reports Server (NTRS)

    Chalfant, Eugene C.

    1994-01-01

    A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.

  8. Proceedings of the NASA Conference on Space Telerobotics, volume 4

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor); Seraji, Homayoun (Editor)

    1989-01-01

    Papers presented at the NASA Conference on Space Telerobotics are compiled. The theme of the conference was man-machine collaboration in space. The conference provided a forum for researchers and engineers to exchange ideas on the research and development required for the application of telerobotic technology to the space systems planned for the 1990's and beyond. Volume 4 contains papers related to the following subject areas: manipulator control; telemanipulation; flight experiments (systems and simulators); sensor-based planning; robot kinematics, dynamics, and control; robot task planning and assembly; and research activities at the NASA Langley Research Center.

  9. Self-evaluation on Motion Adaptation for Service Robots

    NASA Astrophysics Data System (ADS)

    Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru

    We propose a self motion-evaluation method that lets service robots adapt to environmental changes. Motions such as walking, dancing and demonstrations are described as time-series patterns, optimized for the robot's architecture under a particular surrounding environment; under an unknown operating environment, the robots cannot accomplish their tasks. We propose an autonomous motion-generation technique based on heuristic search over histories of internal sensor values, in which new motion patterns are explored in the unknown operating environment based on self-evaluation. The robot has prepared motions that realize its tasks in the designed environment, and the internal sensor values observed there with the prepared motions represent the results of interacting with that environment. Self-evaluation is composed of the difference in internal sensor values between the designed environment and the unknown operating environment. The proposed method modifies the motions to synchronize the interaction results in both environments: new motion patterns are generated to maximize the self-evaluation function without external information such as run length, the robot's global position or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.

  10. Addressing the Movement of a Freescale Robotic Car Using Neural Network

    NASA Astrophysics Data System (ADS)

    Horváth, Dušan; Cuninka, Peter

    2016-12-01

    This article deals with guiding a small Freescale robotic car along a predefined guide line. The direction of movement of the robot is controlled by neural networks, whose neuron weights (memory) are calculated by Hebbian learning from truth tables, i.e. learning with a teacher. Reflective infrared sensors serve as inputs. The results are experiments used to compare two methods of mobile robot line-tracking control.
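
    The Hebbian scheme described above can be sketched as computing weights from a truth table of (sensor pattern, steering output) pairs, with inputs and outputs coded as ±1, then letting the trained neuron steer from new sensor input. The 3-sensor line-following table and the steering coding below are invented for illustration:

```python
def hebbian_weights(patterns):
    """w_i = sum over examples of x_i * y, inputs/outputs coded in {-1, +1}."""
    n = len(patterns[0][0])
    w = [0] * n
    for x, y in patterns:
        for i in range(n):
            w[i] += x[i] * y
    return w

def steer(w, x):
    """Single neuron: sign of the weighted sum of the current sensor inputs."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1   # +1: turn right, -1: turn left (assumed coding)

# Truth table: line under the left IR sensor -> steer left, under right -> right.
table = [((+1, -1, -1), -1),
         ((-1, -1, +1), +1)]
w = hebbian_weights(table)
print(w, steer(w, (+1, -1, -1)))  # [-2, 0, 2] -1
```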

  11. Autonomous Lawnmower using FPGA implementation.

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Lokman, Nabill bin; Helmy Abd Wahab, Mohd

    2016-11-01

    Nowadays, various types of robots have been invented for multiple purposes. Robots have special characteristics that surpass human abilities and can operate in extreme environments which humans cannot endure. In this paper, an autonomous robot is built to imitate a human cutting grass. A Field Programmable Gate Array (FPGA) is used to control the movements, and all data and information are processed on it. Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) is used to describe the hardware in the Quartus II software. The robot avoids obstacles using an ultrasonic sensor and uses two DC motors for its movement: forward, backward, and turning left and right. The path of the automatic lawn mower follows a path-planning technique, with four Global Positioning System (GPS) points set to create a boundary, ensuring that the lawn mower operates within the area given by the user. Every action of the lawn mower is controlled by the Cyclone II FPGA DE board with the help of the sensors. Furthermore, the SketchUp software was used to design the structure of the lawn mower. The autonomous lawn mower operated efficiently and smoothly returned to its coordinated paths after passing an obstacle. It uses 25% of the total pins available on the board and 31% of the total Digital Signal Processing (DSP) blocks.

  12. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization, built on an open-source real-time operating system, is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is one example which is described.

  13. 2015 Marine Corps Security Environment Forecast: Futures 2030-2045

    DTIC Science & Technology

    2015-01-01

    The technologies that make the iPhone "smart" were publicly funded: the Internet, wireless networks, the Global Positioning System, microelectronics… Energy Revolution (63 percent); Internet of Things (ubiquitous sensors embedded in interconnected computing devices) (50 percent); "Sci-Fi"… Neuroscience and artificial intelligence; sensors/control systems; power and energy; human-robot interaction. Robots/autonomous systems will become part of the…

  14. Robots, systems, and methods for hazard evaluation and visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.

    A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.
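
    The sense-move-repeat cycle described in this record can be sketched in a few lines. The 1-D corridor, the hazard field, and the greedy move rule below are hypothetical stand-ins for the patent's hazard sensor and locomotor, not its actual method:

```python
def hazard_at(x):
    """Hypothetical hazard intensity field peaking at x = 5."""
    return max(0.0, 10.0 - abs(x - 5))

def survey(start, steps):
    """Sense hazard, step toward rising intensity, repeat autonomously."""
    x, hazard_map = start, {}
    for _ in range(steps):
        hazard_map[x] = hazard_at(x)          # sense at the current location
        # move toward the neighbour with higher hazard (to localize the source)
        left, right = hazard_at(x - 1), hazard_at(x + 1)
        x = x - 1 if left > right else x + 1  # locomotor step
    return hazard_map                          # levels communicated to the remote controller

levels = survey(start=0, steps=8)
print(sorted(levels))   # locations at which hazard levels were determined
```

    A remote controller would receive `levels` and render each entry against a hazard scale, as the graphical user interface in the record does.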

  15. Development of a biomimetic roughness sensor for tactile information with an elastomer

    NASA Astrophysics Data System (ADS)

    Choi, Jae-Young; Kim, Sung Joon; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, Ja Choon

    2016-04-01

    Humans use various kinds of sensory information to identify an object. When contacting an unidentified object without vision, tactile sensation provides a variety of information for perception and plays an important role in recognizing the shape of a surface by touch. In robotics, tactile sensation is especially meaningful: robots can perform tasks more accurately using comprehensive tactile information, and sensors made of soft materials such as silicone can be used in a wide range of situations. We are therefore developing a tactile sensor with soft materials. Because conventional robots operate in controlled environments, the sensory systems of living things are a good model for making robots usable in any circumstance. For example, human skin tissue contains many mechanoreceptors, each with a different role in detecting stimulation. By mimicking these mechanoreceptors, a sensory system can be realized that is closer to the human one. It is known that humans obtain roughness information by scanning a surface with the fingertips; during that scan, subcutaneous mechanoreceptors detect vibration. In the same way, while a robot scans the surface of an object, the roughness sensor developed here detects the vibrations generated between the two contacting surfaces. In this research, a roughness sensor made of an elastomer was developed and an experiment on the perception of objects was conducted. We describe a means to compare the roughness of objects with the newly developed sensor.
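
    The record does not state how the vibration signal is reduced to a roughness measure; one common and simple assumption is to compare the RMS amplitude of the vibration traces, sketched below on synthetic signals standing in for the elastomer sensor's output:

```python
import math

def rms(signal):
    """Root-mean-square amplitude of a vibration trace."""
    return math.sqrt(sum(v * v for v in signal) / len(signal))

# Synthetic vibration traces "recorded" while scanning two surfaces:
# the rougher surface adds a stronger high-frequency component.
N = 200
smooth = [0.1 * math.sin(2 * math.pi * 5 * t / N) for t in range(N)]
rough  = [0.1 * math.sin(2 * math.pi * 5 * t / N)
          + 0.5 * math.sin(2 * math.pi * 40 * t / N) for t in range(N)]

assert rms(rough) > rms(smooth)   # higher vibration energy => rougher surface
```

    A real pipeline would band-pass the signal around the mechanoreceptor-relevant frequencies before comparing energies; RMS over the raw trace is the minimal version of that idea.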

  16. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor-information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and that robots are equipped with different types of sensors, each robot carrying only one type of landmine-detection sensor. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses the robot's sensory input and performs the calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision-maker agent. The decision-maker agent uses this aggregate belief about a potential landmine to decide which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, our prediction market-based information aggregation technique improves the accuracy of object classification compared with two other commonly used techniques.
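
    The paper's prediction-market mechanism is more elaborate than a weighted mean, but the aggregate-then-decide flow it describes can be sketched as follows; the weights and decision thresholds here are hypothetical:

```python
def aggregate(beliefs, weights=None):
    """Combine per-robot landmine beliefs into one aggregate estimate.

    `beliefs` are confidences in [0, 1] from robots with different
    sensor types; the (hypothetical) weights stand in for how much
    the market trusts each sensor type.
    """
    if weights is None:
        weights = [1.0] * len(beliefs)
    return sum(b * w for b, w in zip(beliefs, weights)) / sum(weights)

def decide(aggregate_belief, confirm=0.8, dismiss=0.2):
    """Decision-maker agent: confirm, dismiss, or deploy more robots."""
    if aggregate_belief >= confirm:
        return "landmine-confirmed"
    if aggregate_belief <= dismiss:
        return "not-a-landmine"
    return "deploy-more-robots"

# Three robots with different sensors report on the same object.
belief = aggregate([0.9, 0.7, 0.4], weights=[2.0, 1.0, 1.0])
print(decide(belief))   # ambiguous belief -> request more sensor types
```

    The "deploy-more-robots" branch is where the decision maker would pick robots whose sensor types best disambiguate the object.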

  17. Creature co-op: Achieving robust remote operations with a community of low-cost robots

    NASA Technical Reports Server (NTRS)

    Bonasso, R. Peter

    1990-01-01

    The concept is advanced of carrying out space-based remote missions using a cooperative of low-cost robot specialists rather than monolithic, multipurpose systems. A simulation is described wherein a control architecture for such a system of specialists is being investigated. Early results show such co-ops to be robust in the face of unforeseen circumstances. Descriptions of the platforms and sensors modeled, and of the beacon and retriever creatures that make up the co-op, are included.

  18. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
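
    The teach-then-retrace concept reduces, at its simplest, to logging waypoints while following and replaying them afterwards. The sketch below abstracts away all of the record's actual perception (3D vision, inertial sensing, the composite route map) behind a plain waypoint list:

```python
class PackMule:
    """Toy pack-mule robot: teaching pass records the route, the
    autonomous pass replays it. Waypoints stand in for the 3D-feature
    route map described in the record."""

    def __init__(self):
        self.route = []          # logged waypoints (the "route map")

    def follow(self, leader_positions):
        """Teaching pass: record the leader's path while following."""
        for pos in leader_positions:
            self.route.append(pos)

    def retrace(self):
        """Autonomous pass: replay the taught path waypoint by waypoint."""
        return list(self.route)

mule = PackMule()
mule.follow([(0, 0), (3, 1), (5, 4), (9, 4)])
assert mule.retrace() == [(0, 0), (3, 1), (5, 4), (9, 4)]
```
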

  19. Robot arm system for automatic satellite capture and berthing

    NASA Technical Reports Server (NTRS)

    Nishida, Shinichiro; Toriu, Hidetoshi; Hayashi, Masato; Kubo, Tomoaki; Miyata, Makoto

    1994-01-01

    Load control is one of the most important technologies for capturing and berthing a free-flying satellite with a space robot arm, because free-flying satellites have different motion rates. The performance of active compliance control techniques depends on the location of the force sensor and the arm's structural compliance. This paper presents a compliance control technique that accounts for the robot arm's structural elasticity, together with considerations for an end-effector appropriate to it.

  20. Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    PubMed Central

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M.; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T.J.

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted. PMID:22294927

  1. Ground robotic measurement of aeolian processes

    NASA Astrophysics Data System (ADS)

    Qian, Feifei; Jerolmack, Douglas; Lancaster, Nicholas; Nikolich, George; Reverdy, Paul; Roberts, Sonia; Shipley, Thomas; Van Pelt, R. Scott; Zobeck, Ted M.; Koditschek, Daniel E.

    2017-08-01

    Models of aeolian processes rely on accurate measurements of the rates of sediment transport by wind, and careful evaluation of the environmental controls of these processes. Existing field approaches typically require intensive, event-based experiments involving dense arrays of instruments. These devices are often cumbersome and logistically difficult to set up and maintain, especially near steep or vegetated dune surfaces. Significant advances in instrumentation are needed to provide the datasets that are required to validate and improve mechanistic models of aeolian sediment transport. Recent advances in robotics show great promise for assisting and amplifying scientists' efforts to increase the spatial and temporal resolution of many environmental measurements governing sediment transport. The emergence of cheap, agile, human-scale robotic platforms endowed with increasingly sophisticated sensor and motor suites opens up the prospect of deploying programmable, reactive sensor payloads across complex terrain in the service of aeolian science. This paper surveys the need and assesses the opportunities and challenges for amassing novel, highly resolved spatiotemporal datasets for aeolian research using partially-automated ground mobility. We review the limitations of existing measurement approaches for aeolian processes, and discuss how they may be transformed by ground-based robotic platforms, using examples from our initial field experiments. We then review how the need to traverse challenging aeolian terrains and simultaneously make high-resolution measurements of critical variables requires enhanced robotic capability. Finally, we conclude with a look to the future, in which robotic platforms may operate with increasing autonomy in harsh conditions. 
Besides expanding the completeness of terrestrial datasets, bringing ground-based robots to the aeolian research community may lead to unexpected discoveries that generate new hypotheses to expand the science itself.

  2. Optimal accelerometer placement on a robot arm for pose estimation

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.

    2017-05-01

    The performance of robots in carrying out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with joint angle encoders that are used to estimate the position and orientation (or the pose) of their end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every joint, or at any joint. In this paper we study the use of inertial sensors, in particular accelerometers, placed on the robot that can be used to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement and Signal-to-Noise Ratio (SNR) are included in our study of their effects on robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated pose estimation gains resulting from using increasingly large numbers of sensors. Monte-Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with an increasing number of accelerometers, whereas for an SNR model that scales inversely with the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.
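
    The paper's Monte-Carlo study is far richer (a two-link arm, placement geometry, two SNR models), but its fixed-SNR finding has a simple toy analogue: averaging more equally noisy sensors reduces the estimation error. The acceleration value and noise level below are invented for illustration:

```python
import random

random.seed(0)   # reproducible Monte-Carlo runs

def mc_error(n_sensors, trials=2000, noise=0.1):
    """Mean absolute error of estimating a true acceleration of 1.0
    by averaging n_sensors readings with fixed Gaussian noise
    (a toy stand-in for the paper's fixed-SNR model)."""
    total = 0.0
    for _ in range(trials):
        estimate = sum(1.0 + random.gauss(0.0, noise)
                       for _ in range(n_sensors)) / n_sensors
        total += abs(estimate - 1.0)
    return total / trials

err_1, err_9 = mc_error(1), mc_error(9)
assert err_9 < err_1   # more accelerometers -> lower error at fixed SNR
```

    In the paper's second model the noise grows as the per-sensor footprint shrinks, which reverses this trend; that effect is not captured in this sketch.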

  3. An Architecture for Controlling Multiple Robots

    NASA Technical Reports Server (NTRS)

    Aghazarian, Hrand; Pirjanian, Paolo; Schenker, Paul; Huntsberger, Terrance

    2004-01-01

    The Control Architecture for Multirobot Outpost (CAMPOUT) is a distributed-control architecture for coordinating the activities of multiple robots. In CAMPOUT, multiple-agent activities and sensor-based controls are derived as group compositions and involve coordination of more basic controllers denoted, for present purposes, as behaviors. CAMPOUT provides basic mechanistic concepts for the representation and execution of distributed group activities. One considers a network of nodes comprising behaviors (self-contained controllers) augmented with hyper-links, which are used to exchange information between the nodes to achieve coordinated activities. Group behavior is guided by a scripted plan, which encodes a conditional sequence of single-agent activities. Thus, higher-level functionality is composed by coordination of more basic behaviors under the downward task decomposition of a multi-agent planner.

  4. DEVS representation of dynamical systems - Event-based intelligent control. [Discrete Event System Specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.

    1989-01-01

    It is shown how systems can be advantageously represented as discrete-event models by using DEVS (discrete-event system specification), a set-theoretic formalism. Such DEVS models provide a basis for the design of event-based logic control. In this control paradigm, the controller expects to receive confirming sensor responses to its control commands within definite time windows determined by its DEVS model of the system under control. The event-based control paradigm is applied in advanced robotic and intelligent automation, showing how classical process control can be readily interfaced with rule-based symbolic reasoning systems.
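
    The time-window expectation at the heart of this paradigm is easy to illustrate. The window bounds below are invented; in the paper they would come from the DEVS model of the system under control:

```python
def check_response(command_time, response_time, window=(0.5, 2.0)):
    """Classify a sensor confirmation against the time window the
    DEVS model predicts for it: inside is nominal, outside is a fault."""
    lo, hi = window
    dt = response_time - command_time
    if lo <= dt <= hi:
        return "ok"
    return "too-early" if dt < lo else "timeout"

assert check_response(0.0, 1.0) == "ok"         # confirmation in window
assert check_response(0.0, 0.1) == "too-early"  # system responded anomalously fast
assert check_response(0.0, 3.0) == "timeout"    # no confirmation in time
```

    A rule-based supervisor would then map the "too-early" and "timeout" outcomes to diagnostic or recovery rules, which is the interface to symbolic reasoning the abstract mentions.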

  5. SafeNet: a methodology for integrating general-purpose unsafe devices in safe-robot rehabilitation systems.

    PubMed

    Vicentini, Federico; Pedrocchi, Nicola; Malosio, Matteo; Molinari Tosatti, Lorenzo

    2014-09-01

    Robot-assisted neurorehabilitation often involves networked systems of sensors ("sensory rooms") and powerful devices in physical interaction with weak users. Safety is unquestionably a primary concern. Some purpose-designed lightweight robot platforms and devices include safety properties using redundant sensors or intrinsically safe design (e.g. compliance and backdrivability, limited exchange of energy). Nonetheless, the entire "sensory room" is required to be fail-safe and safely monitored as a system at large. Yet the sensor capabilities and control algorithms used in functional therapies generally require frequent updates or re-configurations, making a safety-grade release of such devices hardly sustainable in cost-effectiveness and development time. As a result, promising integrated platforms for human-in-the-loop therapies have not found clinical application and manufacturing support, because global fail-safe properties could not be maintained. Within the general context of cross-machinery safety standards, the paper presents a methodology called SafeNet for helping to extend the safety level of Human Robot Interaction (HRI) systems that use unsafe components, including sensors and controllers. SafeNet considers the robotic system as a device at large and applies the principles of functional safety (as in ISO 13849-1) through a set of architectural procedures and implementation rules. The enabled capability of monitoring a network of unsafe devices through redundant computational nodes allows the use of any custom sensors and algorithms, usually planned and assembled at therapy planning-time rather than at platform design-time. A case study is presented with an actual implementation of the proposed methodology: a specific architectural solution is applied to an example of robot-assisted upper-limb rehabilitation with online motion tracking. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
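
    SafeNet's architectural procedures are considerably richer than any snippet can show, but the core idea of monitoring unsafe devices through redundant computational nodes can be illustrated with a two-channel cross-check; the tolerance and the fail-safe reaction here are invented:

```python
def monitor(channel_a, channel_b, tolerance=0.05):
    """Two redundant computational nodes evaluate the same sensor
    stream; disagreement beyond a tolerance triggers a safe stop.
    This is the textbook two-channel pattern, not SafeNet's design."""
    if abs(channel_a - channel_b) > tolerance:
        return "safe-stop"        # fail-safe reaction on disagreement
    return "run"

assert monitor(1.00, 1.02) == "run"        # channels agree: keep running
assert monitor(1.00, 1.20) == "safe-stop"  # divergence: assume a fault
```

    The point of the pattern is that neither channel needs to be individually safety-rated: the comparison, implemented on a small trusted node, is what carries the safety function.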

  6. Open core control software for surgical robots

    PubMed Central

    Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B.; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-01-01

    Object In recent years, patients and doctors in the operating room have become surrounded by many medical devices as a result of advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as removing a tumor while checking its location in neurosurgery). Meanwhile, several surgical robots have been commercialized and are becoming common, but these surgical robots are not yet open to collaboration with external medical devices. A cutting-edge “intelligent surgical robot” would be possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. Moreover, most academic software developments for surgical robots are “home-made” in their research institutions and not open to the public. Therefore, open-source control software for surgical robots can be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. Materials and methods In general, control software has hardware dependencies determined by actuators, sensors and various kinds of internal devices, and therefore cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which plays the role of communicating with external medical devices. At the same time, it is essential to maintain stable operation amid asynchronous data transactions over the network. 
In the Open Core Control software, several techniques were introduced for this purpose. The virtual fixture is a well-known technique, a “force guide” that supports operators in performing precise manipulation with a master–slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The virtual-fixture extension is not itself part of the Open Core Control system; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reflection force when the operator tries to leave the pre-defined accessible area during surgery. Results The Open Core Control software was implemented on a surgical master–slave robot and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot with a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a “force guide” on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. Conclusion This paper described a design of the Open Core Control software for surgical robots and the implementation of the virtual fixture. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaborative work. The Open Core Control software is intended to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices. 
It is important to follow global specifications such as the FDA guidance “General Principles of Software Validation” or IEC 62304. To follow these regulations, it is important to develop a self-test environment; therefore, a test environment is now under development to test against various kinds of interference in the operating room, such as noise from an electric knife, while considering safety and test-environment regulations such as ISO 13849 and IEC 61508. The Open Core Control software is currently being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field. From this perspective, the Open Core Control software can be expected to bring contributions to this field. PMID:20033506
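
    The virtual-fixture “force guide” described above has a simple geometric core: zero force inside the pre-defined accessible area, a restoring force once the tool tip leaves it. The spherical area, gain, and units below are invented for illustration; the real system exchanges the area over OpenIGTLink and renders force on the console:

```python
import math

def fixture_force(tip, center=(0.0, 0.0, 0.0), radius=10.0, k=2.0):
    """Reflection force (pulling back toward the centre) once the tip
    exits a pre-defined accessible sphere; zero force while inside."""
    d = math.dist(tip, center)
    if d <= radius:
        return (0.0, 0.0, 0.0)                 # inside: operator moves freely
    scale = -k * (d - radius) / d              # spring-like pull, inward
    return tuple(scale * (t - c) for t, c in zip(tip, center))

assert fixture_force((3.0, 0.0, 0.0)) == (0.0, 0.0, 0.0)   # inside: free
fx, fy, fz = fixture_force((12.0, 0.0, 0.0))
assert fx < 0 and fy == 0.0 and fz == 0.0                  # pushed back inward
```
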

  7. Fabrication of strain gauge based sensors for tactile skins

    NASA Astrophysics Data System (ADS)

    Baptist, Joshua R.; Zhang, Ruoshi; Wei, Danming; Saadatzi, Mohammad Nasser; Popa, Dan O.

    2017-05-01

    Fabricating cost-effective, reliable and functional sensors for electronic skins has been a challenging undertaking for the last several decades. Applications of such skins include haptic interfaces, robotic manipulation, and physical human-robot interaction. Much of our recent work has focused on producing compliant sensors that can be easily formed around objects to sense normal, tension, or shear forces. Our past designs have involved the use of flexible sensors and interconnects fabricated on Kapton substrates, and piezoresistive inks that are 3D printed using Electro Hydro Dynamic (EHD) jetting onto interdigitated electrode (IDE) structures. However, EHD print heads require a specialized nozzle and the application of a high-voltage electric field, for which tuning the process parameters can be difficult depending on the choice of inks and substrates. Therefore, in this paper we explore sensor fabrication techniques using a novel wet lift-off photolithographic technique for patterning the base polymer piezoresistive material, specifically Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) or PEDOT:PSS. Fabricated sensors are electrically and thermally characterized, and temperature-compensated designs are proposed and validated. Packaging techniques for sensors in polymer encapsulants are proposed and demonstrated to produce a tactile interface device for a robot.

  8. Efforts toward an autonomous wheelchair - biomed 2011.

    PubMed

    Barrett, Steven; Streeter, Robert

    2011-01-01

    An autonomous wheelchair is in development to provide mobility to those with significant physical challenges. The overall goal of the project is to develop a wheelchair that is fully autonomous, with the ability to navigate about an environment and negotiate obstacles. As a starting point for the project, we have reverse engineered the joystick control system of an off-the-shelf, commercially available wheelchair. The joystick control has been replaced with a microcontroller-based system. The microcontroller has the capability to interface with a number of subsystems currently under development, including wheel odometers, obstacle avoidance sensors, and ultrasonic-based wall sensors. This paper will discuss the microcontroller-based system and provide a detailed system description. Results of this study may be adapted to commercial or military robot control.

  9. Learning robot actions based on self-organising language memory.

    PubMed

    Wermter, Stefan; Elshaw, Mark

    2003-01-01

    In the MirrorBot project we examine perceptual processes using models of cortical assemblies and mirror neurons to explore the emergence of semantic representations of actions, percepts and concepts in a neural robot. The hypothesis under investigation is whether a neural model will produce a life-like perception system for actions. In this context we focus in this paper on how instructions for actions can be modeled in a self-organising memory. Current approaches for robot control often do not use language and ignore neural learning. However, our approach uses language instruction and draws from the concepts of regional distributed modularity, self-organisation and neural assemblies. We describe a self-organising model that clusters actions into different locations depending on the body part they are associated with. In particular, we use actual sensor readings from the MIRA robot to represent semantic features of the action verbs. Furthermore, we outline a hierarchical computational model for a self-organising robot action control system using language for instruction.
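
    The clustering of actions by associated body part can be illustrated with a stripped-down self-organising memory. The 2-D feature vectors below are invented stand-ins for the MIRA robot's sensor readings, and for brevity only the winning prototype is updated (a full self-organising map would also update its neighbours):

```python
# Two map units; each will come to represent one body-part cluster.
protos = [[0.2, 0.8], [0.8, 0.2]]
samples = {                                       # action verb -> feature vector
    "wag-tail": [0.1, 0.9], "nod": [0.15, 0.85],  # one body-part cluster
    "grasp": [0.9, 0.1], "lift": [0.85, 0.2],     # another body-part cluster
}

def winner(v):
    """Index of the best-matching unit for feature vector v."""
    return min(range(len(protos)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(protos[i], v)))

for _ in range(20):                               # online training passes
    for v in samples.values():
        w = winner(v)                             # find best-matching unit
        protos[w] = [p + 0.1 * (x - p)            # drift it toward the sample
                     for p, x in zip(protos[w], v)]

# After training, actions of the same body part share a map location.
assert winner(samples["wag-tail"]) == winner(samples["nod"])
assert winner(samples["grasp"]) == winner(samples["lift"])
```
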

  10. Surgical bedside master console for neurosurgical robotic system.

    PubMed

    Arata, Jumpei; Kenmotsu, Hajime; Takagi, Motoki; Hori, Tatsuya; Miyagi, Takahiro; Fujimoto, Hideo; Kajita, Yasukazu; Hayashi, Yuichiro; Chinzei, Kiyoyuki; Hashizume, Makoto

    2013-01-01

    We are currently developing a neurosurgical robotic system that facilitates access to residual tumors and improves brain tumor removal surgical outcomes. The system combines conventional and robotic surgery allowing for a quick conversion between the procedures. This concept requires a new master console that can be positioned at the surgical bedside and be sterilized. The master console was developed using new technologies, such as a parallel mechanism and pneumatic sensors. The parallel mechanism is a purely passive 5-DOF (degrees of freedom) joystick based on the author's haptic research. The parallel mechanism enables motion input of conventional brain tumor removal surgery with a compact, intuitive interface that can be used in a conventional surgical environment. In addition, the pneumatic sensors implemented on the mechanism provide an intuitive interface and electrically isolate the tool parts from the mechanism so they can be easily sterilized. The 5-DOF parallel mechanism is compact (17 cm width, 19cm depth, and 15cm height), provides a 505,050 mm and 90° workspace and is highly backdrivable (0.27N of resistance force representing the surgical motion). The evaluation tests revealed that the pneumatic sensors can properly measure the suction strength, grasping force, and hand contact. In addition, an installability test showed that the master console can be used in a conventional surgical environment. The proposed master console design was shown to be feasible for operative neurosurgery based on comprehensive testing. This master console is currently being tested for master-slave control with a surgical robotic system.

  11. Intelligent robot trends and predictions for the .net future

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    2001-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent and future technical and economic trends. During the past twenty years the use of industrial robots that are equipped not only with precise motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. Intelligent robot products have been developed in many cases for factory automation and for some hospital and home applications. To reach an even higher degree of applications, the addition of learning may be required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. The adaptive critic is a good model for human learning. In general, the critic may be considered to be the human with the teach pendant, plant manager, line supervisor, quality inspector or the consumer. If the ultimate critic is the consumer, then the quality inspector must model the consumer's decision-making process and use this model in the design and manufacturing operations. Can the adaptive critic be used to advance intelligent robots? Intelligent robots have historically taken decades to be developed and reduced to practice. Methods for speeding this development include technology such as rapid prototyping and product development and government, industry and university cooperation.

  12. Intelligent robot trends and predictions for the first year of the new millennium

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    2000-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The current use of these machines in outer space, medicine, hazardous materials, defense applications and industry is being pursued with vigor. In factory automation, industrial robots can improve productivity, increase product quality and improve competitiveness. The computer and the robot have both been developed during recent times. The intelligent robot combines both technologies and requires a thorough understanding and knowledge of mechatronics. Today's robotic machines are faster, cheaper, more repeatable, more reliable and safer than ever. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has more than a billion-dollar market in the U.S. and is growing. Feasibility studies show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society. The fearful robot stories may help us prevent future disaster. The inspirational robot ideas may inspire the scientists of tomorrow. However, the intelligent robot ideas, which can be reduced to practice, will change the world.

  13. Reinforcement learning of periodical gaits in locomotion robots

    NASA Astrophysics Data System (ADS)

    Svinin, Mikhail; Yamada, Kazuyaki; Ushio, S.; Ueda, Kanji

    1999-08-01

    Emergence of stable gaits in locomotion robots is studied in this paper. A classifier system implementing an instance-based reinforcement learning scheme is used for sensory-motor control of an eight-legged mobile robot. An important feature of the classifier system is its ability to work with a continuous sensor space. The robot has no prior knowledge of the environment, its own internal model, or the goal coordinates. It is only assumed that the robot can acquire stable gaits by learning how to reach a light source. During the learning process the control system is self-organized by reinforcement signals. Reaching the light source yields a global reward; forward motion gets a local reward, while stepping back and falling down get a local punishment. Feasibility of the proposed self-organized system is tested in simulation and experiment. The control actions are specified at the leg level. It is shown that, as learning progresses, the number of action rules in the classifier system stabilizes at a certain level, corresponding to the acquired gait patterns.
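
    The reward scheme described above (global reward for the light source, local reward for forward motion, local punishment for stepping back or falling) can be written down directly; the magnitudes below are invented, as the paper does not give numeric values:

```python
def reinforcement(event):
    """Reinforcement signal for one gait event (magnitudes invented)."""
    rewards = {
        "reached-light": +10.0,   # global reward
        "forward-step":  +1.0,    # local reward
        "backward-step": -1.0,    # local punishment
        "fall":          -5.0,    # local punishment
    }
    return rewards[event]

episode = ["forward-step", "forward-step", "fall",
           "forward-step", "reached-light"]
total = sum(reinforcement(e) for e in episode)
print(total)   # net reinforcement the classifier system would learn from
```

    In the classifier system, these signals adjust the strengths of the leg-level action rules, which is how the rule set settles to the acquired gait patterns.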

  14. Development of a 3D Parallel Mechanism Robot Arm with Three Vertical-Axial Pneumatic Actuators Combined with a Stereo Vision System

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axis pneumatic actuators, with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the robot. Mechanically, the robot consists of three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is first designed and analyzed to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse and forward kinematics of the parallel mechanism are derived using the Denavit-Hartenberg (D-H) notation, and the pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking of the end-effector. Three optical linear scales measure the positions of the three pneumatic actuators, from which the 3D position of the end-effector is calculated by means of the kinematics. However, this calculated position does not account for manufacturing and assembly tolerances in the joints and the parallel mechanism, so errors exist between the actual and calculated 3D positions of the end-effector. To address this, sensor collaboration is developed in this paper: a stereo vision system, comprising two CCD cameras, collaborates with the three actuator position sensors to measure the actual 3D position of the end-effector and calibrate the error between the actual and calculated positions.
Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig is set up. Simulations and experiments for different complex 3D motion profiles of the robot end-effector are successfully achieved, and the desired, actual and calculated 3D positions of the end-effector are compared under complex 3D motion control. PMID:22247676
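The sensor-collaboration idea above can be sketched as a simple offset-correction scheme: the vision system reveals the true end-effector position at a few calibration poses, and the offsets it exposes are reused to correct kinematics-only estimates. Everything below (the nearest-pose lookup and all numbers) is invented for illustration and is not the paper's actual calibration procedure.

```python
import math

# Hypothetical sketch of sensor collaboration: stereo vision measures the
# true end-effector position at a few poses, and the offsets it reveals are
# reused to correct positions computed from joint sensors + kinematics.

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Calibration data: (position from kinematics, position seen by the stereo
# cameras). Values are made up for illustration, in metres.
calibration = [
    ((0.0, 0.0, 0.0), (0.002, -0.001, 0.000)),
    ((0.1, 0.0, 0.0), (0.103, -0.001, 0.001)),
    ((0.0, 0.1, 0.0), (0.002,  0.098, 0.000)),
]

def corrected(calculated):
    """Add the error offset observed at the nearest calibration pose."""
    calc_ref, vision_ref = min(calibration, key=lambda c: dist(c[0], calculated))
    offset = tuple(v - c for c, v in zip(calc_ref, vision_ref))
    return tuple(p + o for p, o in zip(calculated, offset))

example = corrected((0.04, 0.0, 0.0))  # inherits the offset seen near the origin
```

A real implementation would interpolate between calibration poses rather than copy the nearest offset, but the nearest-neighbour version shows the data flow.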

  15. An open architecture motion controller

    NASA Technical Reports Server (NTRS)

    Rossol, Lothar

    1994-01-01

    Nomad, an open architecture motion controller, is described. It is formed by a combination of TMOS, C-WORKS, and other utilities. Nomad software runs in a UNIX environment and provides for sensor-controlled robotic motions, with user replaceable kinematics. It can also be tailored for highly specialized applications. Open controllers such as Nomad should have a major impact on the robotics industry.

  16. A Robot Equipped with a High-Speed LSPR Gas Sensor Module for Collecting Spatial Odor Information from On-Ground Invisible Odor Sources.

    PubMed

    Yang, Zhongyuan; Sassa, Fumihiro; Hayashi, Kenshi

    2018-06-22

    Improving the efficiency of detecting the spatial distribution of gas information with a mobile robot is a great challenge that requires rapid sample collection, which is largely determined by the response speed of the gas sensors. The present work developed a robot equipped with a high-speed gas sensor module based on localized surface plasmon resonance. The sensor module is designed to sample gases from an on-ground odor source, such as a footprint material or artificial odor marker, via a fine sampling tubing. The tip of the sampling tubing was placed close to the ground to reduce the sampling time and the effect of natural gas diffusion. On-ground ethanol odor sources were detected by the robot at high resolution (i.e., 2.5 cm when the robot moved at 10 cm/s), and the reading of gas information was demonstrated experimentally. This work may help in the development of environmental sensing robots, such as odor source mapping and multirobot systems with pheromone tracing.

  17. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL) which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside of the loop closure of the robot tracking system, therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State of the art VME based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so there is special emphasis placed on this topic in the paper.

  18. Sensitive and Flexible Polymeric Strain Sensor for Accurate Human Motion Monitoring

    PubMed Central

    Khan, Hassan; Kottapalli, Ajay; Asadnia, Mohsen

    2018-01-01

    Flexible electronic devices offer the capability to integrate with and adapt to the human body. These devices are mountable on surfaces of various shapes, allowing us to attach them to clothes or directly onto the body. This paper presents a facile electrospinning-based fabrication strategy for a stretchable, sensitive poly(vinylidene fluoride) (PVDF) nanofibrous strain sensor for human motion monitoring. A complete characterization of the single PVDF nanofiber has been performed. The charge generated by the PVDF electrospun strain sensor was employed as the parameter controlling the finger motion of a robotic arm. As a proof of concept, we developed a smart glove with five integrated sensors to detect finger motion and transfer it to a robotic hand. Our results show that the proposed strain sensors can detect tiny finger motions and successfully drive the robotic hand. PMID:29389851

  19. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented, in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel calibration approach is proposed to improve the accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are used to determine the true projected centres of the circles. A calibration-point generation procedure is then carried out with the help of the calibrated robot. When enough calibration points are available, the radial alignment constraint (RAC) method is adopted to calibrate the camera model, and a multilayer perceptron neural network (MLPNN) is employed to identify the calibration residuals remaining after application of the RAC method. The real camera is thus represented by a hybrid of the pinhole model and the MLPNN. Using a standard ball to validate the technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
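The hybrid-model idea above (physical camera model plus a learned residual corrector) can be illustrated compactly. In this sketch a linear least-squares corrector stands in for the paper's MLPNN, and the focal length and synthetic distortion are invented.

```python
# Illustrative sketch of the hybrid-model idea: a pinhole projection gives a
# first estimate, and a residual model absorbs what the pinhole misses. A
# linear least-squares fit stands in here for the paper's MLPNN; the focal
# length and the synthetic distortion below are invented.

F = 500.0  # hypothetical focal length, in pixels

def pinhole(x, z):
    """Ideal pinhole projection of a point at lateral offset x, depth z."""
    return F * x / z

# Synthetic calibration data: the "observed" pixel coordinate contains a
# small systematic distortion the pinhole model alone cannot represent.
points = [(x, 2.0) for x in (-0.4, -0.2, 0.0, 0.2, 0.4)]
observed = [pinhole(x, z) * 1.02 + 3.0 for x, z in points]

# Fit residual = a*u + b by ordinary least squares on the calibration set.
us = [pinhole(x, z) for x, z in points]
rs = [o - u for o, u in zip(observed, us)]
n = len(us)
mean_u, mean_r = sum(us) / n, sum(rs) / n
a = sum((u - mean_u) * (r - mean_r) for u, r in zip(us, rs)) \
    / sum((u - mean_u) ** 2 for u in us)
b = mean_r - a * mean_u

def hybrid(x, z):
    """Pinhole estimate plus the learned residual correction."""
    u = pinhole(x, z)
    return u + a * u + b
```

An MLPNN plays the same role as `a*u + b` here, but can absorb nonlinear residuals (lens distortion, mounting error) that a linear corrector cannot.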

  20. Performance of a scanning laser line striper in outdoor lighting

    NASA Astrophysics Data System (ADS)

    Mertz, Christoph

    2013-05-01

    For search and rescue robots and reconnaissance robots it is important to detect objects in their vicinity. We have developed a scanning laser line striper that can produce dense 3D images using active illumination. The scanner consists of a camera and a MEMS-micro mirror based projector. It can also detect the presence of optically difficult material like glass and metal. The sensor can be used for autonomous operation or it can help a human operator to better remotely control the robot. In this paper we will evaluate the performance of the scanner under outdoor illumination, i.e. from operating in the shade to operating in full sunlight. We report the range, resolution and accuracy of the sensor and its ability to reconstruct objects like grass, wooden blocks, wires, metal objects, electronic devices like cell phones, blank RPG, and other inert explosive devices. Furthermore we evaluate its ability to detect the presence of glass and polished metal objects. Lastly we report on a user study that shows a significant improvement in a grasping task. The user is tasked with grasping a wire with the remotely controlled hand of a robot. We compare the time it takes to complete the task using the 3D scanner with using a traditional video camera.

  1. Automated Guided Vehicle For Physically Handicapped People - A Cost Effective Approach

    NASA Astrophysics Data System (ADS)

    Kumar, G. Arun, Dr.; Sivasubramaniam, Mr. A.

    2017-12-01

    An automated guided vehicle (AGV) is a robot-like vehicle that can deliver materials from the supply area to a technician automatically, which is faster and more efficient. The robot can be accessed wirelessly: a technician can control it directly to deliver components, rather than working through a human operator (by phone or computer) who must program the robot or ask a delivery person to make the delivery. The vehicle is guided automatically along its route. To avoid collisions, a proximity sensor is attached to the system; it senses obstacles and stops the vehicle in their presence. The vehicle can thus avoid accidents, which is very useful in current industrial practice, and material and equipment handling becomes an automated, time-saving process.

  2. Capaciflector-guided mechanisms

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1996-01-01

    A plurality of capaciflector proximity sensors, one or more of which may be overlaid on each other, and at least one shield are mounted on a device guided by a robot so as to see a designated surface, hole or raised portion of an object, for example, in three dimensions. Individual current-measuring voltage follower circuits interface the sensors and shield to a common AC signal source. As the device approaches the object, the sensors respond by a change in the currents therethrough. The currents are detected by the respective current-measuring voltage follower circuits with the outputs thereof being fed to a robot controller. The device is caused to move under robot control in a predetermined pattern over the object while directly referencing each other without any offsets, whereupon by a process of minimization of the sensed currents, the device is dithered or wiggled into position for a soft touchdown or contact without any prior contact with the object.

  3. Mapping From an Instrumented Glove to a Robot Hand

    NASA Technical Reports Server (NTRS)

    Goza, Michael

    2005-01-01

    An algorithm has been developed to solve the problem of mapping from (1) a glove instrumented with joint-angle sensors to (2) an anthropomorphic robot hand. Such a mapping is needed to generate control signals to make the robot hand mimic the configuration of the hand of a human attempting to control the robot. The mapping problem is complicated by uncertainties in sensor locations caused by variations in sizes and shapes of hands and variations in the fit of the glove. The present mapping algorithm is robust in the face of these uncertainties, largely because it includes a calibration sub-algorithm that inherently adapts the mapping to the specific hand and glove, without need for measuring the hand and without regard for goodness of fit. The algorithm utilizes a forward-kinematics model of the glove derived from documentation provided by the manufacturer of the glove. In this case, forward-kinematics model signifies a mathematical model of the glove fingertip positions as functions of the sensor readings. More specifically, given the sensor readings, the forward-kinematics model calculates the glove fingertip positions in a Cartesian reference frame nominally attached to the palm. The algorithm also utilizes an inverse-kinematics model of the robot hand. In this case, inverse-kinematics model signifies a mathematical model of the robot finger-joint angles as functions of the robot fingertip positions. Again, more specifically, the inverse-kinematics model calculates the finger-joint commands needed to place the fingertips at specified positions in a Cartesian reference frame that is attached to the palm of the robot hand and that nominally corresponds to the Cartesian reference frame attached to the palm of the glove. 
Initially, because of the aforementioned uncertainties, the glove fingertip positions calculated by the forward-kinematics model in the glove Cartesian reference frame cannot be expected to match the robot fingertip positions in the robot-hand Cartesian reference frame. A calibration must be performed to make the glove and robot-hand fingertip positions correspond more precisely. The calibration procedure involves a few simple hand poses designed to provide well-defined fingertip positions: one of the poses is a fist, and in each of the other poses a finger touches the thumb. The calibration sub-algorithm uses the sensor readings from these poses to modify the kinematic models so that the two sets of fingertip positions agree more closely.
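The pipeline above (glove forward kinematics, a calibration that absorbs fit and hand-size error, then robot inverse kinematics) can be sketched for a single one-link planar finger. All link lengths and the calibration pose are invented; the real algorithm handles full multi-joint fingers and several poses.

```python
import math

# Toy single-finger sketch of the mapping pipeline (all link lengths and
# the calibration pose are invented): glove forward kinematics ->
# calibration scale -> robot inverse kinematics.

GLOVE_LINK = 0.04   # assumed glove finger link length (m)
ROBOT_LINK = 0.05   # assumed robot finger link length (m)

def glove_fk(angle):
    """Fingertip distance from the knuckle for a 1-link planar finger."""
    return GLOVE_LINK * math.cos(angle)

def robot_ik(x):
    """Joint angle placing the robot fingertip at distance x."""
    return math.acos(max(-1.0, min(1.0, x / ROBOT_LINK)))

# Calibration: one well-defined pose (finger fully extended) reveals the
# scale between glove and robot workspaces, absorbing glove-fit error.
scale = ROBOT_LINK * math.cos(0.0) / glove_fk(0.0)

def glove_to_robot(sensor_angle):
    """Map a glove joint-sensor reading to a robot finger-joint command."""
    return robot_ik(scale * glove_fk(sensor_angle))
```

Because the calibration scale exactly aligns the two toy workspaces, `glove_to_robot` reproduces the glove angle; with a real glove the residual mismatch is what the multi-pose calibration minimizes.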

  4. Instrumented Compliant Wrist with Proximity and Contact Sensing for Close Robot Interaction Control.

    PubMed

    Laferrière, Pascal; Payeur, Pierre

    2017-06-14

    Compliance has been exploited in various forms in robotic systems to allow rigid mechanisms to come into contact with fragile objects, or with complex shapes that cannot be accurately modeled. Force feedback control has been the classical approach for providing compliance in robotic systems. However, by integrating other forms of instrumentation with compliance into a single device, it is possible to extend close monitoring of nearby objects before and after contact occurs. As a result, safer and smoother robot control can be achieved both while approaching and while touching surfaces. This paper presents the design and extensive experimental evaluation of a versatile, lightweight, and low-cost instrumented compliant wrist mechanism which can be mounted on any rigid robotic manipulator in order to introduce a layer of compliance while providing the controller with extra sensing signals during close interaction with an object's surface. Arrays of embedded range sensors provide real-time measurements on the position and orientation of surfaces, either located in proximity or in contact with the robot's end-effector, which permits close guidance of its operation. Calibration procedures are formulated to overcome inter-sensor variability and achieve the highest available resolution. A versatile solution is created by embedding all signal processing, while wireless transmission connects the device to any industrial robot's controller to support path control. Experimental work demonstrates the device's physical compliance as well as the stability and accuracy of the device outputs. Primary applications of the proposed instrumented compliant wrist include smooth surface following in manufacturing, inspection, and safe human-robot interaction.
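The abstract's claim that an array of embedded range sensors yields the position and orientation of a nearby surface can be made concrete: three non-collinear sensors each return a distance, and the plane through the three hit points gives the surface's offset and tilt. The sensor layout below is invented, not the device's actual geometry.

```python
# Hypothetical sketch of how a range-sensor array yields surface
# orientation: three sensors at known (x, y) offsets on the wrist face each
# return a distance z, and the plane z = a*x + b*y + c through the three
# hit points gives tilt (a, b) and standoff (c).

# Sensor offsets on the wrist face, in metres (invented layout).
SENSORS = [(0.03, 0.0), (-0.015, 0.026), (-0.015, -0.026)]

def surface_plane(ranges):
    """Fit z = a*x + b*y + c through the three measured points."""
    (x1, y1), (x2, y2), (x3, y3) = SENSORS
    z1, z2, z3 = ranges
    # Solve the 3x3 linear system by Cramer's rule.
    det = x1*(y2 - y3) - y1*(x2 - x3) + (x2*y3 - x3*y2)
    a = (z1*(y2 - y3) - y1*(z2 - z3) + (z2*y3 - z3*y2)) / det
    b = (x1*(z2 - z3) - z1*(x2 - x3) + (x2*z3 - x3*z2)) / det
    c = (x1*(y2*z3 - y3*z2) - y1*(x2*z3 - x3*z2) + z1*(x2*y3 - x3*y2)) / det
    return a, b, c
```

With more than three sensors the same fit becomes an overdetermined least-squares problem, which is also where per-sensor calibration offsets would be folded in.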

  5. Advanced Development for Space Robotics With Emphasis on Fault Tolerance Technology

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert

    1997-01-01

    This report describes work on developing fault-tolerant redundant robotic architectures and adaptive control strategies for robotic manipulator systems that can dynamically accommodate drastic manipulator mechanism, sensor, or control failures while maintaining stable end-point trajectory control with minimum disturbance. Kinematic designs of redundant, modular, reconfigurable arms for fault tolerance were pursued at a fundamental level. The approach developed robotic testbeds to evaluate the disturbance responses of fault-tolerant concepts in robotic mechanisms and controllers. The development was implemented in various fault-tolerant mechanism testbeds, including duality in the joint servo motor modules, parallel and serial structural architectures, and dual arms. All incorporate real-time adaptive controller technologies that react to mechanism or controller disturbances (failures) and reconfigure in real time to continue task operations. The developments fall into three main areas: hardware, software, and theory.

  6. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles

    PubMed Central

    Hsu, Bing-Cheng

    2018-01-01

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme. PMID:29757940

  7. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles.

    PubMed

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-05-14

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.
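The Gaussian-based performance criterion described in the two records above can be illustrated simply: summarize an image's intensity histogram by its fitted Gaussian parameters and score a surface by how closely they match those of expert-waxed panels. The reference values and scoring formula below are invented for the sketch; they are not the paper's calibrated criterion.

```python
import math

# Illustrative sketch (reference values and score formula invented):
# characterize a surface image by the Gaussian parameters of its intensity
# distribution, and score it against expert-waxed reference statistics.

REF_MEAN, REF_STD = 150.0, 12.0  # hypothetical expert-surface statistics

def gaussian_params(pixels):
    """Mean and standard deviation of the intensity samples."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, math.sqrt(var)

def waxing_score(pixels):
    """Score in (0, 1]: closer to 1 as the fit approaches the reference."""
    mean, std = gaussian_params(pixels)
    return 1.0 / (1.0 + abs(mean - REF_MEAN) / REF_STD
                      + abs(std - REF_STD) / REF_STD)
```

In the paper this image-based score is what the waxing force and dwell time are optimized against.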

  8. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector; in the robotics literature this is known as the sensor registration problem. The registration problem must be solved if TRAC images are to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, the robot may be unable to grapple an object even though the TRAC sensor indicates it should. Moreover, if the camera is far from its expected alignment, TRAC may give incorrect feedback for control of the robot: for example, if the operator thinks the camera is right side up when it is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot by arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.
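The registration idea (move the robot known amounts, observe how the image responds, and recover the camera-to-end-effector transform) can be sketched in a simplified planar form. This toy uses pure translations in 2D and is not the report's full 3D algorithm.

```python
import math

# Simplified planar sketch of sensor registration: translate the end
# effector along two known directions, observe the resulting image-plane
# displacements, and recover the rotation mapping camera coordinates back
# to robot coordinates.

def registration(d1, d2):
    """Rotation (robot->camera): columns are the normalized image
    displacements observed for unit robot motion along x, then y."""
    n1 = math.hypot(*d1)
    n2 = math.hypot(*d2)
    return ((d1[0] / n1, d2[0] / n2),
            (d1[1] / n1, d2[1] / n2))

def camera_to_robot(R, v):
    """Apply the transpose (inverse) of R to a camera-frame vector."""
    return (R[0][0] * v[0] + R[1][0] * v[1],
            R[0][1] * v[0] + R[1][1] * v[1])

# Example: a camera mounted upside down, so robot +x appears as image -x.
R = registration((-1.0, 0.0), (0.0, -1.0))
move = camera_to_robot(R, (0.0, -2.0))  # target appears "down" in the image
```

This directly resolves the abstract's upside-down-camera scenario: a target that appears to be downward in the image is correctly mapped to an upward robot motion.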

  9. Neural network-based landmark detection for mobile robot

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Minoru; Okada, Hiroyuki; Watanabe, Nobuo

    1996-03-01

    A mobile robot essentially has only relative position data about the real world, yet in many cases it must know where it is located. A useful method is then to detect landmarks in the real world and adjust the robot's position using them, so it is essential to develop a mobile robot that can execute its path plan using natural or artificial landmarks. However, artificial landmarks are often difficult to install, and natural landmarks are very complicated to detect. This paper describes a method of acquiring the landmarks needed for path planning from the mobile robot's sensor data. The landmarks discussed here are natural ones, formed by compressing the robot's sensor data. The sensor data are compressed and memorized using a five-layer neural network called a sand glass model: the network is trained with identical input and target output, namely the robot's sensor data, so the intermediate-layer output provides a compressed code that expresses a landmark. Even if the sensor data are ambiguous or voluminous, landmark detection is easy because the data are compressed and classified by the neural network. Using the last three layers, the compressed landmark data can be expanded back to approximately the original data. The trained neural network categorizes newly detected sensor data to the known landmarks.
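The sand-glass (bottleneck autoencoder) idea above can be sketched with a minimal linear autoencoder trained to reproduce its input, whose bottleneck activations serve as the compressed landmark code. The dimensions, learning rate and training data below are invented; the paper's network has five layers and nonlinearities.

```python
import random

# Minimal linear autoencoder sketch in the spirit of the sand-glass model
# (dimensions, learning rate and data invented): a 4-value sensor vector is
# squeezed through a 2-unit bottleneck; the bottleneck activations are the
# compressed landmark code.
random.seed(1)

IN, HID = 4, 2
W1 = [[random.uniform(-0.5, 0.5) for _ in range(IN)] for _ in range(HID)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(HID)] for _ in range(IN)]

def encode(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W1]

def decode(h):
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

def train_step(x, lr=0.05):
    """One SGD step on the reconstruction error; returns the squared loss."""
    h = encode(x)
    y = decode(h)
    err = [yi - xi for yi, xi in zip(y, x)]
    # Error back-propagated to the bottleneck (using the current W2).
    back = [sum(W2[i][j] * err[i] for i in range(IN)) for j in range(HID)]
    for i in range(IN):
        for j in range(HID):
            W2[i][j] -= lr * err[i] * h[j]
    for j in range(HID):
        for k in range(IN):
            W1[j][k] -= lr * back[j] * x[k]
    return sum(e * e for e in err)

# Sensor readings that really lie on a 2-D subspace, so a 2-unit
# bottleneck can represent them with little reconstruction loss.
data = [[a, b, a + b, a - b] for a in (0.1, 0.5) for b in (0.2, 0.9)]
losses = [sum(train_step(x) for x in data) for _ in range(200)]
```

After training, `encode` compresses a reading to a 2-value landmark code, and `decode` (the paper's "backward layers") expands it back to approximately the original data.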

  10. The WCSAR telerobotics test bed

    NASA Technical Reports Server (NTRS)

    Duffie, N.; Zik, J.; Teeter, R.; Crabb, T.

    1988-01-01

    Component technologies for use in telerobotic systems for space are being developed. As part of this effort, a test bed was established in which these technologies can be verified and integrated into telerobotic systems. The facility consists of two slave industrial robots, an articulated master arm controller, a Cartesian coordinate master arm controller, and a variety of sensors, displays and stimulators for feedback to human operators. The controller of one of the slave robots remains in its commercial state, while the controller of the other has been replaced with a new controller that achieves high performance in telerobotic operating modes. A dexterous slave hand consisting of two fingers and a thumb is being developed, along with a number of force-reflecting and non-force-reflecting master hands, wrists and arms. A tactile sensing fingertip based on piezo-film technology has been developed, along with tactile stimulators and CAD-based displays for sensory feedback and sensory substitution. The telerobotics test bed and its component technologies are described, as well as the integration of these component technologies into telerobotic systems and their performance in conjunction with human operators.

  11. On the development of a reactive sensor-based robotic system

    NASA Technical Reports Server (NTRS)

    Hexmoor, Henry H.; Underwood, William E., Jr.

    1989-01-01

    Flexible robotic systems for space applications need to use local information to guide their action in uncertain environments where the state of the environment and even the goals may change. They have to be tolerant of unexpected events and robust enough to carry their task to completion. Tactical goals should be modified while maintaining strategic goals. Furthermore, reactive robotic systems need to have a broader view of their environments than sensory-based systems. An architecture and a theory of representation extending the basic cycles of action and perception are described. This scheme allows for dynamic description of the environment and determining purposive and timely action. Applications of this scheme for assembly and repair tasks using a Universal Machine Intelligence RTX robot are being explored, but the ideas are extendable to other domains. The nature of reactivity for sensor-based robotic systems and implementation issues encountered in developing a prototype are discussed.

  12. RoCoMAR: robots' controllable mobility aided routing and relay architecture for mobile sensor networks.

    PubMed

    Le, Duc Van; Oh, Hoon; Yoon, Seokhoon

    2013-07-05

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. To tackle the problem of low network performance and provide the desired end-to-end data transfer quality, this paper proposes a novel ad hoc routing and relaying architecture, RoCoMAR (Robots' Controllable Mobility Aided Routing), that exploits robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing network throughput: the lowest-quality link on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no further gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance, adjusting its position according to the movements of the regular nodes. Simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay.

  13. RoCoMAR: Robots' Controllable Mobility Aided Routing and Relay Architecture for Mobile Sensor Networks

    PubMed Central

    Van Le, Duc; Oh, Hoon; Yoon, Seokhoon

    2013-01-01

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. To tackle the problem of low network performance and provide the desired end-to-end data transfer quality, this paper proposes a novel ad hoc routing and relaying architecture, RoCoMAR (Robots' Controllable Mobility Aided Routing), that exploits robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing network throughput: the lowest-quality link on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no further gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance, adjusting its position according to the movements of the regular nodes. Simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay. PMID:23881134
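The core link-reinforcement step in the two RoCoMAR records above can be sketched simply: find the weakest link on the route and place a robotic relay between its endpoints, splitting it into two shorter, higher-quality links. The distance-based quality model and midpoint placement below are invented simplifications of the paper's optimal-position computation.

```python
import math

# Simplified sketch of link reinforcement (the quality model and the
# midpoint placement rule are invented): locate the weakest link on the
# route and insert a robotic relay between its endpoints.

def quality(a, b):
    """Toy link quality: decays with distance between node positions."""
    return 1.0 / (1.0 + math.dist(a, b))

def weakest_link(path):
    """Index i such that path[i] -> path[i+1] has the lowest quality."""
    return min(range(len(path) - 1), key=lambda i: quality(path[i], path[i + 1]))

def reinforce(path):
    """Insert a relay at the midpoint of the weakest link."""
    i = weakest_link(path)
    a, b = path[i], path[i + 1]
    relay = tuple((ai + bi) / 2 for ai, bi in zip(a, b))
    return path[:i + 1] + [relay] + path[i + 1:]

route = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0), (6.0, 0.0)]
better = reinforce(route)  # the long middle hop gets a relay
```

Repeating this step until the bottleneck link no longer improves mirrors the paper's iterate-until-no-gain loop; adaptive maintenance then keeps the relay near the moving endpoints.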

  14. Enhanced control and sensing for the REMOTEC ANDROS Mk VI robot. CRADA final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spelt, P.F.; Harvey, H.W.

    1998-08-01

    This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.

  15. Robot navigation research at CESAR (Center for Engineering Systems Advanced Research)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, D.L.; de Saussure, G.; Pin, F.G.

    1989-01-01

    A considerable amount of work has been reported on the problem of robot navigation in known static terrains. Algorithms have been proposed and implemented to search for an optimum path to the goal, taking into account the finite size and shape of the robot. Not as much work has been reported on robot navigation in unknown, unstructured, or dynamic environments. A robot navigating in an unknown environment must explore with its sensors, construct an abstract representation of its global environment to plan a path to the goal, and update or revise its plan based on accumulated data obtained and processed in real time. The core of the navigation program for the CESAR robots is a production system developed on the expert-system shell CLIPS, which runs on an NCUBE hypercube on board the robot. The production system can call C-compiled navigation procedures, and the production rules can read the sensor data and address the robot's effectors. This architecture was found efficient and flexible for the development and testing of the navigation algorithms; however, in order to process unexpected emergencies intelligently, it was found necessary to be able to control the production system through externally generated asynchronous data. This led to the design of a new asynchronous production system, APS, which is now being developed on the robot. This paper reviews some of the navigation algorithms developed and tested at CESAR and discusses the need for the new APS and how it is being integrated into the robot architecture. 18 refs., 3 figs., 1 tab.

  16. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric feature of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors mounted on the underside of the robot in an equilateral-triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation of inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
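The correction step can be sketched as snapping one coordinate of the drifting inertial estimate to the nearest grid line whenever a crossing is detected (the tile pitch and interface below are assumptions, not the paper's values):

```python
# Sketch of grid-line correction for inertial localization. The paper's
# actual algorithm uses three fiber-optic sensors in a triangle to
# classify line direction; here that classification is taken as given.

TILE = 0.5  # grid pitch in meters (assumed)

def correct_on_crossing(est_x, est_y, line_dir):
    """Snap one coordinate to the nearest grid line on a crossing event.

    line_dir: 'vertical' lines fix x, 'horizontal' lines fix y.
    """
    if line_dir == "vertical":
        est_x = round(est_x / TILE) * TILE
    elif line_dir == "horizontal":
        est_y = round(est_y / TILE) * TILE
    return est_x, est_y

# Inertial estimate drifted to (1.47, 2.02); a vertical-line crossing
# rectifies x to the nearest line at 1.5 m while y is left untouched.
x, y = correct_on_crossing(1.47, 2.02, "vertical")
```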

  17. Highly stretchable and wearable graphene strain sensors with controllable sensitivity for human motion monitoring.

    PubMed

    Park, Jung Jin; Hyun, Woo Jin; Mun, Sung Cik; Park, Yong Tae; Park, O Ok

    2015-03-25

    Because of their outstanding electrical and mechanical properties, graphene strain sensors have attracted extensive attention for electronic applications in virtual reality, robotics, medical diagnostics, and healthcare. Although several strain sensors based on graphene have been reported, the stretchability and sensitivity of these sensors remain limited, and a practical fabrication process is still needed. This paper reports the fabrication and characterization of new types of graphene strain sensors based on stretchable yarns. Highly stretchable, sensitive, and wearable sensors are realized by a layer-by-layer assembly method that is simple, low-cost, scalable, and solution-processable. Because of the yarn structures, these sensors exhibit high stretchability (up to 150%) and versatility, and can detect both large- and small-scale human motions. For this study, wearable electronics are fabricated with implanted sensors that can monitor diverse human motions, including joint movement, phonation, swallowing, and breathing.

  18. Mathematical model for adaptive control system of ASEA robot at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Zia, Omar

    1989-01-01

    The dynamic properties and the mathematical model for the adaptive control of the robotic system presently under investigation at the Robotic Application and Development Laboratory at Kennedy Space Center are discussed. NASA is currently investigating the use of robotic manipulators for mating and demating of fuel lines to the Space Shuttle Vehicle prior to launch. The robotic system used as a testbed for this purpose is an ASEA IRB-90 industrial robot with adaptive control capabilities. The system was tested, and its performance with respect to stability was improved by using an analogue force controller. The objective of this research project is to determine the mathematical model of the system operating under force feedback control with varying dynamic internal perturbation in order to provide continuous stable operation under variable load conditions. A series of lumped parameter models are developed. The models include some effects of robot structural dynamics, sensor compliance, and workpiece dynamics.

  19. Implementing real-time robotic systems using CHIMERA II

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1990-01-01

    A description is given of the CHIMERA II programming environment and operating system, which was developed for implementing real-time robotic systems. Sensor-based robotic systems contain both general- and special-purpose hardware, and thus the development of applications tends to be a time-consuming task. The CHIMERA II environment is designed to reduce the development time by providing a convenient software interface between the hardware and the user. CHIMERA II supports flexible hardware configurations which are based on one or more VME-backplanes. All communication across multiple processors is transparent to the user through an extensive set of interprocessor communication primitives. CHIMERA II also provides a high-performance real-time kernel which supports both deadline and highest-priority-first scheduling. The flexibility of CHIMERA II allows hierarchical models for robot control, such as NASREM, to be implemented with minimal programming time and effort.
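The two scheduling policies CHIMERA II supports, deadline scheduling and highest-priority-first, can be contrasted with a toy task set (task fields and values below are invented for illustration):

```python
# Toy contrast of earliest-deadline-first vs. highest-priority-first
# task selection. Task tuples: (name, priority, deadline_ms), where a
# larger priority number means more urgent.

tasks = [
    ("servo_loop", 10, 50),   # high priority, but distant deadline
    ("vision",      3, 10),   # low priority, but imminent deadline
    ("telemetry",   1, 100),
]

def pick_edf(tasks):
    """Earliest-deadline-first: run the task whose deadline is nearest."""
    return min(tasks, key=lambda t: t[2])[0]

def pick_highest_priority(tasks):
    """Fixed-priority: run the task with the largest priority value."""
    return max(tasks, key=lambda t: t[1])[0]

# The two policies disagree on this task set, which is exactly why a
# kernel may offer both.
edf_choice = pick_edf(tasks)                 # "vision"
fp_choice = pick_highest_priority(tasks)     # "servo_loop"
```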

  20. Modeling and sensory feedback control for space manipulators

    NASA Technical Reports Server (NTRS)

    Masutani, Yasuhiro; Miyazaki, Fumio; Arimoto, Suguru

    1989-01-01

    The positioning control problem of the endtip of space manipulators whose base is uncontrolled is examined. In such a case, the conventional control method for industrial robots based on a local feedback at each joint is not applicable, because the solution of the joint displacements that satisfies a given position and orientation of the endtip is not uniquely determined. A sensory feedback control scheme for space manipulators based on an artificial potential defined in task-oriented coordinates is proposed. Using this scheme, the controller can easily determine the input torque of each joint from the data of an external sensor such as a visual device. Since the external sensor is mounted on the unfixed base, the manipulator must track the moving image of the target in sensor coordinates. Moreover, the dynamics of the base and the manipulator are interactive. However, the endtip is proven to asymptotically approach the stationary target in an inertial coordinate frame by Lyapunov's method. Finally, results of computer simulation for a 6-link space manipulator model show the effectiveness of the proposed scheme.
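Artificial-potential schemes of this kind are commonly realized as a task-space spring-damper law mapped through the Jacobian transpose. A minimal sketch (the gains, Jacobian, and error values are invented, not the paper's model):

```python
# Task-space potential-field control sketch:
#   tau = -J^T * k * e  -  d * qdot
# where e is the endtip error measured by the external sensor.

def potential_torque(J, e, qdot, k=2.0, d=0.5):
    """tau_i = -k * sum_j J[j][i] * e[j]  -  d * qdot_i"""
    n = len(qdot)
    return [-k * sum(J[j][i] * e[j] for j in range(len(e))) - d * qdot[i]
            for i in range(n)]

# 2-link planar example with a hypothetical Jacobian and error:
J = [[1.0, 0.5],
     [0.0, 1.0]]
e = [0.1, -0.2]       # endtip error in task coordinates
qdot = [0.0, 0.0]     # joints at rest, so no damping contribution
tau = potential_torque(J, e, qdot)
```

The torque pulls the endtip "downhill" on the potential; with the damping term, the closed loop is the kind of system whose convergence is typically shown by a Lyapunov argument.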

  1. Control of motion stability of the line tracer robot using fuzzy logic and kalman filter

    NASA Astrophysics Data System (ADS)

    Novelan, M. S.; Tulus; Zamzami, E. M.

    2018-03-01

    The motion and balance control of a two-wheeled line tracer robot combines the concept of a two-wheeled balancing robot with that of a line-follower robot. The main objective of this research is to keep the robot upright while it moves along the guide line. Because the motion measurements are affected by noise, an estimator is required to estimate the line tracer robot's motion. The estimation is performed with a Kalman filter, and with a combination of fuzzy logic and a Kalman filter (a Fuzzy Kalman Filter), as well as optimal smoothing. Based on the results of the study, the sensor input values are filtered before entering the fuzzy calculation. However, the output of the fuzzy logic, using triangular membership functions, was not yet able to control the DC motors well enough to keep the robot balanced while running.
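The pre-filtering step described can be illustrated with a minimal scalar Kalman filter applied to the line-position reading before it enters the fuzzy controller (all gains and measurements below are assumed):

```python
# Minimal scalar Kalman filter for smoothing a noisy sensor reading.
# q: process noise variance, r: measurement noise variance (assumed).

def kalman_step(x, p, z, q=0.01, r=0.5):
    """One predict/update cycle for a constant-state model."""
    p = p + q                  # predict: state unchanged, variance grows
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # update with measurement z
    p = (1 - k) * p            # updated estimate variance
    return x, p

x, p = 0.0, 1.0                # initial estimate and variance
for z in [1.2, 0.9, 1.1, 1.0]:  # noisy line-position readings near 1.0
    x, p = kalman_step(x, p, z)
# x converges toward the true value while p (uncertainty) shrinks.
```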

  2. Promoting Diversity in Undergraduate Research in Robotics-Based Seismic

    NASA Astrophysics Data System (ADS)

    Gifford, C. M.; Arthur, C. L.; Carmichael, B. L.; Webber, G. K.; Agah, A.

    2006-12-01

    The motivation for this research was to investigate forming evenly-spaced grid patterns with a team of mobile robots for future use in seismic imaging in polar environments. A team of robots was incrementally designed and simulated by incorporating sensors and altering each robot's controller. Challenges, design issues, and efficiency were also addressed. This research project incorporated the efforts of two undergraduate REU students from Elizabeth City State University (ECSU) in North Carolina, and the research staff at the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas. ECSU is a historically black university. Mentoring these two minority students in scientific research, seismic imaging, robotics, and simulation will hopefully encourage them to pursue graduate degrees in science-related or engineering fields. The goals for this 10-week internship during summer 2006 were to educate the students in the fields of seismology, robotics, and virtual prototyping and simulation. Incrementally designing a robot platform for future enhancement and evaluation was central to this research, and involved simulation of several robots working together to change seismic grid shape and spacing. This process gave these undergraduate students experience and knowledge in an actual research project for a real-world application. The two undergraduate students gained valuable research experience and advanced their knowledge of seismic imaging, robotics, sensors, and simulation. They learned that seismic sensors can be used in an array to gather 2D and 3D images of the subsurface. They also learned that robotics can support dangerous or difficult human activities, such as those in a harsh polar environment, by increasing automation, robustness, and precision. Simulating robot designs also gave them experience in programming behaviors for mobile robots. Thus far, one academic paper has resulted from their research. This paper received third place at the 2006 National Technical Association's (NTA) National Conference in Chicago. CReSIS, in conjunction with ECSU, provided these minority students with a well-rounded educational experience in a real-world research project. Their contributions will be used for future projects.

  3. Output feedback control for a class of nonlinear systems with actuator degradation and sensor noise.

    PubMed

    Ai, Weiqing; Lu, Zhenli; Li, Bin; Fei, Shumin

    2016-11-01

    This paper investigates the output feedback control problem of a class of nonlinear systems with sensor noise and actuator degradation. Firstly, by using the descriptor observer approach, the original system is transformed into a descriptor system. On the basis of the descriptor system, a novel Proportional Derivative (PD) observer is developed to asymptotically estimate sensor noise and system state simultaneously. Then, by designing an adaptive law to estimate the actuator effectiveness, an adaptive observer-based controller is constructed to ensure that the system state can be regulated to the origin asymptotically. Finally, the design scheme is applied to address a flexible joint robot link problem.

  4. Fabrication and characterization of bending and pressure sensors for a soft prosthetic hand

    NASA Astrophysics Data System (ADS)

    Rocha, Rui Pedro; Alhais Lopes, Pedro; de Almeida, Anibal T.; Tavakoli, Mahmoud; Majidi, Carmel

    2018-03-01

    We demonstrate fabrication, characterization, and implementation of ‘soft-matter’ pressure and bending sensors for a soft robotic hand. The elastomer-based sensors are embedded in a robot finger composed of a 3D printed endoskeleton and covered by an elastomeric skin. Two types of sensors are evaluated, resistive pressure sensors and capacitive pressure sensors. The sensor is fabricated entirely out of insulating and conductive rubber, the latter composed of polydimethylsiloxane (PDMS) elastomer embedded with a percolating network of structured carbon black (CB). The sensor-integrated fingers have a simple materials architecture, can be fabricated with standard rapid prototyping methods, and are inexpensive to produce. When incorporated into a robotic hand, the CB-PDMS sensors and PDMS carrier medium function as an ‘artificial skin’ for touch and bend detection. Results show improved response with a capacitive sensor architecture, which, unlike a resistive sensor, is robust to electromechanical hysteresis, creep, and drift in the CB-PDMS composite. The sensorized fingers are integrated in an anthropomorphic hand and results for a variety of grasping tasks are presented.

  5. Advanced wireless mobile collaborative sensing network for tactical and strategic missions

    NASA Astrophysics Data System (ADS)

    Xu, Hao

    2017-05-01

    In this paper, an advanced wireless mobile collaborative sensing network is developed. By properly combining a wireless sensor network, emerging mobile robots, and multi-antenna sensing/communication techniques, we demonstrate the superiority of the developed sensing network. Concretely, heterogeneous mobile robots, including an unmanned aerial vehicle (UAV) and an unmanned ground vehicle (UGV), are equipped with multi-modal sensors and wireless transceiver antennas. Through real-time collaborative formation control, multiple mobile robots can form the formation that provides the most accurate sensing results. Moreover, the robot formation can also act as a multiple-input multiple-output (MIMO) communication system, providing a reliable, high-performance communication network.

  6. Design and Implementation of Multifunctional Automatic Drilling End Effector

    NASA Astrophysics Data System (ADS)

    Wang, Zhanxi; Qin, Xiansheng; Bai, Jing; Tan, Xiaoqun; Li, Jing

    2017-03-01

    In order to realize automatic drilling in aircraft assembly, a drilling end effector is designed by integrating the pressure unit, drilling unit, measurement unit, control system and frame structure. In order to reduce the hole deviation, this paper proposes a vertical normal adjustment program based on 4 laser distance sensors. The actual normal direction of the workpiece surface can be calculated from the sensors' measurements, and the robot posture is then adjusted to correct the hole deviation. A reference-based detection method is proposed to detect and locate the hole automatically by using the camera and the reference hole. The experiment results show that the position accuracy of the system is better than 0.3 mm, and the normal precision is better than 0.5°. The drilling end effector and robot can greatly improve the assembly efficiency and quality of aircraft parts, and reduce the product development cycle.
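The normal-adjustment idea can be illustrated by recovering a surface normal from laser-measured surface points (the sensor geometry and ranges below are invented; the end effector's actual layout and algorithm may differ):

```python
# Sketch: each laser sensor at a known (x, y) offset measures a range
# along z, giving a surface point; the plane through three of the
# points yields the surface normal the robot should align with.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def surface_normal(points):
    """Unit normal of the plane through the first three points."""
    n = cross(sub(points[1], points[0]), sub(points[2], points[0]))
    mag = sum(c * c for c in n) ** 0.5
    return tuple(c / mag for c in n)

# Sensors at the corners of a square; equal ranges mean the tool is
# already normal to the surface (normal along z).
pts = [(0, 0, 0.2), (0.1, 0, 0.2), (0.1, 0.1, 0.2), (0, 0.1, 0.2)]
n = surface_normal(pts)
```

With four sensors, the fourth point can serve as a consistency check or feed a least-squares plane fit for robustness.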

  7. A New Controller for a Smart Walker Based on Human-Robot Formation

    PubMed Central

    Valadão, Carlos; Caldeira, Eliete; Bastos-Filho, Teodiano; Frizera-Neto, Anselmo; Carelli, Ricardo

    2016-01-01

    This paper presents the development of a smart walker that uses a formation controller in its displacements. Encoders, a laser range finder and ultrasound are the sensors used in the walker. The control actions are based on the user (human) location, who is the actual formation leader. There is neither a sensor attached to the user’s body nor force sensors attached to the arm supports of the walker, and thus, the control algorithm projects the measurements taken from the laser sensor into the user reference and, then, calculates the linear and angular walker’s velocity to keep the formation (distance and angle) in relation to the user. An algorithm was developed to detect the user’s legs, whose distances from the laser sensor provide the information necessary to the controller. The controller was theoretically analyzed regarding its stability, simulated and validated with real users, showing accurate performance in all experiments. In addition, safety rules are used to check both the user and the device conditions, in order to guarantee that the user will not have any risks when using the smart walker. The applicability of this device is for helping people with lower limb mobility impairments. PMID:27447634
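The formation-keeping behavior can be sketched as a simple distance/angle error controller on the user's position as seen by the laser (gains, desired distance, and the exact control law are assumptions, not the paper's validated controller):

```python
# Sketch of a leader-follower formation law for the smart walker:
# regulate the measured distance and angle to the user toward desired
# values, producing linear and angular velocity commands.
import math

def formation_control(dist, angle, d_des=0.6, a_des=0.0,
                      k_v=1.0, k_w=2.0):
    """Return (linear v, angular w) driving the formation errors to 0."""
    v = k_v * (dist - d_des) * math.cos(angle)  # close the distance gap
    w = k_w * (angle - a_des)                   # turn toward the user
    return v, w

# User has drifted 0.2 m too far ahead and 0.1 rad to one side:
v, w = formation_control(dist=0.8, angle=0.1)
# The walker speeds up (v > 0) and turns toward the user (w > 0).
```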

  8. Force Sensing Resistor (FSR): a brief overview and the low-cost sensor for active compliance control

    NASA Astrophysics Data System (ADS)

    Sadun, A. S.; Jalani, J.; Sukor, J. A.

    2016-07-01

    Force Sensing Resistor (FSR) sensors are devices that allow measuring static and dynamic forces applied to a contact surface; their response depends primarily on the variation of their electric resistance. Flexiforce and Interlink are two common types of FSR sensors that are cheap and easily found on the market. Studies have shown that FSR sensors are usually applied in robotic grippers and in biomechanical fields. This paper provides a brief overview of the applications of FSR sensors. Subsequently, two different sets of experiments are carried out to test the effectiveness of the Flexiforce and Interlink sensors: first, a hardness detector system (Case Study A), and second, a force-position control system (Case Study B). The hardware used for the experiments was developed from low-cost materials. The results revealed that both FSR sensors are sufficient and reliable to provide a good sensing modality, particularly for measuring force. Beyond their low cost, FSR sensors are very useful devices that are able to provide good active compliance control, particularly for a grasping robotic hand.
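An FSR is typically read through a voltage divider. A hedged sketch of the conversion chain (the component values and the linear conductance-to-force calibration are assumptions for illustration, not Interlink or Flexiforce specifications):

```python
# Sketch of reading an FSR through a voltage divider.

VCC = 5.0       # supply voltage (assumed)
R_FIXED = 10e3  # fixed divider resistor in ohms (assumed)

def fsr_resistance(v_out):
    """FSR resistance from the divider output (FSR on the high side)."""
    return R_FIXED * (VCC - v_out) / v_out

def force_estimate(r_fsr, k=5e4):
    """Toy calibration: conductance roughly proportional to force."""
    return k / r_fsr

r = fsr_resistance(2.5)   # mid-rail output implies R_fsr == R_FIXED
f = force_estimate(r)
```

Real devices are nonlinear and part-to-part variable, so a per-sensor calibration curve would replace the toy `force_estimate` here.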

  9. Slip detection with accelerometer and tactile sensors in a robotic hand model

    NASA Astrophysics Data System (ADS)

    Al-Shanoon, Abdulrahman Abdulkareem S.; Anom Ahmad, Siti; Hassan, Mohd. Khair b.

    2015-11-01

    Grasp planning is an interesting issue in studies dedicated to tactile sensors. This study investigated the physical force interaction between a tactile pressure sensor and a particular object, characterized object slipping during gripping operations, and presented secure regripping of an object. Acceleration was analyzed using an accelerometer sensor to establish a completely autonomous robotic hand model. An automatic feedback control system was applied to regrip the object when it begins to slip. Empirical findings were presented on the detection and subsequent control of the slippage situation. These findings revealed the correlation between the distance the object slips and the force required to regrip it safely, a relationship similar to Hooke's law.
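The slip-detect-and-regrip loop, with its Hooke's-law-like relation between slip distance and added grip force, might be sketched as follows (the threshold and spring constant are invented; the abstract only states that the regrip force correlates with slip distance):

```python
# Sketch of slip detection and proportional regripping, F = k * x.

SLIP_ACCEL = 0.5   # m/s^2, accelerometer threshold for slip onset (assumed)
K_REGRIP = 40.0    # N/m, assumed proportionality constant

def regrip_force(base_force, accel, slip_dist):
    """Add k * slip_dist to the grip force whenever slip is detected."""
    if abs(accel) > SLIP_ACCEL:
        return base_force + K_REGRIP * slip_dist
    return base_force

f_slip = regrip_force(2.0, accel=0.9, slip_dist=0.05)   # slipping: tighten
f_hold = regrip_force(2.0, accel=0.1, slip_dist=0.05)   # stable: unchanged
```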

  10. A simple 5-DOF walking robot for space station application

    NASA Technical Reports Server (NTRS)

    Brown, H. Benjamin, Jr.; Friedman, Mark B.; Kanade, Takeo

    1991-01-01

    Robots on the NASA space station have a potential range of applications from assisting astronauts during EVA (extravehicular activity), to replacing astronauts in the performance of simple, dangerous, and tedious tasks, to performing routine tasks such as inspections of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station. In addition to the robot, an experimental testbed was developed, including a 1/3 scale (1.67 meter modules) truss and a gravity compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with a 2-degree-of-freedom wrist joint and gripper at each end. The grippers screw into threaded holes in the nodes of the space station truss and enable the robot to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, and coordination of robot/astronaut and multiple-robot teams.

  11. Evaluating the Dynamics of Agent-Environment Interaction

    DTIC Science & Technology

    2001-05-01

    a color sensor in the gripper, a radio transmitter/receiver for communication and data gathering, and an ultrasound/radio triangulation system for … 'Cooperative Mobile Robot Control', Autonomous Robots 4(4), 387-403. Vaughan, R. T., Støy, K., Sukhatme, G. S. & Mataric, M. J. (2000), 'Whistling in the Dark…'

  12. Localization and Mapping Using Only a Rotating FMCW Radar Sensor

    PubMed Central

    Vivet, Damien; Checchin, Paul; Chapuis, Roland

    2013-01-01

    Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surrounding. The consequence of using such a sensor in high speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed. PMID:23567523

  13. Localization and mapping using only a rotating FMCW radar sensor.

    PubMed

    Vivet, Damien; Checchin, Paul; Chapuis, Roland

    2013-04-08

    Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surrounding. The consequence of using such a sensor in high speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed.

  14. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  15. Localization and Tracking of Implantable Biomedical Sensors

    PubMed Central

    Umay, Ilknur; Fidan, Barış; Barshan, Billur

    2017-01-01

    Implantable sensor systems are effective tools for biomedical diagnosis, visualization and treatment of various health conditions, attracting the interest of researchers, as well as healthcare practitioners. These systems efficiently and conveniently provide essential data of the body part being diagnosed, such as gastrointestinal (temperature, pH, pressure) parameter values, blood glucose and pressure levels and electrocardiogram data. Such data are first transmitted from the implantable sensor units to an external receiver node or network and then to a central monitoring and control (computer) unit for analysis, diagnosis and/or treatment. Implantable sensor units are typically in the form of mobile microrobotic capsules or implanted stationary (body-fixed) units. In particular, capsule-based systems have attracted significant research interest recently, with a variety of applications, including endoscopy, microsurgery, drug delivery and biopsy. In such implantable sensor systems, one of the most challenging problems is the accurate localization and tracking of the microrobotic sensor unit (e.g., robotic capsule) inside the human body. This article presents a literature review of the existing localization and tracking techniques for robotic implantable sensor systems with their merits and limitations and possible solutions of the proposed localization methods. The article also provides a brief discussion on the connection and cooperation of such techniques with wearable biomedical sensor systems. PMID:28335384

  16. Nonholonomic Object Tracking with Optical Sensors and Object Recognition Feedback

    NASA Technical Reports Server (NTRS)

    Goddard, R. E.; Hadaegh, F.

    1994-01-01

    Robotic controllers frequently operate under constraints. Often, the constraints are imperfectly known or completely unknown. In this paper, the Lagrangian dynamics of a planar robot arm are expressed as a function of a globally unknown constraint.

  17. Remotely controlling of mobile robots using gesture captured by the Kinect and recognized by machine learning method

    NASA Astrophysics Data System (ADS)

    Hsu, Roy CHaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting

    2013-01-01

    The main purpose of this paper is to use a machine learning method and the Kinect body-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back propagation neural network to remotely command a mobile robot for certain actions via Bluetooth. The experimental results show that the designed mobile robot remote control system can achieve, on average, more than 96% accurate identification of 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
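The back-propagation training at the heart of such a gesture recognizer can be illustrated in its simplest form: a single sigmoid unit trained by gradient descent on invented two-feature "gestures" (the real system feeds Kinect skeleton joints into a larger network):

```python
# Toy backpropagation classifier: one sigmoid unit, cross-entropy loss.
# Features, labels, and hyperparameters are invented for illustration.
import math
import random

random.seed(0)

def train_logistic(data, epochs=2000, lr=0.5):
    """Train a single sigmoid unit by stochastic gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1 / (1 + math.exp(-z))      # sigmoid activation
            g = p - y                       # dLoss/dz for cross-entropy
            w[0] -= lr * g * x[0]           # backpropagate to weights
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

# Two linearly separable "gestures": hand raised (1) vs. lowered (0).
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train_logistic(data)

def classify(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```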

  18. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B. S.; Mahalingam, S.

    1992-01-01

    A capacitance-based proximity sensor, the 'Capaciflector' (Vranish 92), has been developed at the Goddard Space Flight Center of NASA. We had investigated the use of this sensor for avoiding and maneuvering around unexpected objects (Mahalingam 92). The approach developed there would help in executing collision-free gross motions. Another important aspect of robot motion planning is fine motion planning. Let us classify manipulator robot motion planning into two groups at the task level: gross motion planning and fine motion planning. We use the term 'gross motion' where the major degrees of freedom of the robot execute large motions, for example, the motion of a robot in a pick-and-place type operation. We use the term 'fine motion' to indicate motions of the robot where the major dofs do not move much, and move far less than the minor dofs, such as in inserting a peg in a hole. In this report we describe our experiments and experiences in this area.

  19. IECON '87: Industrial applications of control and simulation; Proceedings of the 1987 International Conference on Industrial Electronics, Control, and Instrumentation, Cambridge, MA, Nov. 3, 4, 1987

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T. (Editor)

    1987-01-01

    Recent advances in control-system design and simulation are discussed in reviews and reports. Among the topics considered are fast algorithms for generating near-optimal binary decision programs, trajectory control of robot manipulators with compensation of load effects via a six-axis force sensor, matrix integrators for real-time simulation, a high-level control language for an autonomous land vehicle, and a practical engineering design method for stable model-reference adaptive systems. Also addressed are the identification and control of flexible-limb robots with unknown loads, adaptive control and robust adaptive control for manipulators with feedforward compensation, adaptive pole-placement controllers with predictive action, variable-structure strategies for motion control, and digital signal-processor-based variable-structure controls.

  20. Data management for biofied building

    NASA Astrophysics Data System (ADS)

    Matsuura, Kohta; Mita, Akira

    2015-03-01

    Recently, smart houses have been studied by many researchers to satisfy the individual demands of residents. However, they are not feasible yet, as they are very costly and require many sensors to be embedded into houses. Therefore, we suggest the "Biofied Building". In a Biofied Building, sensor agent robots conduct sensing, actuation, and control in the house. The robots continuously monitor many parameters of human life, such as walking posture and emotion. In this paper, a prototype network system and a data model for practical application of the Biofied Building are proposed. In the system, the functions of robots and servers are divided according to service flows in Biofied Buildings. The data model is designed to accumulate both building data and residents' data. Data sent from the robots and data analyzed in the servers are automatically registered in the database. Lastly, the feasibility of this system is verified through a lighting control simulation performed in an office space.

  1. Selected aspects of microelectronics technology and applications: Numerically controlled machine tools. Technology trends series no. 2

    NASA Astrophysics Data System (ADS)

    Sigurdson, J.; Tagerud, J.

    1986-05-01

    A UNIDO publication about machine tools with automatic control discusses the following: (1) numerical control (NC) machine tool perspectives, definition of NC, flexible manufacturing systems, robots and their industrial application, research and development, and sensors; (2) experience in developing a capability in NC machine tools; (3) policy issues; (4) procedures for retrieval of relevant documentation from data bases. Diagrams, statistics, bibliography are included.

  2. Parallel Microcracks-based Ultrasensitive and Highly Stretchable Strain Sensors.

    PubMed

    Amjadi, Morteza; Turan, Mehmet; Clementson, Cameron P; Sitti, Metin

    2016-03-02

    There is an increasing demand for flexible, skin-attachable, and wearable strain sensors due to their various potential applications. However, achieving strain sensors with both high sensitivity and high stretchability is still a grand challenge. Here, we propose highly sensitive and stretchable strain sensors based on the reversible microcrack formation in composite thin films. Controllable parallel microcracks are generated in graphite thin films coated on elastomer films. Sensors made of graphite thin films with short microcracks possess high gauge factors (maximum value of 522.6) and stretchability (ε ≥ 50%), whereas sensors with long microcracks show ultrahigh sensitivity (maximum value of 11,344) with limited stretchability (ε ≤ 50%). We demonstrate the high performance strain sensing of our sensors in both small and large strain sensing applications such as human physiological activity recognition, human body large motion capturing, vibration detection, pressure sensing, and soft robotics.
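The gauge factors reported for such strain sensors are the relative resistance change per unit strain. A one-line computation with illustrative numbers (not the paper's measured data):

```python
# Gauge factor: GF = (dR / R0) / strain.

def gauge_factor(r0, r, strain):
    """Relative resistance change divided by applied strain."""
    return ((r - r0) / r0) / strain

# Illustrative values: a 50% resistance rise at 1% strain gives GF = 50.
gf = gauge_factor(r0=100.0, r=150.0, strain=0.01)
```

By this measure, the reported maxima (522.6 for short microcracks, 11,344 for long ones) are far above the GF of roughly 2 typical of metal-foil strain gauges.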

  3. Recent results in visual servoing

    NASA Astrophysics Data System (ADS)

    Chaumette, François

    2008-06-01

    Visual servoing techniques consist in using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties in terms of stability, robustness with respect to noise or calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
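The control-law design described above is classically written as v = −λ L⁺(s − s*), where L is the interaction matrix linking feature velocities to camera velocities. A minimal sketch (variable names are illustrative, not from the talk):

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classic image-based visual servoing law: v = -lambda * L^+ * (s - s*).

    s      -- current visual feature vector
    s_star -- desired feature vector
    L      -- interaction matrix (image Jacobian) relating feature
              velocities to camera velocities
    lam    -- proportional gain
    """
    e = s - s_star                        # feature error
    return -lam * np.linalg.pinv(L) @ e   # camera velocity command
```

Driving the error e exponentially to zero this way is the standard baseline; the choice of features (2D points, moments, 3D pose) changes L and hence the closed-loop properties discussed in the abstract.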

  4. Development and control of a magnetorheological haptic device for robot assisted surgery.

    PubMed

    Shokrollahi, Elnaz; Goldenberg, Andrew A; Drake, James M; Eastwood, Kyle W; Kang, Matthew

    2017-07-01

    A prototype magnetorheological (MR) fluid-based actuator has been designed for tele-robotic surgical applications. This device is capable of generating forces up to 47 N, with input currents ranging from 0 to 1.5 A. We begin by outlining the physical design of the device, and then discuss a novel nonlinear model of the device's behavior. The model was developed using the Hammerstein-Wiener (H-W) nonlinear black-box technique and is intended to accurately capture the hysteresis behavior of the MR-fluid. Several experiments were conducted on the device to collect estimation and validation datasets to construct the model and assess its performance. Different estimating functions were used to construct the model, and their effectiveness is assessed based on goodness-of-fit and final-prediction-error measurements. A sigmoid network was found to have a goodness-of-fit of 95%. The model estimate was then used to tune a PID controller. Two control schemes were proposed to eliminate the hysteresis behavior present in the MR fluid device. One method uses a traditional force feedback control loop and the other is based on measuring the magnetic field using a Hall-effect sensor embedded within the device. The Hall-effect sensor scheme was found to be superior in terms of cost, simplicity and real-time control performance compared to the force control strategy.
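The PID tuning step mentioned above follows the textbook discrete form; a minimal sketch of such a loop (the gains and the choice of feedback signal are assumptions, not from the paper):

```python
class PID:
    """Discrete PID controller; in this application the feedback signal
    could be the Hall-effect field measurement or the measured force."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

The paper's finding that Hall-sensor feedback outperformed force feedback concerns which measurement closes this loop, not the control law itself.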

  5. Application requirements for Robotic Nursing Assistants in hospital environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Doelling, Kris; Lundberg, Cody L.; McNair, Mike; Shin, Jeongsik; Popa, Dan

    2016-05-01

    In this paper we report on an analysis aimed at identifying design requirements for an Adaptive Robotic Nursing Assistant (ARNA). Specifically, the paper focuses on application requirements for ARNA, envisioned as a mobile assistive robot that can navigate hospital environments to perform chores in roles such as patient sitter and patient walker. The role of a sitter is primarily related to patient observation from a distance, and fetching objects at the patient's request, while a walker provides physical assistance for ambulation and rehabilitation. The robot will be expected to not only understand nurse and patient intent but also close the decision loop by automating several routine tasks. As a result, the robot will be equipped with sensors such as distributed pressure-sensitive skins, 3D range sensors, and so on. Modular sensor and actuator hardware configured in the form of several multi-degree-of-freedom manipulators, and a mobile base are expected to be deployed in reconfigurable platforms for physical assistance tasks. Furthermore, adaptive human-machine interfaces are expected to play a key role, as they directly impact the ability of robots to assist nurses in a dynamic and unstructured environment. This paper discusses required tasks for the ARNA robot, as well as the sensors and software infrastructure needed to carry out those tasks, in terms of technical resource availability, gaps, and needed experimental studies.

  6. Embedded diagnostic, prognostic, and health management system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Barajas, Leandro G. (Inventor); Strawser, Philip A. (Inventor); Sanders, Adam M. (Inventor); Reiland, Matthew J. (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot with multiple compliant joints, each moveable using one or more actuators, and having sensors for measuring control and feedback data. A distributed controller controls the joints and other integrated system components over multiple high-speed communication networks. Diagnostic, prognostic, and health management (DPHM) modules are embedded within the robot at the various control levels. Each DPHM module measures, controls, and records DPHM data for the respective control level/connected device in a location that is accessible over the networks or via an external device. A method of controlling the robot includes embedding a plurality of the DPHM modules within multiple control levels of the distributed controller, using the DPHM modules to measure DPHM data within each of the control levels, and recording the DPHM data in a location that is accessible over at least one of the high-speed communication networks.

  7. A technical challenge for robot-assisted minimally invasive surgery: precision surgery on soft tissue.

    PubMed

    Stallkamp, J; Schraft, R D

    2005-01-01

    In minimally invasive surgery, a higher degree of accuracy is required by surgeons both for current and for future applications. This could be achieved using either a manipulator or a robot which would undertake selected tasks during surgery. However, a manually controlled manipulator cannot fully exploit the maximum accuracy and feasibility of three-dimensional motion sequences. Therefore, apart from being used to perform simple positioning tasks, manipulators will probably be replaced by robot systems more and more in the future. However, in order to use a robot, accurate, up-to-date and extensive data are required which cannot yet be acquired by typical sensors such as CT, MRI, US or common x-ray machines. This paper deals with a new sensor and a concept for its application in robot-assisted minimally invasive surgery on soft tissue, which could be a solution for data acquisition in the future. Copyright 2005 Robotic Publications Ltd.

  8. Linear Temporal Logic (LTL) Based Monitoring of Smart Manufacturing Systems.

    PubMed

    Heddy, Gerald; Huzaifa, Umer; Beling, Peter; Haimes, Yacov; Marvel, Jeremy; Weiss, Brian; LaViers, Amy

    2015-01-01

    The vision of Smart Manufacturing Systems (SMS) includes collaborative robots that can adapt to a range of scenarios. This vision requires a classification of multiple system behaviors, or sequences of movement, that can achieve the same high-level tasks. Likewise, this vision presents unique challenges regarding the management of environmental variables in concert with discrete, logic-based programming. Overcoming these challenges requires targeted performance and health monitoring of both the logical controller and the physical components of the robotic system. Prognostics and health management (PHM) defines a field of techniques and methods that enable condition-monitoring, diagnostics, and prognostics of physical elements, functional processes, overall systems, etc. PHM is warranted in this effort given that the controller is vulnerable to program changes, which propagate in unexpected ways, logical runtime exceptions, sensor failure, and even bit rot. The physical components' health is affected by the wear and tear experienced by machines constantly in motion. The controller's faults are inherently discrete, while the physical components' wear builds up continuously over time. Such a disconnect poses unique challenges for PHM. This paper presents a robotic monitoring system that captures and resolves this disconnect. This effort leverages supervisory robotic control and model checking with linear temporal logic (LTL), presenting them as a novel monitoring system for PHM. This methodology has been demonstrated in a MATLAB-based simulator for an industry-inspired use case in the context of PHM. Future work will use the methodology to develop adaptive, intelligent control strategies to evenly distribute wear on the joints of the robotic arms, maximizing the life of the system.
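As a toy illustration of the LTL-monitoring idea (not the paper's supervisory-control framework), the temporal operators G ("globally") and F ("finally") can be evaluated over a finite logged trace of controller states; the predicates and trace below are hypothetical:

```python
def always(pred, trace):
    """Finite-trace check of G(pred): pred holds in every observed state."""
    return all(pred(state) for state in trace)

def eventually(pred, trace):
    """Finite-trace check of F(pred): pred holds in at least one state."""
    return any(pred(state) for state in trace)

# Hypothetical monitor: joint torque must always stay within limits,
# and the arm must eventually return to its home state.
trace = [{"torque": 3.1, "state": "move"},
         {"torque": 2.8, "state": "move"},
         {"torque": 0.2, "state": "home"}]
safe = always(lambda s: s["torque"] < 5.0, trace)
done = eventually(lambda s: s["state"] == "home", trace)
```

A model checker, as used in the paper, evaluates such formulas over all behaviors of a system model rather than a single recorded trace, which is what enables prognostic use.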

  9. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm.

    PubMed

    Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua

    2018-01-24

    Indoor occupants' positions are significant for smart home service systems, which usually consist of robot services, appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' positions in indoor environments based on passive infrared (PIR) sensors, using an accessibility map and an A-star algorithm, with the aim of providing intelligent services. First, the accessibility map, which reflects the visiting habits of the occupants, is established through training that integrates the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground-truth data was obtained from an OptiTrack system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provides a solution for home robot localization.

  10. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm

    PubMed Central

    Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua

    2018-01-01

    Indoor occupants' positions are significant for smart home service systems, which usually consist of robot services, appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' positions in indoor environments based on passive infrared (PIR) sensors, using an accessibility map and an A-star algorithm, with the aim of providing intelligent services. First, the accessibility map, which reflects the visiting habits of the occupants, is established through training that integrates the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground-truth data was obtained from an OptiTrack system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provides a solution for home robot localization. PMID:29364188
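Both records above refine PIR localization with A-star search; the algorithm itself is the standard best-first search over a map. A minimal sketch on a 4-connected occupancy grid (a simplification of the paper's map-based setup):

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """Minimal 4-connected grid A* with a Manhattan-distance heuristic.
    grid: 2D list, 0 = free, 1 = blocked. Returns a path of (row, col)
    cells from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                        # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:            # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:                 # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, node))
    return None
```

In the paper's setting the cells would presumably be weighted by the accessibility map rather than carrying uniform unit costs.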

  11. A fuzzy logic controller for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Yen, John; Pfluger, Nathan

    1993-01-01

    The ability of a mobile robot system to plan and move intelligently in a dynamic environment is needed if robots are to be useful in areas other than controlled environments. An example of a use for this system is to control an autonomous mobile robot in a space station, or in another isolated area where it is hard or impossible for human life to exist for long periods of time (e.g., Mars). The system would allow the robot to be programmed to carry out the duties normally accomplished by a human being. Some of the duties that could be accomplished include operating instruments, transporting objects, and maintaining the environment. The main focus of our early work has been on developing a fuzzy controller that takes a path and adapts it to a given environment. The robot only uses information gathered from the sensors, but retains the ability to avoid dynamically placed obstacles near and along the path. Our fuzzy logic controller is based on the following algorithm: (1) determine the desired direction of travel; (2) determine the allowed direction of travel; and (3) combine the desired and allowed directions to determine a direction that is both desired and allowed. The desired direction of travel is determined by projecting ahead to a point along the path that is closer to the goal. This gives a local direction of travel for the robot and helps to avoid obstacles.
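The three-step combination described above — desired direction, allowed direction, fuzzy intersection — can be sketched as follows (the headings and membership values are illustrative, not the paper's rule base):

```python
import numpy as np

def combine_directions(desired, allowed):
    """Fuzzy AND (element-wise minimum) of 'desired' and 'allowed'
    membership values over a set of candidate headings; the heading
    with the highest combined membership is chosen."""
    combined = np.minimum(desired, allowed)
    return int(np.argmax(combined)), combined

# Five candidate headings: hard left, left, straight, right, hard right.
desired = np.array([0.1, 0.3, 0.9, 0.4, 0.1])  # pull toward the path
allowed = np.array([1.0, 1.0, 0.2, 0.8, 1.0])  # obstacle blocks 'straight'
best, _ = combine_directions(desired, allowed)  # selects 'right'
```

The minimum operator implements the fuzzy conjunction "desired AND allowed", so a heading that is blocked by an obstacle can never win even if the path strongly favors it.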

  12. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensor systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line while avoiding obstacles.

  13. Planar and finger-shaped optical tactile sensors for robotic applications

    NASA Technical Reports Server (NTRS)

    Begej, Stefan

    1988-01-01

    Progress is described regarding the development of optical tactile sensors specifically designed for application to dexterous robotics. These sensors operate on optical principles involving the frustration of total internal reflection at a waveguide/elastomer interface and produce a grey-scale tactile image that represents the normal (vertical) forces of contact. The first tactile sensor discussed is a compact, 32 x 32 planar sensor array intended for mounting on a parallel-jaw gripper. Optical fibers were employed to convey the tactile image to a CCD camera and microprocessor-based image analysis system. The second sensor had the shape and size of a human fingertip and was designed for a dexterous robotic hand. It contained 256 sensing sites (taxels) distributed in a dual-density pattern that included a tactile fovea near the tip measuring 13 x 13 mm and containing 169 taxels. The design and construction details of these tactile sensors are presented, in addition to photographs of tactile imprints.

  14. A Neural Network Approach for Building An Obstacle Detection Model by Fusion of Proximity Sensors Data

    PubMed Central

    Peralta, Emmanuel; Vargas, Héctor; Hermosilla, Gabriel

    2018-01-01

    Proximity sensors are broadly used in mobile robots for obstacle detection. The traditional calibration process for this kind of sensor can be a time-consuming task because it is usually done by identification in a manual and repetitive way. The resulting obstacle detection models are usually nonlinear functions that can differ for each proximity sensor attached to the robot. In addition, the model is highly dependent on the type of sensor (e.g., ultrasonic or infrared), on changes in light intensity, and on the properties of the obstacle, such as shape, colour, and surface texture, among others. That is why in some situations it can be useful to gather all the measurements provided by different kinds of sensors in order to build a unique model that estimates the distances to the obstacles around the robot. This paper presents a novel approach to obtaining an obstacle detection model based on the fusion of sensor data and automatic calibration using artificial neural networks. PMID:29495338
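The fusion-and-calibration idea — learning a single mapping from heterogeneous sensor readings to obstacle distance — can be sketched with a small two-input network trained by gradient descent. The data, architecture and response curve below are stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: two "sensor" readings per sample; the target
# distance is a hypothetical nonlinear function of them.
X = rng.uniform(0.1, 1.0, size=(200, 2))
y = np.sin(3.0 * X[:, :1]) + 0.2 * X[:, 1:]

# One-hidden-layer network trained by plain batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.02

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return h, h @ W2 + b2             # predicted distance

_, pred = forward(X)
loss_before = float(np.mean((pred - y) ** 2))

for _ in range(500):
    h, pred = forward(X)
    grad = 2.0 * (pred - y) / len(X)  # dMSE/dpred
    dW2 = h.T @ grad; db2 = grad.sum(0)
    dh = (grad @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
loss_after = float(np.mean((pred - y) ** 2))
```

The appeal over per-sensor manual calibration is that one trained model absorbs the different nonlinear response curves of all the sensors at once.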

  15. Initial experiments on the end-point control of a flexible one-link robot

    NASA Technical Reports Server (NTRS)

    Cannon, R. H., Jr.; Schmitz, E.

    1984-01-01

    The present investigation is concerned with initial experiments regarding a specific unsolved control problem which appeared to be central to advances in the art of robotics. This problem involves the control of a flexible member (one link of a robot system). The position of the end-effector, called the end point or tip, is controlled by measuring that position and using the measurement as a basis for applying control torque to the other end of the flexible member, as for instance, the robot's elbow joint. A description is presented of the features of the first experimental arm which has been made, and an outline is provided of the general strategy for controlling it using its tip sensor and shoulder torquer.

  16. Active Sensing System with In Situ Adjustable Sensor Morphology

    PubMed Central

    Nurzaman, Surya G.; Culha, Utku; Brodbeck, Luzius; Wang, Liyu; Iida, Fumiya

    2013-01-01

    Background Despite the widespread use of sensors in engineering systems like robots and automation systems, the common paradigm is to have a fixed sensor morphology tailored to fulfill a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. In order to cope with this challenge, it is worth noting that biological systems show the importance of suitable sensor morphology and active sensing capability to handle different kinds of sensing tasks with particular requirements. Methodology This paper presents a robotic active sensing system which is able to adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use a thermoplastic adhesive material, i.e. Hot Melt Adhesive (HMA). It will be shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach and detach mechanical structures with a variety of shapes and sizes to the robot end effector for sensing purposes. Via its active sensing capability, the robotic system utilizes the structure to physically probe an unknown target object with suitable motion and transduce the arising physical stimuli into information usable by a camera, its only built-in sensor. Conclusions/Significance The efficacy of the proposed system is verified based on two results. Firstly, it is confirmed that suitable sensor morphology and active sensing capability enable the system to sense different physical quantities, i.e. softness and temperature, with desirable sensing characteristics. Secondly, given the tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to autonomously accomplish them. The way the results motivate new research directions focusing on in situ adjustment of sensor morphology will also be discussed. PMID:24416094

  17. Virtual collaborative environments: programming and controlling robotic devices remotely

    NASA Astrophysics Data System (ADS)

    Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.

    1995-12-01

    This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (like robots, machine tools, etc.). Using appropriate geometric models, integrated sensors, video systems, and computing hardware, computer-controlled resources owned and operated by different (in a geographic as well as a legal sense) entities can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototype parts during product design validation. A prototype architecture and system has been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington D.C., Washington State, and Southern California.

  18. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
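The viscous friction-like alignment term described above can be sketched as a damping force proportional to the velocity difference with neighbouring agents (the gain and the rest of the flocking dynamics are omitted; names are illustrative):

```python
import numpy as np

def alignment_term(v_self, neighbour_velocities, c_frict=0.3):
    """Velocity-alignment ('viscous friction') term: pushes the agent's
    velocity toward the mean velocity of its communicating neighbours,
    damping relative motion within the flock."""
    if len(neighbour_velocities) == 0:
        return np.zeros_like(v_self)
    v_avg = np.mean(neighbour_velocities, axis=0)
    return c_frict * (v_avg - v_self)
```

In the paper this alignment term is what suppresses the oscillations introduced by communication delay and sensor noise; repulsion, attraction and target-tracking terms would be added on top of it in the full control law.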

  19. Hair-based sensors for micro-autonomous systems

    NASA Astrophysics Data System (ADS)

    Sadeghi, Mahdi M.; Peterson, Rebecca L.; Najafi, Khalil

    2012-06-01

    We seek to harness microelectromechanical systems (MEMS) technologies to build biomimetic devices for low-power, high-performance, robust sensors and actuators on micro-autonomous robot platforms. Hair is used abundantly in nature for a variety of functions including balance and inertial sensing, flow sensing and aerodynamic (air foil) control, tactile and touch sensing, insulation and temperature control, particle filtering, and gas/chemical sensing. Biological hairs, which are typically characterized by large surface/volume ratios and mechanical amplification of movement, can be distributed in large numbers over large areas providing unprecedented sensitivity, redundancy, and stability (robustness). Local neural transduction allows for space- and power-efficient signal processing. Moreover by varying the hair structure and transduction mechanism, the basic hair form can be used for a wide diversity of functions. In this paper, by exploiting a novel wafer-level, bubble-free liquid encapsulation technology, we make arrays of micro-hydraulic cells capable of electrostatic actuation and hydraulic amplification, which enables high force/high deflection actuation and extremely sensitive detection (sensing) at low power. By attachment of cilia (hair) to the micro-hydraulic cell, air flow sensors with excellent sensitivity (< few cm/s) and dynamic range (> 10 m/s) have been built. A second-generation design has significantly reduced the sensor response time while maintaining sensitivity of about 2 cm/s and dynamic range of more than 15 m/s. These sensors can be used for dynamic flight control of flying robots or for situational awareness in surveillance applications. The core biomimetic technologies developed are applicable to a broad range of sensors and actuators.

  20. Development of haptic system for surgical robot

    NASA Astrophysics Data System (ADS)

    Gang, Han Gyeol; Park, Jiong Min; Choi, Seung-Bok; Sohn, Jung Woo

    2017-04-01

    In this paper, a new type of haptic system for surgical robot applications is proposed and its performance is evaluated experimentally. The proposed haptic system consists of an effective master device and a precision slave robot. The master device has 3-DOF rotational motion, the same as human wrist motion. It has a lightweight structure with a gyro sensor and three small-sized MR brakes for position measurement and repulsive torque generation, respectively. The slave robot has 3-DOF rotational motion using servomotors and a five-bar linkage, and a torque sensor is used to measure resistive torque. It has been experimentally demonstrated that the proposed haptic system performs well in tracking control of desired position and repulsive torque. It can be concluded that the proposed haptic system can be effectively applied to surgical robot systems in the field.

  1. A Sit-to-Stand Training Robot and Its Performance Evaluation: Dynamic Analysis in Lower Limb Rehabilitation Activities

    NASA Astrophysics Data System (ADS)

    Cao, Enguo; Inoue, Yoshio; Liu, Tao; Shibata, Kyoko

    In many countries experiencing population aging, motor function recovery activities have aroused much interest. In this paper, a sit-to-stand rehabilitation robot utilizing a double-rope system was developed, and the performance of the robot was evaluated by analyzing the dynamic parameters of human lower limbs. For the robot control program, an impedance control method with a training game was developed to increase the effectiveness and frequency of rehabilitation activities, and a calculation method was developed for evaluating the joint moments of the hip, knee, and ankle. Test experiments were designed, and four subjects were requested to stand up from a chair with assistance from the rehabilitation robot. In the experiments, body segment rotational angles, trunk movement trajectories, rope tensile forces, ground reaction forces (GRF) and centers of pressure (COP) were measured by sensors, and the moments of the ankle, knee and hip joints were calculated in real time from the sensor data. The experimental results showed that the sit-to-stand rehabilitation robot with the impedance control method could maintain comfortable training postures for users, decrease the moments of limb joints, and enhance training effectiveness. Furthermore, the game control method could encourage collaboration between the brain and limbs, and allow for an increase in the frequency and intensity of rehabilitation activities.
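The impedance control method referred to above typically renders a virtual spring-damper between the measured and desired trajectories; a minimal one-dimensional sketch (the gains are illustrative, not the paper's):

```python
def impedance_force(x, x_d, v, v_d, k=200.0, b=20.0):
    """Spring-damper impedance law: F = k (x_d - x) + b (v_d - v).
    Instead of commanding a stiff position, the robot applies a
    compliant assisting force, so the user can deviate from the
    reference trajectory without large reaction forces."""
    return k * (x_d - x) + b * (v_d - v)
```

Tuning k and b trades off how firmly the ropes guide the sit-to-stand motion against how much of the effort is left to the user, which is what lets the system maintain comfortable postures while still assisting.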

  2. Comparison of tongue interface with keyboard for control of an assistive robotic arm.

    PubMed

    Struijk, Lotte N S Andreasen; Lontis, Romulus

    2017-07-01

    This paper demonstrates how an assistive 6-DoF robotic arm with a gripper can be controlled manually using a tongue interface. The proposed method suggests that it is possible for a user to manipulate the surroundings with his or her tongue using the inductive tongue control system as deployed in this study. The sensors of an inductive tongue-computer interface were mapped to the Cartesian control of an assistive robotic arm. The resulting control system was tested manually in order to compare manual control of the robot using a standard keyboard and using the tongue interface. Two healthy subjects controlled the robotic arm to precisely move a bottle of water from one location to another. The results show that the tongue interface was able to fully control the robotic arm in a similar manner to the standard keyboard, resulting in the same number of successful manipulations and an average increase in task duration of up to 30% as compared with the standard keyboard.

  3. Macrobend optical sensing for pose measurement in soft robot arms

    NASA Astrophysics Data System (ADS)

    Sareh, Sina; Noh, Yohan; Li, Min; Ranzani, Tommaso; Liu, Hongbin; Althoefer, Kaspar

    2015-12-01

    This paper introduces a pose-sensing system for soft robot arms integrating a set of macrobend stretch sensors. The macrobend sensory design in this study consists of optical fibres and is based on the notion that bending an optical fibre modulates the intensity of the light transmitted through the fibre. This sensing method is capable of measuring bending, elongation and compression in soft continuum robots and is also applicable to wearable sensing technologies, e.g. pose sensing in the wrist joint of a human hand. In our arrangement, applied to a cylindrical soft robot arm, the optical fibres for macrobend sensing originate from the base, extend to the tip of the arm, and then loop back to the base. The connectors that link the fibres to the necessary opto-electronics are all placed at the base of the arm, resulting in a simplified overall design. The ability of this custom macrobend stretch sensor to flexibly adapt its configuration allows preserving the inherent softness and compliance of the robot which it is installed on. The macrobend sensing system is immune to electrical noise and magnetic fields, is safe (because no electricity is needed at the sensing site), and is suitable for modular implementation in multi-link soft continuum robotic arms. The measurable light outputs of the proposed stretch sensor vary due to bend-induced light attenuation (macrobend loss), which is a function of the fibre bend radius as well as the number of repeated turns. The experimental study conducted as part of this research revealed that the chosen bend radius has a far greater impact on the measured light intensity values than the number of turns (if greater than five). Taking into account that the bend radius is the only significantly influencing design parameter, the macrobend stretch sensors were developed to create a practical solution to the pose sensing in soft continuum robot arms. 
Finally, the proposed sensing design was benchmarked against an electromagnetic tracking system (NDI Aurora) for validation.
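    The bend-radius dependence described above can be illustrated with a toy calibration. The sketch below assumes a simple exponential macrobend-loss model with hypothetical constants `i0` and `k` (not taken from the paper); a real sensor would be calibrated empirically.

```python
import math

# Hypothetical exponential macrobend-loss model: the transmitted
# intensity drops as the fibre bend radius shrinks.
#   I(r) = i0 * exp(-k / r)
# i0 (unloaded intensity) and k (loss constant) are made-up
# calibration parameters for illustration only.

def intensity_from_radius(radius_mm, i0=1.0, k=25.0):
    """Predicted normalized light intensity at a given bend radius (mm)."""
    return i0 * math.exp(-k / radius_mm)

def radius_from_intensity(intensity, i0=1.0, k=25.0):
    """Invert the model: estimate the bend radius from a measured intensity."""
    return -k / math.log(intensity / i0)
```

    Under this model a tighter bend transmits less light, which is the monotonic relationship a pose estimator for the arm would exploit.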

  4. PC/AT-based architecture for shared telerobotic control

    NASA Astrophysics Data System (ADS)

    Schinstock, Dale E.; Faddis, Terry N.; Barr, Bill G.

    1993-03-01

    A telerobotic control system must include teleoperational, shared, and autonomous modes of control in order to provide a robot platform for incorporating the rapid advances that are occurring in telerobotics and associated technologies. These modes, along with the ability to modify the control algorithms, are especially beneficial for telerobotic control systems used for research purposes. The paper describes an application of the PC/AT platform to the control system of a telerobotic test cell and discusses the suitability of the PC/AT as a platform for a telerobotic control system. The discussion is based on the many factors affecting the choice of a computer platform for a real-time control system, including I/O capabilities, simplicity, popularity, computational performance, and communication with external systems. The paper also describes the actuation, measurement, and sensor hardware of both the master manipulator and the slave robot, as well as the PC-Bus interface cards, which were developed by the researchers in the KAT Laboratory specifically for interfacing to the master manipulator and slave robot. Finally, a few different versions of the low-level telerobotic control software are presented. This software incorporates shared control, in which supervisory systems and the human operator command the robot simultaneously, and traded control, in which command authority passes between the supervisory systems and the human operator.
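    Shared and traded control of the kind described here can be sketched as a weighted blend of the operator's command and the supervisory (autonomous) command. This is a minimal illustration under that assumption, not the software described in the paper; the blending weight `alpha` is hypothetical.

```python
def blend_commands(u_operator, u_autonomous, alpha):
    """Blend operator and autonomous joint commands element-wise.

    alpha = 1.0 gives pure teleoperation, alpha = 0.0 pure autonomy;
    intermediate values give shared control. Traded control corresponds
    to switching alpha between 0 and 1 as authority is handed over.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [alpha * uo + (1.0 - alpha) * ua
            for uo, ua in zip(u_operator, u_autonomous)]
```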

  5. A New Approach for Human Forearm Motion Assist by Actuated Artificial Joint-An Inner Skeleton Robot

    NASA Astrophysics Data System (ADS)

    Kundu, Subrata Kumar; Kiguchi, Kazuo; Teramoto, Kenbu

    In order to help with the physical activities of elderly or physically disabled persons, we propose a new concept of a power-assist inner skeleton robot (i.e., an actuated artificial joint) that is supposed to assist human daily-life motion from inside the human body. This paper presents an implantable 2-degree-of-freedom (DOF) inner skeleton robot that is designed to assist human elbow flexion-extension motion and forearm supination-pronation motion for daily-life activities. We have developed a prototype of the inner skeleton robot that is supposed to assist the motion from inside the body and act as an actuated artificial joint. The proposed system is controlled based on the activation patterns of the electromyogram (EMG) signals of the user's muscles by applying a fuzzy-neuro control method. A joint actuator with an angular position sensor is designed for the inner skeleton robot, and a T-Mechanism is proposed to keep the bone arrangement similar to normal human articulation after elbow arthroplasty. The effectiveness of the proposed system has been evaluated by experiment.
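    The paper's controller maps EMG activation patterns to assist motion through a fuzzy-neuro method. As a much-simplified stand-in, the sketch below extracts an RMS activation level from a window of raw EMG samples and maps it to a bounded assist torque with a dead zone; the threshold, gain, and torque limit are hypothetical placeholders.

```python
import math

def emg_rms(window):
    """Root-mean-square activation level of one window of raw EMG samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def assist_torque(activation, threshold=0.05, gain=2.0, max_torque=1.5):
    """Map an activation level to a bounded assist torque (illustrative).

    Below the threshold the muscle is treated as inactive (dead zone);
    above it, assistance grows proportionally and then saturates.
    """
    if activation <= threshold:
        return 0.0
    return min(gain * (activation - threshold), max_torque)
```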

  6. Development and Control of Multi-Degree-of-Freedom Mobile Robot for Acquisition of Road Environmental Modes

    NASA Astrophysics Data System (ADS)

    Murata, Naoya; Katsura, Seiichiro

    Acquisition of information about the environment around a mobile robot is important for purposes such as controlling the robot from a remote location, and in situations such as when the robot is running autonomously. In many studies, audiovisual information is used. However, the acquisition of force-sensation information, which is part of the environmental information, has not been well researched. The mobile-hapto, a remote control system with force information, has been proposed, but the robot used in that system can acquire only the horizontal component of forces. For this reason, in this research, a three-wheeled mobile robot consisting of seven actuators was developed and its control system was constructed. It can obtain information on horizontal and vertical forces without using force sensors. By using this robot, detailed information on the forces in the environment can be acquired, and the operability of the robot and its capability to adjust to the environment are expected to improve.

  7. Autonomous stair-climbing with miniature jumping robots.

    PubMed

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  8. Development of intelligent robots - Achievements and issues

    NASA Astrophysics Data System (ADS)

    Nitzan, D.

    1985-03-01

    A flexible, intelligent robot is regarded as a general purpose machine system that may include effectors, sensors, computers, and auxiliary equipment and, like a human, can perform a variety of tasks under unpredictable conditions. Development of intelligent robots is essential for increasing the growth rate of today's robot population in industry and elsewhere. Robotics research and development topics include manipulation, end effectors, mobility, sensing (noncontact and contact), adaptive control, robot programming languages, and manufacturing process planning. Past achievements and current issues related to each of these topics are described briefly.

  9. Magneto-inductive skin sensor for robot collision avoidance: A new development

    NASA Technical Reports Server (NTRS)

    Chauhan, D. S.; Dehoff, Paul H.

    1989-01-01

    Safety is a primary concern for robots operating in space. The tri-mode sensor addresses that concern by employing a collision avoidance/management skin around the robot arms. This rf-based skin sensor is at present a dual-mode (proximity and tactile) sensor. The third mode, pyroelectric, will complement the other two. The proximity mode permits the robot to sense an intruding object, to range the object, and to detect the edges of the object. The tactile mode permits the robot to sense when it has contacted an object, where on the arm it has made contact, and provides a three-dimensional image of the shape of the contact impression. The pyroelectric mode will be added to permit the robot arm to detect the proximity of a hot object and to add sensing redundancy to the two other modes. The rf modes of the sensing skin are presented. These modes employ a highly efficient magnetic material (amorphous metal) in a sensing technique. This results in a flexible sensor array which uses a primarily inductive configuration to permit both capacitive and magnetoinductive sensing of objects, thus optimizing performance in both proximity and tactile modes with the same sensing skin. The fundamental operating principles, design particulars, and theoretical models are provided to aid in the description and understanding of this sensor. Test results are also given.

  10. Abstract - Belbas, Nicholas (EC2)

    NASA Technical Reports Server (NTRS)

    Belbas, Nicholas

    2017-01-01

    Originally, I was brought into the Design and Analysis Branch in the Crew and Thermal Systems to work on administrative tasks like archiving and scheduling. However, I ended up splitting my time between secretarial tasks and a technical project. My technical project was originally meant to be a wireless sensor package for the 20 ft Spacecraft Thermal Vacuum Chamber in the B7 High Bay. I would be using a miniature WiFi development board and a temperature/humidity sensor along with custom 3D modeling to accomplish this. However, after some discussion with my technical mentor, the plan was changed to a mobile, autonomous, self-charging sensor platform. A mobile platform will allow the sensors to be moved around without depressurizing the chamber. Also, the self-charging aspect of the package allows for almost unlimited time in the chamber: if the on-board battery runs low, the robot can easily be driven to its charging dock and continue to transmit while charging. The driving base is built around a Raspberry Pi 3 board with an I2C PWM DC motor controller and a PWM controller driving two small gear motors. The sensor transmitter itself is an RHT03 temperature and humidity sensor and a CozIR CO2 sensor connected to an ESP8266 Huzzah board. The power distribution system utilizes a pair of 3.7 V, 3600 mAh LiPo batteries wired to PowerBoost 500 boards. Also, the self-charging mechanism utilizes two 12 V max inductive charging coils wired into the same PowerBoost boards as the battery. The Raspberry Pi is running Python 3.3 for the driving base and a JavaScript MJPEG library for transmitting live video from the onboard camera. The sensor package is running Arduino-based C++, and the program capturing the data is written in Python with PyQtGraph and HTML. The shell of the robot itself is a 3D-printed case that will (work in progress) snap together. The photo to the left shows the two halves separated from each other.
The black shell contains the power distribution boards and connectors while the white shell contains the driving base and data systems.

  11. Laser speckle velocimetry for robot manufacturing

    NASA Astrophysics Data System (ADS)

    Charrett, Thomas O. H.; Bandari, Yashwanth K.; Michel, Florent; Ding, Jialuo; Williams, Stewart W.; Tatam, Ralph P.

    2017-06-01

    A non-contact speckle correlation sensor for the measurement of robotic tool speed is presented for use in robotic manufacturing; it is capable of measuring the in-plane relative velocities between a robot end-effector and the workpiece or other surface. The sensor performance was assessed in the laboratory, with sensor accuracies found to be better than 0.01 mm/s over a 70 mm/s velocity range. Finally, an example of the sensor's application to robotic manufacturing is presented, where the sensor was applied to tool speed measurement for path planning in the wire and arc additive manufacturing process using a KUKA KR150 L110/2 industrial robot.

  12. Miniature Six-Axis Load Sensor for Robotic Fingertip

    NASA Technical Reports Server (NTRS)

    Diftler, Myron A.; Martin, Toby B.; Valvo, Michael C.; Rodriguez, Dagoberto; Chu, Mars W.

    2009-01-01

    A miniature load sensor has been developed as a prototype of tactile sensors that could fit within fingertips of anthropomorphic robot hands. The sensor includes a force-and-torque transducer in the form of a spring instrumented with at least six semiconductor strain gauges. The strain-gauge wires are secured to one side of an interface circuit board mounted at the base of the spring. This board protects the strain-gauge wires from damage that could otherwise occur as a result of finger motions. On the opposite side of the interface board, cables routed along the neutral axis of the finger route the strain-gauge output voltages to an analog-to-digital converter (A/D) board. The A/D board is mounted as close as possible to the strain gauges to minimize electromagnetic noise and other interference effects. The outputs of the A/D board are fed to a controller, wherein, by means of a predetermined calibration matrix, the digitized strain-gauge output voltages are converted to three vector components of force and three of torque exerted by or on the fingertip.
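    The last step of the pipeline, applying a predetermined calibration matrix to the digitized gauge voltages, can be sketched as a plain matrix-vector product (the matrix values used here are illustrative placeholders, not the sensor's actual calibration):

```python
def wrench_from_gauges(calibration, voltages):
    """Convert N digitized strain-gauge voltages into a 6-component
    wrench [Fx, Fy, Fz, Tx, Ty, Tz] via a predetermined 6 x N
    calibration matrix."""
    if any(len(row) != len(voltages) for row in calibration):
        raise ValueError("each calibration row needs one entry per gauge")
    return [sum(c * v for c, v in zip(row, voltages)) for row in calibration]
```

    In practice such a calibration matrix is identified by loading the fingertip with known forces and torques and solving for the best-fit linear map.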

  13. Overview of Fiber-Optical Sensors

    NASA Technical Reports Server (NTRS)

    Depaula, Ramon P.; Moore, Emery L.

    1987-01-01

    Design, development, and sensitivity of sensors using fiber optics reviewed. State-of-the-art and probable future developments of sensors using fiber optics described in report including references to work in field. Serves to update previously published surveys. Systems incorporating fiber-optic sensors used in medical diagnosis, navigation, robotics, sonar, power industry, and industrial controls.

  14. Investigation of the relative orientation of the system of optical sensors to monitor the technosphere objects

    NASA Astrophysics Data System (ADS)

    Petrochenko, Andrey; Konyakhin, Igor

    2017-06-01

    With the development of robotics, a variety of systems for three-dimensional reconstruction and mapping based on image sets received from optical sensors have become increasingly popular. The main objective of technical and robot vision is the detection, tracking and classification of objects in the space in which these systems and robots operate [15,16,18]. Two-dimensional images sometimes do not contain sufficient information to address certain problems: constructing a map of the surrounding area for route planning; identifying objects and tracking their relative position and movement; and selecting objects and their attributes to complement the knowledge base. Three-dimensional reconstruction of the surrounding space makes it possible to obtain information on the relative positions of objects, their shape and their surface texture. Systems trained on the basis of three-dimensional reconstruction can compare two-dimensional images with the resulting three-dimensional model, which allows the recognition of volumetric objects in flat images. The problem of the relative orientation of industrial robots with the ability to build three-dimensional scenes of controlled surfaces is therefore becoming increasingly relevant.

  15. Integration of robotic resources into FORCEnet

    NASA Astrophysics Data System (ADS)

    Nguyen, Chinh; Carroll, Daniel; Nguyen, Hoa

    2006-05-01

    The Networked Intelligence, Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances. The foundations are built upon FORCEnet-the U.S. Navy's process to define C4ISR for net-centric operations-and the Navy Unmanned Systems Common Control Roadmap to develop technologies and standards for interoperability, data sharing, publish-and-subscribe methodology, and software reuse. The paper defines the goals and boundaries for NISR with focus on the system architecture, including the design tradeoffs necessary for unmanned systems in a net-centric model. Special attention is given to two specific scenarios demonstrating the integration of unmanned ground and water surface vehicles into the open-architecture web-based command-and-control information-management system of Composeable FORCEnet. Planned spiral development for NISR will improve collaborative control, expand robotic sensor capabilities, address multiple domains including underwater and aerial platforms, and extend distributive communications infrastructure for battlespace optimization for unmanned systems in net-centric operations.

  16. Decentralized reinforcement-learning control and emergence of motion patterns

    NASA Astrophysics Data System (ADS)

    Svinin, Mikhail; Yamada, Kazuyaki; Okhura, Kazuhiro; Ueda, Kanji

    1998-10-01

    In this paper we propose a system for studying the emergence of motion patterns in autonomous mobile robotic systems. The system implements instance-based reinforcement-learning control. Three spaces are of importance in the formulation of the control scheme: the work space, the sensor space, and the action space. An important feature of our system is that all these spaces are assumed to be continuous. The core part of the system is a classifier system. Based on analysis of the sensory state space, the control is decentralized and is specified at the lowest level of the control system. However, the local controllers are implicitly connected through the perceived environment information and therefore constitute a dynamic environment with respect to each other. The proposed control scheme is tested in simulation for a mobile robot in a navigation task. It is shown that some patterns of global behavior--such as collision avoidance, wall-following, and light-seeking--can emerge from the local controllers.
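    A minimal sketch of instance-based reinforcement learning over a continuous sensor space follows; the rule structure and the strength-update rule are illustrative simplifications, not the classifier system from the paper.

```python
import math

class InstanceBasedController:
    """Toy instance-based reinforcement-learning rule set.

    Each rule pairs a point in the continuous sensor space with an
    action and a strength; the rule nearest to the current reading
    fires, and its strength is nudged toward the observed reward.
    """

    def __init__(self, learning_rate=0.1):
        self.rules = []        # each rule: [sensor_point, action, strength]
        self.learning_rate = learning_rate
        self.last_fired = None

    def add_rule(self, sensor_point, action, strength=0.0):
        self.rules.append([list(sensor_point), action, strength])

    def select(self, sensors):
        """Fire the rule whose stored sensor point is nearest."""
        if not self.rules:
            return None
        self.last_fired = min(
            range(len(self.rules)),
            key=lambda i: math.dist(self.rules[i][0], sensors),
        )
        return self.rules[self.last_fired][1]

    def reinforce(self, reward):
        """Move the fired rule's strength toward the received reward."""
        rule = self.rules[self.last_fired]
        rule[2] += self.learning_rate * (reward - rule[2])
```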

  17. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices, ranging from visible to microwave wavelengths, to fulfill the perception needs of space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the perception system should automatically select the best subset of sensors, and their sensing modalities, that will allow the perception and interpretation of the environment. Then, based on theoretical reflectance and emittance models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature and roughness. The theoretical concepts, the design and first results of the multisensor perception system are presented.

  18. Certainty grids for mobile robots

    NASA Technical Reports Server (NTRS)

    Moravec, H. P.

    1987-01-01

    A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects.
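    A common modern way to implement the incremental, uniform update that the abstract describes is a per-cell log-odds occupancy update. The sketch below is in that spirit; Moravec's original formulation differs in detail.

```python
import math

def log_odds(p):
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

def update_cell(cell, p_occ_given_reading):
    """Fuse one sensor reading into one grid cell.

    In log-odds form, independent readings simply add their evidence,
    which is what lets sonar, stereo, and proximity data share a grid.
    """
    return cell + log_odds(p_occ_given_reading)

def occupancy(cell):
    """Recover the occupancy probability of a cell from its log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(cell))
```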

  19. Adaptation of sensor morphology: an integrative view of perception from biologically inspired robotics perspective

    PubMed Central

    Nurzaman, Surya G.

    2016-01-01

    Sensor morphology, the morphology of a sensing mechanism which plays the role of shaping the desired response from physical stimuli from the surroundings to generate signals usable as sensory information, is one of the key common aspects of sensing processes. This paper presents a structured review of research on bioinspired sensor morphology implemented in robotic systems, and discusses the fundamental design principles. Based on the literature review, we propose two key arguments: first, owing to its synthetic nature, the biologically inspired robotics approach is a unique and powerful methodology for understanding the role of sensor morphology and how it can evolve and adapt to its task and environment. Second, a consideration of an integrative view of perception, by looking into multidisciplinary and overarching mechanisms of sensor morphology adaptation across biology and engineering, enables us to extract relevant design principles that are important to extend our understanding of the unfinished concepts in sensing and perception. PMID:27499843

  20. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller.

    PubMed

    Lopez-Franco, Carlos; Gomez-Avila, Javier; Alanis, Alma Y; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-08-12

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities, and unknown position in 3D space when the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control integrating vision sensors with a neural proportional-integral-derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms, and well-defined interfaces to allow real-time implementation, as well as the design of different processing stages with their respective communication architectures. These and other issues make real-time implementation a difficult task. To show the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results.
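    The velocity-reference idea can be sketched with a discrete PID acting on an image-feature error. In the paper the PID gains are adapted by a neural network; the fixed gains below are hypothetical placeholders.

```python
class PID:
    """Discrete PID that turns an image-feature error (e.g. a pixel
    offset between current and desired feature) into a velocity command."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """One control cycle: returns the commanded velocity."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```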

  1. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller

    PubMed Central

    Lopez-Franco, Carlos; Alanis, Alma Y.; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-01-01

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities, and unknown position in 3D space when the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control integrating vision sensors with a neural proportional-integral-derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms, and well-defined interfaces to allow real-time implementation, as well as the design of different processing stages with their respective communication architectures. These and other issues make real-time implementation a difficult task. To show the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results. PMID:28805689

  2. Astrobee: Space Station Robotic Free Flyer

    NASA Technical Reports Server (NTRS)

    Provencher, Chris; Bualat, Maria G.; Barlow, Jonathan; Fong, Terrence W.; Smith, Marion F.; Smith, Ernest E.; Sanchez, Hugo S.

    2016-01-01

    Astrobee is a free flying robot that will fly inside the International Space Station and primarily serve as a research platform for robotics in zero gravity. Astrobee will also provide mobile camera views to ISS flight and payload controllers, and collect various sensor data within the ISS environment for the ISS Program. Astrobee consists of two free flying robots, a dock, and ground data system. This presentation provides an overview, high level design description, and project status.

  3. Fully decentralized estimation and control for a modular wheeled mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mutambara, A.G.O.; Durrant-Whyte, H.F.

    2000-06-01

    In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model, using plane-motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control are thus obtained locally using reduced-order models. When communication of information between nodes is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design, as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
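    The property that makes the (extended) information filter attractive for decentralization is that independent measurement contributions add in information form. A scalar-state sketch of that fusion step follows; the node contributions are illustrative values, not the paper's models.

```python
def fuse_information(prior_y, prior_Y, contributions):
    """Fuse node contributions in (scalar) information form.

    y is the information vector and Y the information matrix (both
    scalars here for a one-dimensional state). Each node supplies a
    contribution (i_k, I_k) computed from its local measurement, and
    fusion is plain addition -- the additive structure the extended
    information filter exploits for decentralized estimation.
    """
    y = prior_y + sum(i_k for i_k, _ in contributions)
    Y = prior_Y + sum(I_k for _, I_k in contributions)
    return y, Y, y / Y    # fused information pair and the state estimate
```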

  4. Laser vision seam tracking system based on image processing and continuous convolution operator tracker

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Chen, Tao

    2018-06-01

    To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed, based on a morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line-laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy images acquired during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz, that the welding torch runs smoothly even under strong arc light and splash interference, that the tracking error can reach ±0.2 mm, and that the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which fulfills actual welding requirements.

  5. Robot for Investigations and Assessments of Nuclear Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanaan, Daniel; Dogny, Stephane

    RIANA is a remote-controlled Robot dedicated to Investigations and Assessments of Nuclear Areas. The development of RIANA is motivated by the need to have at one's disposal a proven robot, tested in hot cells; a robot capable of remotely investigating and characterising the inside of nuclear facilities in order to collect all the required data efficiently and in the shortest possible time. It is based on a wireless, medium-sized remote carrier that may carry a wide variety of interchangeable modules, sensors and tools. It is easily customised to match specific requirements and quickly configured depending on the mission and the operator's preferences. RIANA integrates localisation and navigation systems. The robot is able to generate and update a 2D map of its surroundings while exploring areas, and its position is given accurately on the map. Furthermore, the robot is able to autonomously calculate, define and follow a trajectory between two points, taking into account its environment and obstacles. The robot is configurable to manage obstacles and restrict access to forbidden areas. RIANA allows advanced control of modules, sensors and tools; all collected data (radiological and measured data) are displayed in real time in different formats (charts, on the generated map...) and stored in a single place so that they may be exported in a convenient format for data processing. This modular design gives RIANA the flexibility to perform multiple investigation missions where humans cannot work, such as: visual inspections; dynamic localisation and 2D mapping; characterisation and nuclear measurements of floors and walls; non-destructive testing; and collection of solid and liquid samples. The benefits of using RIANA are: - reducing personnel exposure by limiting manual intervention time, - minimising the time and reducing the cost of investigation operations, - providing critical inputs to set up and optimise cleanup and dismantling operations. (authors)

  6. Autonomous mobile robotic system for supporting counterterrorist and surveillance operations

    NASA Astrophysics Data System (ADS)

    Adamczyk, Marek; Bulandra, Kazimierz; Moczulski, Wojciech

    2017-10-01

    Contemporary research on mobile robots concerns applications to counterterrorist and surveillance operations. The goal is to develop systems that are capable of supporting the police and special forces by carrying out such operations. The paper deals with a dedicated robotic system for the surveillance of large facilities such as airports, factories, military bases, and many others. The goal is to detect unauthorised persons who try to enter the guarded area, document the intrusion and report it to the surveillance centre, then warn the intruder with sound messages and, if necessary, subdue him/her by stunning through an acoustic effect of great power. The system consists of several parts. An armoured four-wheeled robot provides the required mobility of the system. The robot is equipped with a set of sensors including a 3D mapping system, IR and video cameras, and microphones. It communicates with the central control station (CCS) by means of a wideband, wireless, encrypted system. The control system of the robot can operate autonomously or under remote control. In the autonomous mode the robot follows the path planned by the CCS. Once an intruder has been detected, the robot can adapt its plan in order to track him/her. Furthermore, special procedures for the treatment of the intruder are applied, including a warning about the breach of the border of the protected area, and incapacitation by an appropriately selected, very loud sound until a patrol of guards arrives. If it gets stuck, the robot can contact the operator, who can remotely solve the problem the robot is faced with.

  7. The procedure safety system

    NASA Technical Reports Server (NTRS)

    Obrien, Maureen E.

    1990-01-01

    Telerobotic operations, whether under autonomous or teleoperated control, require a much more sophisticated safety system than that needed for most industrial applications. Industrial robots generally perform very repetitive tasks in a controlled, static environment. The safety system in that case can be as simple as shutting down the robot if a human enters the work area, or even simply building a cage around the work space. Telerobotic operations, however, will take place in a dynamic, sometimes unpredictable environment, and will involve complicated and perhaps unrehearsed manipulations. This creates a much greater potential for damage to the robot or objects in its vicinity. The Procedural Safety System (PSS) collects data from external sensors and the robot, then processes it through an expert system shell to determine whether an unsafe condition or potentially unsafe condition exists. Unsafe conditions could include exceeding velocity, acceleration, torque, or joint limits, imminent collision, exceeding temperature limits, and robot or sensor component failure. If a threat to safety exists, the operator is warned. If the threat is serious enough, the robot is halted. The PSS therefore uses expert system technology to enhance safety, thus reducing operator workload and allowing the operator to focus on performing the task at hand without the distraction of worrying about violating safety criteria.

  8. Supervisory control of mobile sensor networks: math formulation, simulation, and implementation.

    PubMed

    Giordano, Vincenzo; Ballal, Prasanna; Lewis, Frank; Turchiano, Biagio; Zhang, Jing Bing

    2006-08-01

    This paper uses a novel discrete-event controller (DEC) for the coordination of cooperating heterogeneous wireless sensor networks (WSNs) containing both unattended ground sensors (UGSs) and mobile sensor robots. The DEC sequences the most suitable tasks for each agent and assigns sensor resources according to the current perception of the environment. A matrix formulation makes this DEC particularly useful for WSNs, where missions change and sensor agents may be added or may fail. WSNs have peculiarities that complicate their supervisory control. Therefore, this paper introduces several new tools for DEC design and operation, including methods for generating the required supervisory matrices based on mission planning, methods for modifying the matrices in the event of failed nodes or nodes entering the network, and a novel dynamic priority-assignment weighting approach for selecting the most appropriate and useful sensors for a given mission task. The resulting DEC represents a complete dynamical description of the WSN system, which allows fast programming of deployable WSNs, computer simulation analysis, and efficient implementation. The DEC is implemented on an experimental wireless-sensor-network prototyping system. Both simulation and experimental results are presented to show the effectiveness and versatility of the developed control architecture.

  9. Finger-Shaped GelForce: Sensor for Measuring Surface Traction Fields for Robotic Hand.

    PubMed

    Sato, K; Kamiyama, K; Kawakami, N; Tachi, S

    2010-01-01

    It is believed that the use of haptic sensors to measure the magnitude, direction, and distribution of a force will enable a robotic hand to perform dexterous operations. Therefore, we develop a new type of finger-shaped haptic sensor using GelForce technology. GelForce is a vision-based sensor that can be used to measure the distribution of force vectors, or surface traction fields. Its simple structure enables us to develop a compact finger-shaped GelForce for the robotic hand. The GelForce, developed on the basis of elastic theory, calculates surface traction fields using a conversion equation. However, this conversion equation cannot be solved analytically when the elastic body of the sensor has a complicated shape, such as that of a finger. Therefore, we propose an observational method and construct a prototype of the finger-shaped GelForce. Using this prototype, we evaluate the basic performance of the sensor, and then conduct a field test by performing grasping operations with a robotic hand. The results of this test show that, using the observational method, the finger-shaped GelForce can be successfully used in a robotic hand.

  10. Learning Probabilistic Features for Robotic Navigation Using Laser Sensors

    PubMed Central

    Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N^2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377
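
    The O(N) cost follows from fusing each map feature independently under the Bayesian paradigm. A minimal sketch, assuming a standard log-odds occupancy formulation (the paper's specific fusion model is not reproduced here; the sensor-model probability is hypothetical):

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def update_map(prior, observations, p_hit=0.7):
    """O(N) Bayesian fusion: each map cell is updated independently in
    log-odds form, so total cost is linear in the number of cells N.
    p_hit is an assumed sensor-model probability."""
    posterior = []
    for p, observed in zip(prior, observations):
        l = logodds(p) + (logodds(p_hit) if observed else logodds(1.0 - p_hit))
        posterior.append(1.0 / (1.0 + math.exp(-l)))  # back to probability
    return posterior
```

    Because no cell's update depends on any other cell, the loop is a single O(N) pass, in contrast to the O(N log N)-to-O(N^2) covariance bookkeeping of classic EKF-style SLAM.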

  11. Learning probabilistic features for robotic navigation using laser sensors.

    PubMed

    Aznar, Fidel; Pujol, Francisco A; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N^2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.

  12. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.

    PubMed

    Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state-of-the-art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help decide among these candidates in the future and improve the design of humanoid robots and robotic rehabilitation devices.

  13. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    PubMed Central

    Alexandrov, Alexei V.; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A.; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state-of-the-art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary”) movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help decide among these candidates in the future and improve the design of humanoid robots and robotic rehabilitation devices. PMID:28487646

  14. Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)

    PubMed Central

    Tang, Hong; Li, Liangzhi; Xiao, Nanfeng

    2017-01-01

    Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few are focused on robotic sensors. We successfully utilize robots in CPSS and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics. However, traditional methods are not suited to robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points to a quintic polynomial that outputs a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be applied well in the CPSS field. PMID:28218649
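
    The regression step described above can be sketched with an ordinary least-squares quintic fit. This assumes the intermediate points have already been generated by the paper's optimizing function, whose details are not reproduced here:

```python
import numpy as np

def smooth_trajectory(times, waypoints):
    """Regress discrete intermediate points onto a quintic polynomial,
    yielding a smooth, continuously differentiable path x(t).
    Requires at least six (time, waypoint) pairs for a degree-5 fit."""
    coeffs = np.polyfit(times, waypoints, deg=5)  # least-squares quintic fit
    return np.poly1d(coeffs)                      # callable smooth trajectory
```

    A quintic is the lowest-degree polynomial that lets position, velocity, and acceleration all be matched at both endpoints, which is why it is a common choice for smooth robot trajectories.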

  15. Three-dimensional sensor system using multistripe laser and stereo camera for environment recognition of mobile robots

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.

    2002-10-01

    In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots serving humans or industrial robots replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. Structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is based on the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints formed among all the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
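
    For the simplest rectified two-camera case, the optical triangulation principle the record mentions reduces to the classic depth-from-disparity relation z = f·b/d. This is a generic textbook sketch, not the trinocular system's actual calibration pipeline:

```python
def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Optical triangulation for a rectified stereo pair:
    depth z = f * b / d, where d is the disparity of a matched feature.
    Correspondence is assumed already established (e.g. via the
    epipolar constraint, as in the trinocular setup described above)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

    With the laser projector modeled as a third camera, the same triangulation is applied to the stripe feature points once their correspondences are fixed by the epipolar geometry.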

  16. Improving Emergency Response and Human-Robotic Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David I. Gertman; David J. Bruemmer; R. Scott Hartley

    2007-08-01

    Preparedness for chemical, biological, and radiological/nuclear incidents at nuclear power plants (NPPs) includes the deployment of well trained emergency response teams. While teams are expected to do well, data from other domains suggest that the timeliness and accuracy of incident response can be improved through collaborative human-robotic interaction. Many incident response scenarios call for multiple, complex procedure-based activities performed by personnel wearing cumbersome personal protective equipment (PPE) and operating under high levels of stress and workload. While robotic assistance is postulated to reduce workload and exposure, limitations associated with communications and the robot's ability to act independently have served to limit reliability and reduce our potential to exploit human-robotic interaction and the efficacy of response. Recent work at the Idaho National Laboratory (INL) on expanding robot capability has the potential to improve human-system response during disaster management and recovery. Specifically, increasing the range of higher-level robot behaviors such as autonomous navigation and mapping, evolving new abstractions for sensor and control data, and developing metaphors for operator control have the potential to improve the state of the art in incident response. This paper discusses these issues and reports on experiments underway that use intelligence residing on the robot to enhance emergency response.

  17. Multiagent robotic systems' ambient light sensor

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Maslennikov, Oleg S.; Komarov, Igor I.

    2017-05-01

    Swarm robotics is one of the fastest growing areas of modern technology. As a subclass of multi-agent systems, it inherits much of the scientific and methodological apparatus for constructing and operating practical complexes of largely autonomous, independent agents. Ambient light sensors (ALS) are widely used in robotics. In swarm robotics, however, a developing technology with many specific features, it is important to use sensors on each robot not only to help it orient directionally, but also to follow light emitted by a leader robot or to find the goal more easily. Key words: ambient light sensor, swarm system, multiagent system, robotic system, robotic complexes, simulation modelling

  18. Development of the Research Platform of Small Autonomous Blimp Robot

    NASA Astrophysics Data System (ADS)

    Takaya, Toshihiko; Kawamura, Hidenori; Yamamoto, Masahito; Ohuchi, Azuma

    A blimp robot is attractive as a small flight robot: it floats by buoyancy, survives crashes safely, and can move for a long time with low energy compared with other flight robots. However, controlling a blimp robot is difficult because of the nonlinear characteristics arising from its inertia and the influence of air flow. Applied research that makes maximum use of the blimp robot's features has therefore flourished in recent years. In this paper, we describe the development of a general-purpose research blimp robot, built by dividing the robot body into exchangeable units, to support blimp robot research and application development. By providing a general-purpose blimp robot research platform, the research efficiency of many researchers can be improved; moreover, starting research on blimp robots becomes easier, which contributes to the development of the field. We performed the following experiments to demonstrate this. 1. We checked the basic station-keeping performance and verified that various orbital operations were possible; the ease of exchanging software units was checked by swapping the control layer from PID control to learning control and comparing the resulting behaviour. 2. To check the ease of exchanging hardware units, the camera sensor was exchanged for a microphone and control of operation was checked. 3. For ease of unit addition, a microphone for sound detection was added alongside the camera for image detection, and control of operation was verified. 4. As a check of function addition, units were exchanged and a topological-map generation experiment using an added ultrasonic sensor was conducted.
The developed research blimp robot achieves easy exchange and addition of hardware units through analog and digital interfaces, and of software units through a layered structure of software modules. Consequently, functions could be added and exchanged easily, realizing a research platform for blimp robots.
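
    The swappable PID control layer mentioned in experiment 1 can be sketched as a self-contained module. This is a generic textbook PID, not the platform's actual implementation, and the gains are hypothetical:

```python
class PID:
    """Minimal PID control layer of the kind the layered software structure
    lets you swap in or out (e.g. replaced by a learning controller)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """One control step: returns the actuator command for this error."""
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

    Because the controller exposes only a `step(error, dt)` interface, a learning-based layer with the same interface can replace it without touching the rest of the software stack, which is exactly the kind of unit exchange the platform was built to test.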

  19. An anatomy of industrial robots and their controls

    NASA Astrophysics Data System (ADS)

    Luh, J. Y. S.

    1983-02-01

    The modernization of manufacturing facilities by means of automation represents an approach for increasing productivity in industry. The three existing types of automation are continuous process control, transfer conveyor methods, and programmable automation for the low-volume batch production of discrete parts. Industrial robots, defined as computer-controlled mechanical manipulators, belong to the area of programmable automation. Typically, robots perform tasks of arc welding, paint spraying, or foundry operation. One may assign a robot a variety of job assignments simply by changing the appropriate computer program. The present investigation evaluates the potential of the robot on the basis of its basic structure and controls. It is found that robots function well in limited areas of industry. If the range of tasks which robots can perform is to be expanded, it is necessary to provide multiple-task sensors, special tooling, or even automatic tooling.

  20. Trusted Remote Operation of Proximate Emergency Robots (TROOPER): DARPA Robotics Challenge

    DTIC Science & Technology

    2015-12-01

    sensor in each of the robot’s feet. Additionally, there is a 6-axis IMU that sits in the robot’s pelvis cage. While testing before the Finals, the...Services. Many of the controllers in the autonomic layer have overlapping requirements, such as filtered IMU and force torque data from the robot...the following services during the DRC: • IMU Filtering • Force Torque Filtering • Joint State Publishing • TF (Transform) Broadcasting • Robot Pose

  1. Trusted Remote Operation of Proximate Emergency Robots (TROOPER): DARPA Robotics Challenge

    DTIC Science & Technology

    2015-12-01

    sensor in each of the robot’s feet. Additionally, there is a 6-axis IMU that sits in the robot’s pelvis cage. While testing before the Finals, the...Services. Many of the controllers in the autonomic layer have overlapping requirements, such as filtered IMU and force torque data from the robot...the following services during the DRC: • IMU Filtering • Force Torque Filtering • Joint State Publishing • TF (Transform) Broadcasting • Robot Pose

  2. Development of a Wearable Assist Robot for Walk Rehabilitation After Knee Arthroplasty Surgery

    NASA Astrophysics Data System (ADS)

    Terada, H.; Zhu, Y.; Horiguchi, K.; Nakamura, M.; Takahashi, R.

    In Japan, it is common for diseased knee joints to be replaced with artificial joints by surgery, and many patients then need assistance with walking rehabilitation. A wearable assist robot has therefore been developed. This robot includes a knee motion assist mechanism and a hip joint support mechanism. The knee motion assist mechanism consists of a non-circular gear and grooved cams; it rotates and slides simultaneously, giving it two degrees of freedom. The hip joint support mechanism consists of a hip brace and a ball joint, and avoids constraining internal/external rotation and adduction/abduction. A control algorithm that considers assist timing for walking rehabilitation has been proposed. The walking-state sensing system for this controller uses a heel contact sensor and knee and hip joint rotation angle sensors. A prototype robot has been tested, confirming that the assist system is useful.

  3. A natural-language interface to a mobile robot

    NASA Technical Reports Server (NTRS)

    Michalowski, S.; Crangle, C.; Liang, L.

    1987-01-01

    The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.

  4. Stretchable, Flexible, Scalable Smart Skin Sensors for Robotic Position and Force Estimation.

    PubMed

    O'Neill, John; Lu, Jason; Dockter, Rodney; Kowalewski, Timothy

    2018-03-23

    The design and validation of a continuously stretchable and flexible skin sensor for collaborative robotic applications is outlined. The skin consists of a PDMS layer doped with carbon nanotubes and augmented with conductive fabric, connected by only five wires to a simple microcontroller. Its accuracy is characterized in position as well as force, and the skin is also tested under uniaxial stretch. Two practical implementations in collaborative robotic applications are also presented. The stationary position estimate has an RMSE of 7.02 mm, and the sensor error stays within 2.5 ± 1.5 mm even under stretch. The skin consistently provides an emergency stop command at only 0.5 N of force and is shown to maintain a collaboration force of 10 N in a collaborative control experiment.

  5. Controlling Herds of Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2006-01-01

    A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.
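
    The potential-fields guidance law proposed for herd control can be sketched in its generic textbook form: an attractive gradient toward the goal plus repulsive gradients away from nearby obstacles. The gains and influence radius below are hypothetical, not the document's values:

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    """One step of a potential-field guidance law: returns the force
    (fx, fy) on a robot at pos, attracted to goal and repelled by
    obstacles within influence radius d0."""
    fx = k_att * (goal[0] - pos[0])   # attractive term toward the goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:                # repulsion only inside d0
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

    In a herd setting, the leader would additionally contribute a field term for each follower so that the flock maintains formation while individual robots still avoid terrain obstacles.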

  6. Investigation of human-robot interface performance in household environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  7. Robot formation control in stealth mode with scalable team size

    NASA Astrophysics Data System (ADS)

    Yu, Hongjun; Shi, Peng; Lim, Cheng-Chew

    2016-11-01

    In situations where robots need to keep electromagnetic silent in a formation, communication channels become unavailable. Moreover, as passive displacement sensors are used, limited sensing ranges are inevitable due to power insufficiency and limited noise reduction. To address the formation control problem for a scalable team of robots subject to the above restrictions, a flexible strategy is necessary. In this paper, under the assumption that the data transmission among the robots is not available, a novel controller and a protocol are designed that do not rely on communication. As the controller only drives the robots to a partially desired formation, a distributed coordination protocol is proposed to resolve the imperfections. It is shown that the effectiveness of the controller and the protocol rely on the formation connectivity, and a condition is given on the sensing range. Simulations are conducted to illustrate the feasibility and advantages of the new design scheme developed.

  8. Analysis and experimental kinematics of a skid-steering wheeled robot based on a laser scanner sensor.

    PubMed

    Wang, Tianmiao; Wu, Yao; Liang, Jianhong; Han, Chenhao; Chen, Jiao; Zhao, Qiteng

    2015-04-24

    Skid-steering mobile robots are widely used because of their simple mechanism and robustness. However, due to the complex wheel-ground interactions and the kinematic constraints, it is a challenge to understand the kinematics and dynamics of such a robotic platform. In this paper, we develop an analysis and experimental kinematic scheme for a skid-steering wheeled vehicle based on a laser scanner sensor. The kinematics model is established based on the boundedness of the instantaneous centers of rotation (ICR) of the treads on the 2D motion plane. The kinematic parameters (the ICR coefficient, the path curvature variable, and the robot speed), including the effect of vehicle dynamics, are introduced to describe the kinematics model. Then, an exact but costly dynamic model is used, and simulation of this model's stationary response for the vehicle shows a qualitative relationship among the specified parameters. Moreover, the parameters of the kinematic model are determined based on a laser scanner localization experimental analysis method with a skid-steering robotic platform, the Pioneer P3-AT. The relationship between the ICR coefficient and two physical factors is studied, i.e., the radius of the path curvature and the robot speed. An empirical function-based relationship between the ICR coefficient of the robot and the path parameters is derived. To validate the obtained results, it is empirically demonstrated that the proposed kinematics model significantly improves the dead-reckoning performance of this skid-steering robot.
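
    The role of the ICR coefficient can be illustrated with the standard skid-steer kinematic relation, where the coefficient expands the effective track width to account for tread slippage. This is a generic form of such models, with assumed symbol names, not the paper's exact empirical function:

```python
def skid_steer_twist(v_left, v_right, track_width, icr_coeff):
    """ICR-based skid-steering kinematics: maps tread speeds to body twist.
    icr_coeff >= 1 scales the effective track to model slippage
    (icr_coeff == 1 recovers ideal differential drive)."""
    v = (v_left + v_right) / 2.0                            # forward speed
    omega = (v_right - v_left) / (icr_coeff * track_width)  # yaw rate
    return v, omega
```

    Calibrating `icr_coeff` against laser-scanner localization, as the paper does, is what lets dead reckoning on a slipping platform remain accurate.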

  9. Biosleeve Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Assad, Christopher (Inventor)

    2016-01-01

    Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight fitting sleeve worn on a user arm and including a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal processing, and communications electronics may be built into the sleeve and control data may be transmitted wirelessly to the controlled machine or robotic device.

  10. Robotic vision. [process control applications

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. Design and selection of active sensors, employing radiation of radio waves, sound waves, and laser light, respectively, to illuminate unobservable features in the scene are considered, as are design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into different collections of contiguous picture elements having common characteristics such as color, brightness, or texture, is examined, with emphasis on the edge detection technique. The IMFEX (image feature extractor) system, performing edge detection and thresholding at 30 frames/sec television frame rates, is described. Template matching and discrimination approaches to object recognition are noted. Applications of robotic vision in industry, for tasks too monotonous or too dangerous for workers, are mentioned.

  11. Relative hardness measurement of soft objects by a new fiber optic sensor

    NASA Astrophysics Data System (ADS)

    Ahmadi, Roozbeh; Ashtaputre, Pranav; Abou Ziki, Jana; Dargahi, Javad; Packirisamy, Muthukumaran

    2010-06-01

    The measurement of the relative hardness of soft objects enables replication of the tactile perception capabilities of the human finger. This ability has many applications not only in the automation and robotics industry but also in areas such as aerospace and robotic surgery, where a robotic tool interacts with a soft contact object. One practical example of interaction between a solid robotic instrument and a soft contact object occurs during robotically assisted minimally invasive surgery. Measuring the relative hardness of bio-tissue in contact with the robotic instrument helps surgeons perform this type of surgery more reliably. In the present work, a new optical sensor is proposed to measure the relative hardness of contact objects. To measure the hardness of a contact object, the sensor, like a human finger, must apply a small force/deformation to the object. The applied force and resulting deformation are then recorded at certain points to enable the relative hardness measurement. In this work, force/deformation data for a contact object are recorded at certain points by the proposed optical sensor, and the recorded data are used to measure the relative hardness of soft objects. Based on the proposed design, an experimental setup was developed and experimental tests were performed to measure the relative hardness of elastomeric materials. The experimental results verify the ability of the proposed optical sensor to measure the relative hardness of elastomeric samples.

  12. Vector-algebra approach to extract Denavit-Hartenberg parameters of assembled robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.

    1983-01-01

    The Denavit-Hartenberg parameters characterize the joint axis systems in a robot arm and, naturally, appear in the transformation matrices from one joint axis system to another. These parameters are needed in the control of robot arms and in the passage of sensor information along the arm. This paper presents a vector algebra method to determine these parameters for any assembled robot arm. The idea is to measure the location of the robot hand (or extension) for different joint angles and then use these measurements to calculate the parameters.
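For context, the standard homogeneous transform that the four Denavit-Hartenberg parameters populate can be sketched as below. This is the textbook matrix the abstract alludes to; the paper's contribution is the vector-algebra *measurement* of the parameters, not the matrix itself:

```python
import math

def dh_transform(theta, d, a, alpha):
    """4x4 homogeneous transform from joint frame i-1 to frame i using the
    classic Denavit-Hartenberg parameters: rotation theta about z,
    translation d along z, translation a along x, rotation alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

# All-zero parameters give the identity transform:
T = dh_transform(0.0, 0.0, 0.0, 0.0)
```

Chaining such matrices joint by joint is how sensor information is passed along the arm, which is why the parameters must be known accurately for an assembled robot.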

  13. Adding navigation, artificial audition and vital sign monitoring capabilities to a telepresence mobile robot for remote home care applications.

    PubMed

    Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois

    2017-07-01

    A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What is missing for making them effective remote telepresence systems for home care assistance are capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors or, in other words, minimizing cognitive load and maximizing situation awareness. This paper describes our approach adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.

  14. Upper Torso Control for HOAP-2 Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Sandoval, Steven P.

    2005-01-01

Humanoid robots have physical builds and motion patterns similar to those of humans. Not only does this provide a suitable operating environment for the humanoid, but it also opens many research doors into how humans function. The overall objective is to replace humans operating in unsafe environments. A first target application is the assembly of structures for future lunar-planetary bases. The initial development platform is a Fujitsu HOAP-2 humanoid robot. The goal for the project is to demonstrate the capability of a HOAP-2 to autonomously construct a cubic frame using provided tubes and joints. This task will require the robot to identify several items, pick them up, transport them to the build location, then properly assemble the structure. The ability to grasp and assemble the pieces will require improved motor control and the addition of tactile feedback sensors. In recent years, learning-based control has become more and more popular; to implement this method we will be using the Adaptive Neural Fuzzy Inference System (ANFIS). When using neural networks for control, no complex model of the system must be constructed in advance; only input/output relationships are required to model the system.
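The idea of modeling a system from input/output relationships alone, with no physical model, can be sketched with a toy gradient-descent fit. This is our illustration of the principle only; ANFIS itself combines fuzzy membership functions with neural learning:

```python
def fit_io_model(inputs, outputs, lr=0.1, epochs=200):
    """Fit y ~ w*x + b from observed input/output pairs alone, via plain
    gradient descent on squared error -- no model of the plant is built,
    only the input/output relationship is learned."""
    w, b = 0.0, 0.0
    n = len(inputs)
    for _ in range(epochs):
        gw = sum((w * x + b - y) * x for x, y in zip(inputs, outputs)) / n
        gb = sum((w * x + b - y) for x, y in zip(inputs, outputs)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Pairs generated by the (unknown to the learner) relationship y = 2x + 1:
w, b = fit_io_model([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The learner recovers the relationship from data alone, which is the advantage the abstract claims for learning-based control over hand-derived dynamic models.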

  15. Designing minimal space telerobotics systems for maximum performance

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Long, Mark K.; Steele, Robert D.

    1992-01-01

The design of the remote site of a local-remote telerobot control system is described; the design addresses the constraint of limited computational power available at the remote site while providing a large range of control capabilities. The Modular Telerobot Task Execution System (MOTES) provides supervised autonomous control, shared control, and teleoperation for a redundant manipulator. The system is capable of nominal task execution as well as monitoring and reflex motion. The MOTES system is kept minimal while providing a large capability by limiting its functionality to only that which is necessary at the remote site and by utilizing a unified multi-sensor-based impedance control scheme. A command interpreter similar to one used on robotic spacecraft is used to interpret commands received from the local site. The system is written in Ada, runs in a VME environment on 68020 processors, and initially controls a Robotics Research K1207 7-degree-of-freedom manipulator.
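A generic single-axis impedance control law, of the kind a multi-sensor impedance scheme builds on, looks like the following textbook sketch (this is not the MOTES implementation, just the standard mass-spring-damper target behavior):

```python
def impedance_accel(m, b, k, x, v, x_des, f_ext):
    """Commanded acceleration from the impedance law
        m*a + b*v + k*(x - x_des) = f_ext,
    solved for a. The manipulator is made to *behave like* a
    mass-spring-damper around the setpoint x_des, so contact forces
    f_ext produce compliant rather than rigid responses."""
    return (f_ext - b * v - k * (x - x_des)) / m

# With no external force and the axis displaced from the setpoint,
# the commanded acceleration pulls it back toward x_des:
a = impedance_accel(m=1.0, b=2.0, k=10.0, x=0.1, v=0.0, x_des=0.0, f_ext=0.0)
```

The appeal for a resource-limited remote site is that one such law, fed by whichever sensors are available, covers autonomous, shared, and teleoperated modes.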

  16. Multidigit force control during unconstrained grasping in response to object perturbations

    PubMed Central

    Haschke, Robert; Ritter, Helge; Santello, Marco; Ernst, Marc O.

    2017-01-01

Because of the complex anatomy of the human hand, in the absence of external constraints, a large number of postures and force combinations can be used to attain a stable grasp. Motor synergies provide a viable strategy to solve this problem of motor redundancy. In this study, we exploited the technical advantages of an innovative sensorized object to study unconstrained hand grasping within the theoretical framework of motor synergies. Participants were required to grasp, lift, and hold the sensorized object. During the holding phase, we repetitively applied external disturbance forces and torques and recorded the spatiotemporal distribution of grip forces produced by each digit. We found that the time to reach the maximum grip force during each perturbation was roughly equal across fingers, consistent with a synchronous, synergistic stiffening across digits. We further evaluated this hypothesis by comparing the force distribution of human grasping vs. robotic grasping, where the control strategy was set by the experimenter. We controlled the global hand stiffness of the robotic hand and found that this control algorithm produced a force pattern qualitatively similar to human grasping performance. Our results suggest that the nervous system uses a default whole hand synergistic control to maintain a stable grasp regardless of the number of digits involved in the task, their position on the objects, and the type and frequency of external perturbations. NEW & NOTEWORTHY We studied hand grasping using a sensorized object allowing unconstrained finger placement. During object perturbation, the time to reach the peak force was roughly equal across fingers, consistent with a synergistic stiffening across fingers. Force distribution of a robotic grasping hand, where the control algorithm is based on global hand stiffness, was qualitatively similar to human grasping. This suggests that the central nervous system uses a default whole hand synergistic control to maintain a stable grasp. PMID:28228582
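The "global hand stiffness" strategy used for the robotic comparison can be caricatured as every digit sharing a single stiffness gain; the function and values below are our simplification, not the study's controller:

```python
def grip_forces(global_stiffness, penetrations):
    """Per-digit normal forces under one shared (global) stiffness gain:
    every finger i applies f_i = K * penetration_i toward the object,
    so a perturbation stiffens all contacting digits synchronously."""
    return [global_stiffness * p for p in penetrations]

# A perturbation that presses the object 1 mm (0.001 m) into two of
# three digits, with a hypothetical shared stiffness of 50 N/m:
forces = grip_forces(global_stiffness=50.0, penetrations=[0.001, 0.001, 0.0])
```

Because the gain is shared, force rises together on all loaded digits, which is the synchronous stiffening the human data also showed.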

  17. Soft Ultrathin Electronics Innervated Adaptive Fully Soft Robots.

    PubMed

    Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Kim, Hojin; Rao, Zhoulyu; Li, Yuhang; Chen, Weiqiu; Song, Jizhou; Verduzco, Rafael; Yu, Cunjiang

    2018-03-01

Soft robots outperform conventional hard robots in safety, adaptability, and the range of motions they can achieve. The development of fully soft robots, especially robots built entirely from smart soft materials to mimic soft-bodied animals, is still nascent. In addition, to date, existing soft robots cannot adapt themselves to the surrounding environment, i.e., sense and respond with adaptive motion, as animals do. Here, fully soft robots innervated with compliant ultrathin sensing and actuating electronics, which can sense the environment and perform soft-bodied crawling adaptively, mimicking an inchworm, are reported. The soft robots are constructed from actuators of open-mesh ultrathin deformable heaters, sensors of single-crystal Si optoelectronic photodetectors, and a thermally responsive artificial muscle of carbon-black-doped liquid-crystal elastomer (LCE-CB) nanocomposite. The results demonstrate that adaptive crawling locomotion can be realized through the conjugation of sensing and actuation: the sensors sense the environment and the actuators respond correspondingly, controlling the locomotion autonomously by regulating the deformation of the LCE-CB bimorphs. The strategy of innervating soft sensing and actuating electronics with artificial muscles paves the way for the development of smart autonomous soft robots. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. RCTS: A flexible environment for sensor integration and control of robot systems; the distributed processing approach

    NASA Technical Reports Server (NTRS)

    Allard, R.; Mack, B.; Bayoumi, M. M.

    1989-01-01

    Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. In order to reduce this time, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power and retains the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested, using the same basic software structure.

  19. Wireless Self-powered Visual and NDE Robotic Inspection System for Live Gas Distribution Mains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Susan Burkett; Hagen Schempf

    2006-01-31

Carnegie Mellon University (CMU), under contract from the Department of Energy/National Energy Technology Laboratory (DoE/NETL) and with co-funding from the Northeast Gas Association (NGA), has completed the overall system design of the next-generation Explorer-II (X-II) live gas main NDE and visual inspection robot platform. The design is based on the Explorer-I prototype, which was built and field-tested under a prior (also DoE- and NGA-co-funded) program and served as the validation that self-powered robots under wireless control could access and navigate live natural gas distribution mains. The X-II system design (~8 ft and 66 lbs) was heavily based on the X-I design, yet was substantially expanded to allow the addition of NDE sensor systems (while retaining its visual inspection capability), making it a modular system, and expanding its ability to operate at pressures up to 750 psig (high-pressure and unpiggable steel-pipe distribution mains). A new electronics architecture and on-board software kernel were added to again improve system performance. A locating sonde system was integrated to allow for absolute position-referencing during inspection (coupled with external differential GPS) and emergency-locating. The power system was upgraded to utilize lithium-based battery cells for an increase in mission time. The system architecture now relies on a dual set of end camera-modules to house the 32-bit processors (Single-Board Computer or SBC) as well as the imaging and wireless (off-board) and CAN-based (on-board) communication hardware and software systems (as well as the sonde coil and electronics). The drive-modules (2 ea.) are still responsible for bracing (and centering) and for driving the robot train in push/pull fashion into and through the pipes and obstacles. The steering modules and their arrangement still allow the robot to configure itself to perform any-angle (up to 90 deg) turns in any orientation (incl.
vertical), and enable the live launching and recovery of the system using custom fittings and a (to be developed) launch-chamber/-tube. The battery modules are used to power the system by providing power to the robot's bus. The support modules perform the functions of centration for the rest of the train as well as odometry pickups using incremental encoding schemes. The electronics architecture is based on a distributed (8-bit) microprocessor architecture (at least 1 in ea. module) communicating with one of the two 32-bit SBCs, which manage all video-processing, posture and motion control as well as CAN and wireless communications. The operator controls the entire system from an off-board (laptop) controller, which is in constant wireless communication with the robot train in the pipe. The sensor modules collect data and forward it to the robot operator's computer (via the CAN-wireless communications chain); the operator then transfers it to a dedicated NDE data-storage and post-processing computer for further (real-time or off-line) analysis. CMU has fully designed every module in terms of the mechanical, electrical and software elements (architecture only). Substantial effort has gone into pre-prototyping to uncover mechanical, electrical and software issues for critical elements of the design. Design requirements for sensor providers were also detailed, finalized, and provided to them for inclusion in their designs. CMU expects to start 2006 with a detailed design effort for both mechanical and electrical components, followed by procurement and fabrication efforts in late winter/spring 2006. The assembly and integration efforts will occupy all of the spring and summer of 2006. Software development will also be a major effort in 2006, and will result in porting and debugging of code at the module and train levels in late summer and fall of 2006.
Final pipe mock-up testing is expected in late fall and early winter 2006 with an acceptance demonstration of the robot train (with a sensor-module mock-up) planned for DoE/NGA towards the end of 2006.

  20. Progress in the development of shallow-water mapping systems

    USGS Publications Warehouse

    Bergeron, E.; Worley, C.R.; O'Brien, T.

    2007-01-01

The USGS (US Geological Survey) Coastal and Marine Geology Program has deployed an advanced autonomous shallow-draft robotic vehicle, Iris, for shallow-water mapping in Apalachicola Bay, Florida. The vehicle incorporates a side scan sonar system, seismic-reflection profiler, single-beam echosounder, and global positioning system (GPS) navigation. It is equipped with an onboard microprocessor-based motor controller, delivering signals for speed and steering to hull-mounted brushless direct-current thrusters. An onboard motion sensor in the Sea Robotics vehicle control system enclosure has been integrated in the vehicle to measure the vehicle heave, pitch, roll, and heading. Three water-tight enclosures are mounted along the vehicle axis for the Edgetech computer and electronics system including the Sea Robotics computer, a control and wireless communications system, and a Thales ZXW real-time kinematic (RTK) GPS receiver. The vehicle has produced high-quality seismic-reflection and side scan sonar data, which will help in developing the baseline oyster habitat maps.

  1. Toward autonomous avian-inspired grasping for micro aerial vehicles.

    PubMed

    Thomas, Justin; Loianno, Giuseppe; Polin, Joseph; Sreenath, Koushil; Kumar, Vijay

    2014-06-01

    Micro aerial vehicles, particularly quadrotors, have been used in a wide range of applications. However, the literature on aerial manipulation and grasping is limited and the work is based on quasi-static models. In this paper, we draw inspiration from agile, fast-moving birds such as raptors, that are able to capture moving prey on the ground or in water, and develop similar capabilities for quadrotors. We address dynamic grasping, an approach to prehensile grasping in which the dynamics of the robot and its gripper are significant and must be explicitly modeled and controlled for successful execution. Dynamic grasping is relevant for fast pick-and-place operations, transportation and delivery of objects, and placing or retrieving sensors. We show how this capability can be realized (a) using a motion capture system and (b) without external sensors relying only on onboard sensors. In both cases we describe the dynamic model, and trajectory planning and control algorithms. In particular, we present a methodology for flying and grasping a cylindrical object using feedback from a monocular camera and an inertial measurement unit onboard the aerial robot. This is accomplished by mapping the dynamics of the quadrotor to a level virtual image plane, which in turn enables dynamically-feasible trajectory planning for image features in the image space, and a vision-based controller with guaranteed convergence properties. We also present experimental results obtained with a quadrotor equipped with an articulated gripper to illustrate both approaches.

  2. The prototype cameras for trans-Neptunian automatic occultation survey

    NASA Astrophysics Data System (ADS)

    Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul

    2016-08-01

The Transneptunian Automated Occultation Survey (TAOS II) is a project with three robotic telescopes to detect stellar occultation events generated by trans-Neptunian objects (TNOs). The TAOS II project aims to monitor about 10,000 stars simultaneously at 20 Hz to achieve a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view of the 1.3 m telescope with a mosaic of ten 4.5k×2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. Because of the requirements of high performance and high speed, the development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help with the commissioning of the robotic telescope system. The prototype camera uses the small-format e2v CIS 107 device, but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled to the operating temperature of about 200 K, like the science array, by a cryogenic cooler. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consists of an analog part and a Xilinx FPGA-based digital circuit. One FPGA is needed to control and process the signal from a CMOS sensor for 20 Hz region-of-interest (ROI) readout.

  3. Two-Armed, Mobile, Sensate Research Robot

    NASA Technical Reports Server (NTRS)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach either the floor or high shelves, or some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control.
A receptacle for an end effector is mounted on the tip of the wrist and contains a force-and-torque sensor that provides feedback for force (compliance) control of the arm. The end effector could be a tool or a robot hand, depending on the application.

  4. Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1987-01-01

    Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
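The reported linearity between zenith angle and radial image location is what an equidistant fisheye projection, r = f*theta, predicts. The sketch below works under that assumed model (the focal length value is hypothetical):

```python
import math

def image_radius(theta, f_pixels):
    """Equidistant fisheye forward model: radial image distance (pixels)
    of a scene point at zenith angle theta (radians)."""
    return f_pixels * theta

def zenith_from_radius(r_pixels, f_pixels):
    """Inverse of the equidistant model: recover the zenith angle of a
    target from its radial distance in the omnidirectional image."""
    return r_pixels / f_pixels

# A target 30 degrees off the optical axis, with an assumed 200-pixel
# focal length, maps out and back consistently:
r = image_radius(math.radians(30.0), 200.0)
theta = zenith_from_radius(r, 200.0)
```

Calibrating f (and correcting residual distortion) against known targets is exactly the linearity test between zenith angle and image location that the experiments describe.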

  5. Inverse kinematic-based robot control

    NASA Technical Reports Server (NTRS)

    Wolovich, W. A.; Flueckiger, K. F.

    1987-01-01

A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x,y,z) space, such as simple tracking along one or more straight line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator; i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, theta_1, theta_2, and theta_3, which produce a desired wrist origin position P_xw, P_yw, and P_zw at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
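For a flavor of closed-form inverse kinematics in the simplest setting, the classic planar two-link case is sketched below. This is a standard textbook solution, not the paper's new method, which addresses the full three-angle wrist-positioning problem:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm with link
    lengths l1, l2: returns joint angles (theta1, theta2) placing the
    wrist point at (x, y), elbow-down branch."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    # Clamp against round-off before acos; |c2| > 1 means unreachable.
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Reaching straight out to full extension (x = l1 + l2, y = 0)
# requires both joints at zero:
t1, t2 = two_link_ik(2.0, 0.0, 1.0, 1.0)
```

Even this toy case shows the structure of the problem: a Cartesian goal is converted into link-space angles, with multiple solution branches (elbow-up vs. elbow-down) to choose between.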

  6. TU-AB-201-03: A Robot for the Automated Delivery of An Electromagnetic Tracking Sensor for the Localization of Brachytherapy Catheters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Don, S; Cormack, R; Viswanathan, A

Purpose: To present a programmable robotic system for the accurate and fast deployment of an electromagnetic (EM) sensor for brachytherapy catheter localization. Methods: A robotic system for deployment of an EM sensor was designed and built. The system was programmed to increment the sensor position at specified time and space intervals. Sensor delivery accuracy was measured in a phantom using the localization of the EM sensor and tested in different environmental conditions. Accuracy was tested by measuring the distance between the physical locations reached by the sensor (measured by the EM tracker) and the intended programmed locations. Results: The system consisted of a stepper motor connected to drive wheels (that grip the cable to move the sensor) and a series of guides to connect to a brachytherapy transfer tube, all controlled by a programmable Arduino microprocessor. The total cost for parts was <$300. The positional accuracy of the sensor location was within 1 mm of the expected position provided by the motorized guide system. Acquisition speed to localize a brachytherapy catheter with 20 cm of active length was 10 seconds. The current design showed some cable slip and warping depending on environment temperature. Conclusion: The use of EM tracking for the localization of brachytherapy catheters has been previously demonstrated. Efficient data acquisition and artifact reduction requires fast and accurate deployment of an EM sensor in consistent, repeatable patterns, which cannot practically be achieved manually. The design of an inexpensive, programmable robot allowing for the precise deployment of stepping patterns was presented, and a prototype was built. Further engineering is necessary to ensure that the device provides efficient independent localization of brachytherapy catheters. This research was funded by the Kaye Family Award.
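The reported accuracy check, comparing programmed sensor dwell positions against EM-tracker readings, can be sketched as below; the positions and readings are hypothetical, not the study's measurements:

```python
def max_position_error(programmed, measured):
    """Largest absolute deviation (same units, e.g. mm) between programmed
    sensor dwell positions along the catheter and EM-tracker readings."""
    return max(abs(p - m) for p, m in zip(programmed, measured))

# A hypothetical 5 mm stepping pattern excerpt with simulated tracker
# readings; sub-millimeter worst-case error would meet the reported
# "within 1 mm" accuracy:
programmed = [0.0, 5.0, 10.0, 15.0]
measured = [0.1, 5.2, 9.4, 15.3]
worst = max_position_error(programmed, measured)
```

Running this over a full 20 cm pattern is the kind of repeatable, quantitative check that manual sensor deployment cannot practically provide.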

  7. Flexible robotics with electromagnetic tracking improves safety and efficiency during in vitro endovascular navigation.

    PubMed

    Schwein, Adeline; Kramer, Ben; Chinnadurai, Ponraj; Walker, Sean; O'Malley, Marcia; Lumsden, Alan; Bismuth, Jean

    2017-02-01

    One limitation of the use of robotic catheters is the lack of real-time three-dimensional (3D) localization and position updating: they are still navigated based on two-dimensional (2D) X-ray fluoroscopic projection images. Our goal was to evaluate whether incorporating an electromagnetic (EM) sensor on a robotic catheter tip could improve endovascular navigation. Six users were tasked to navigate using a robotic catheter with incorporated EM sensors in an aortic aneurysm phantom. All users cannulated two anatomic targets (left renal artery and posterior "gate") using four visualization modes: (1) standard fluoroscopy mode (control), (2) 2D fluoroscopy mode showing real-time virtual catheter orientation from EM tracking, (3) 3D model of the phantom with anteroposterior and endoluminal view, and (4) 3D model with anteroposterior and lateral view. Standard X-ray fluoroscopy was always available. Cannulation and fluoroscopy times were noted for every mode. 3D positions of the EM tip sensor were recorded at 4 Hz to establish kinematic metrics. The EM sensor-incorporated catheter navigated as expected according to all users. The success rate for cannulation was 100%. For the posterior gate target, mean cannulation times in minutes:seconds were 8:12, 4:19, 4:29, and 3:09, respectively, for modes 1, 2, 3 and 4 (P = .013), and mean fluoroscopy times were 274, 20, 29, and 2 seconds, respectively (P = .001). 3D path lengths, spectral arc length, root mean dimensionless jerk, and number of submovements were significantly improved when EM tracking was used (P < .05), showing higher quality of catheter movement with EM navigation. The EM tracked robotic catheter allowed better real-time 3D orientation, facilitating navigation, with a reduction in cannulation and fluoroscopy times and improvement of motion consistency and efficiency. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
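Kinematic metrics such as the 3D path length can be computed directly from the 4 Hz EM position samples; the sketch below is our own minimal version, not the study's analysis code:

```python
import math

def path_length(samples):
    """Total 3D path length of a sequence of (x, y, z) position samples:
    the sum of Euclidean distances between consecutive samples. Shorter,
    smoother paths for the same cannulation indicate more efficient
    catheter movement."""
    total = 0.0
    for p0, p1 in zip(samples, samples[1:]):
        total += math.dist(p0, p1)
    return total

# A catheter tip advancing 1 mm along x per 4 Hz sample, five samples:
track = [(i * 1.0, 0.0, 0.0) for i in range(5)]
length = path_length(track)
```

Metrics like spectral arc length and dimensionless jerk build on the same sampled trajectory, adding smoothness information on top of this distance measure.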

  8. Real-time control for manufacturing space shuttle main engines: Work in progress

    NASA Technical Reports Server (NTRS)

    Ruokangas, Corinne C.

    1988-01-01

    During the manufacture of space-based assemblies such as Space Shuttle Main Engines, flexibility is required due to the high-cost and low-volume nature of the end products. Various systems have been developed pursuing the goal of adaptive, flexible manufacturing for several space applications, including an Advanced Robotic Welding System for the manufacture of complex components of the Space Shuttle Main Engines. The Advanced Robotic Welding System (AROWS) is an on-going joint effort, funded by NASA, between NASA/Marshall Space Flight Center, and two divisions of Rockwell International: Rocketdyne and the Science Center. AROWS includes two levels of flexible control of both motion and process parameters: Off-line programming using both geometric and weld-process data bases, and real-time control incorporating multiple sensors during weld execution. Both control systems were implemented using conventional hardware and software architectures. The feasibility of enhancing the real-time control system using the problem-solving architecture of Schemer is investigated and described.

  9. Evolution and advanced technology. [of Flight Telerobotic Servicer

    NASA Technical Reports Server (NTRS)

    Ollendorf, Stanford; Pennington, Jack E.; Hansen, Bert, III

    1990-01-01

    The NASREM architecture with its standard interfaces permits development and evolution of the Flight Telerobotic Servicer to greater autonomy. Technologies in control strategies for an arm with seven DOF, including a safety system containing skin sensors for obstacle avoidance, are being developed. Planning and robotic execution software includes symbolic task planning, world model data bases, and path planning algorithms. Research over the last five years has led to the development of laser scanning and ranging systems, which use coherent semiconductor laser diodes for short range sensing. The possibility of using a robot to autonomously assemble space structures is being investigated. A control framework compatible with NASREM is being developed that allows direct global control of the manipulator. Researchers are developing systems that permit an operator to quickly reconfigure the telerobot to do new tasks safely.

  10. A Survey of Robotic Technology.

    DTIC Science & Technology

    1983-07-01

developed the following definition of a robot: A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized...subroutines commands to specific actuators, computations based on sensor data, etc. For instance, the job might be to assemble an automobile ...the set-up developed at Draper Labs to enable a robot to assemble an automobile alternator. The assembly operation is impressive to watch. The number

  11. Ultrafast Dynamic Pressure Sensors Based on Graphene Hybrid Structure.

    PubMed

    Liu, Shanbiao; Wu, Xing; Zhang, Dongdong; Guo, Congwei; Wang, Peng; Hu, Weida; Li, Xinming; Zhou, Xiaofeng; Xu, Hejun; Luo, Chen; Zhang, Jian; Chu, Junhao

    2017-07-19

    Mechanically flexible electronic skin has focused on sensing various physical parameters, such as pressure and temperature. Studies of material design and array-accessible devices are the building blocks of strain sensors for subtle pressure sensing. Here, we report a new and facile preparation of a graphene hybrid structure with an ultrafast dynamic pressure response. Graphene oxide nanosheets are used as a surfactant to prevent graphene restacking in aqueous solution. This graphene hybrid structure exhibits a frequency-independent pressure resistive sensing property. Exceeding natural skin, such pressure sensors can provide transient responses from static pressure up to 10 000 Hz dynamic frequencies. Integrated with the controlling system, the array-accessible sensors can manipulate a robot arm and self-rectify the temperature of a heating blanket. This may pave a path toward future applications of graphene-based wearable electronics.

  12. Long-Term Simultaneous Localization and Mapping in Dynamic Environments

    DTIC Science & Technology

    2015-01-01

    ...core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the...and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory...distributed stochastic neighbor embedding... One of the core competencies required for autonomous mobile robotics is the ability to use sensors...

  13. Coordinated perception by teams of aerial and ground robots

    NASA Astrophysics Data System (ADS)

    Grocholsky, Benjamin P.; Swaminathan, Rahul; Kumar, Vijay; Taylor, Camillo J.; Pappas, George J.

    2004-12-01

    Air and ground vehicles exhibit complementary capabilities and characteristics as robotic sensor platforms. Fixed-wing aircraft offer a broad field of view and rapid coverage of search areas. However, minimum operating airspeed and altitude limits, combined with attitude uncertainty, place a lower limit on their ability to detect and localize ground features. Ground vehicles, on the other hand, offer high-resolution sensing over relatively short ranges, with the disadvantage of slow coverage. This paper presents a decentralized architecture and solution methodology for seamlessly realizing the collaborative potential of air and ground robotic sensor platforms. We provide a framework based on an established approach to the underlying sensor fusion problem, which provides transparent integration of information from heterogeneous sources. An information-theoretic utility measure captures the task objective and robot inter-dependencies. A simple distributed solution mechanism is employed to determine team members' sensing trajectories subject to the constraints of individual vehicle and sensor sub-systems. The architecture is applied to a mission involving searching for and localizing an unknown number of targets in a user-specified search area. Results for a team of two fixed-wing UAVs and two all-terrain UGVs equipped with vision sensors are presented.
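    The information-theoretic utility measure described above can be illustrated with a minimal sketch. This assumes linear-Gaussian sensor models and invented noise figures, not the authors' exact formulation: the utility of a candidate observation is the entropy reduction (in nats) it produces in the target estimate, which favours the precise short-range UGV sensor over the coarse wide-area UAV sensor once the target is within reach of both.

```python
import numpy as np

def info_gain(P_prior, H, R):
    """Entropy reduction (nats) from fusing a linear-Gaussian observation
    z = H x + v, v ~ N(0, R), into a state estimate with covariance P_prior."""
    S = H @ P_prior @ H.T + R                      # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)           # Kalman gain
    P_post = (np.eye(P_prior.shape[0]) - K @ H) @ P_prior
    _, logdet_prior = np.linalg.slogdet(P_prior)
    _, logdet_post = np.linalg.slogdet(P_post)
    return 0.5 * (logdet_prior - logdet_post)

# Two hypothetical platforms observing a 2-D target position:
P = np.diag([25.0, 25.0])          # prior position uncertainty (m^2)
H = np.eye(2)                      # both sensors observe position directly
R_uav = np.diag([16.0, 16.0])      # coarse, broad-coverage UAV sensor
R_ugv = np.diag([0.25, 0.25])      # precise, short-range UGV sensor
gain_uav = info_gain(P, H, R_uav)
gain_ugv = info_gain(P, H, R_ugv)
# A decentralized utility-driven planner would prefer the higher-gain action.
```

    In the full decentralized scheme, each platform would evaluate such gains for its own candidate trajectories and coordinate through the shared fused estimate rather than through a central planner.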

  14. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor

    NASA Astrophysics Data System (ADS)

    Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping

    2017-05-01

    To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole, with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on the spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of the error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method achieves high measurement accuracy, and that this efficient and flexible system is suitable for online measurement in industrial settings.
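    The center-to-center distance errors at the heart of this calibration can be sketched as a residual function. This is illustrative only; the pair ordering, variable names, and the idea of feeding the residuals to a generic least-squares solver are assumptions, not the paper's exact procedure.

```python
import numpy as np

def distance_residuals(centers_est, d_nominal):
    """Center-to-center distance errors for a three-standard-sphere target.

    centers_est: (3, 3) array of sphere centres reconstructed through the
                 synthesized kinematic model for one parameter estimate.
    d_nominal:   dict mapping index pairs to certified pairwise distances.
    A calibration routine would minimize these residuals over the combined
    hand-eye and distal-link parameters."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    return np.array([
        np.linalg.norm(centers_est[i] - centers_est[j]) - d_nominal[(i, j)]
        for i, j in pairs
    ])

# Perfectly reconstructed centres yield zero residuals:
centers = np.array([[0.0, 0.0, 0.0],
                    [0.1, 0.0, 0.0],
                    [0.1, 0.1, 0.0]])
nominal = {(0, 1): 0.1, (1, 2): 0.1, (0, 2): 0.1 * np.sqrt(2)}
res = distance_residuals(centers, nominal)
```

    Because only relative distances enter the residuals, the method needs no external ground-truth pose of the target, which is what makes a simple three-sphere artifact sufficient for parameter identification.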

  15. The Performance Analysis of an Indoor Mobile Mapping System with RGB-D Sensor

    NASA Astrophysics Data System (ADS)

    Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.

    2015-08-01

    Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber city modeling, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry, and multi-sensor integrated mapping technology has been established to achieve this integration. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, especially to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor, and a light detection and ranging (LIDAR) sensor mounted on a robot. In summary, this paper designs the payload for an indoor MMS to generate floor plans. The first session concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by the two sensors on the robot, the Kinect RGB-D sensor and the LIDAR. Moreover, the generated floor plan is compared with the known plan for both validation and verification.
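    The comparison of a generated floor plan against a known reference plan can be sketched with a simple cell-wise agreement metric on rasterized occupancy grids. The intersection-over-union measure below is a common choice but an assumption here; the paper's exact verification procedure may differ.

```python
import numpy as np

def map_agreement(generated, reference):
    """Intersection-over-union of occupied cells between a generated
    floor plan and a known reference plan, both given as boolean
    occupancy grids of the same shape (1 = occupied)."""
    gen = np.asarray(generated, dtype=bool)
    ref = np.asarray(reference, dtype=bool)
    inter = np.logical_and(gen, ref).sum()
    union = np.logical_or(gen, ref).sum()
    return inter / union if union else 1.0  # empty maps agree trivially
```

    In practice the generated plan would first be aligned to the reference (e.g., by a rigid registration step) before scoring, since DG errors shift the whole map rather than individual cells.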

  16. Planetary micro-rover operations on Mars using a Bayesian framework for inference and control

    NASA Astrophysics Data System (ADS)

    Post, Mark A.; Li, Junquan; Quine, Brendan M.

    2016-03-01

    With the recent progress toward the application of commercially-available hardware to small-scale space missions, it is now becoming feasible for groups of small, efficient robots based on low-power embedded hardware to perform simple tasks on other planets in the place of large-scale, heavy and expensive robots. In this paper, we describe the design and programming of the Beaver micro-rover developed for Northern Light, a Canadian initiative to send a small lander and rover to Mars to study the Martian surface and subsurface. For a small, hardware-limited rover to handle an uncertain and mostly unknown environment without constant management by human operators, we use a Bayesian network of discrete random variables as an abstraction of expert knowledge about the rover and its environment, and inference operations for control. A framework for efficient construction of and inference over a Bayesian network using only the C language and fixed-point mathematics on embedded hardware has been developed for the Beaver to make intelligent decisions with minimal sensor data. We study the performance of the Beaver as it probabilistically maps a simple outdoor environment with sensor models that include uncertainty. Results indicate that the Beaver and other small and simple robotic platforms can make use of a Bayesian network to make intelligent decisions in uncertain planetary environments.
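    The combination of discrete Bayesian inference with fixed-point arithmetic described above can be sketched with a toy single-node update in Q16 fixed point. The terrain states, probabilities, and helper names are invented for illustration; the rover's actual network and C implementation are of course richer.

```python
Q = 16                  # Q16 fixed point: value = raw / 2**16
ONE = 1 << Q

def fx(p):
    """Convert a float probability to fixed point."""
    return int(round(p * ONE))

def fx_mul(a, b):
    """Fixed-point multiply: scale the integer product back down."""
    return (a * b) >> Q

def posterior(prior, likelihood):
    """Bayes update over a small discrete state space, integers only:
    joint = prior * likelihood, then normalize in fixed point."""
    joint = [fx_mul(pr, lk) for pr, lk in zip(prior, likelihood)]
    total = sum(joint)
    return [(j << Q) // total for j in joint]

# Hypothetical terrain node with states (flat, rough, blocked):
prior      = [fx(0.6), fx(0.3), fx(0.1)]
# Likelihood of a "high vibration" sensor reading under each state:
likelihood = [fx(0.1), fx(0.6), fx(0.8)]
post = posterior(prior, likelihood)   # posterior ~ (0.19, 0.56, 0.25)
```

    Avoiding floating point this way keeps the inference loop cheap and deterministic on microcontrollers without an FPU, at the cost of a small, bounded rounding error per update.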

  17. Command Recognition of Robot with Low Dimension Whole-Body Haptic Sensor

    NASA Astrophysics Data System (ADS)

    Ito, Tatsuya; Tsuji, Toshiaki

    The authors have developed “haptic armor”, a whole-body haptic sensor that has the ability to estimate contact position. Although it was developed for safety assurance of robots in human environments, it can also be used as an interface. This paper proposes a command recognition method based on finger trace information. This paper also discusses some technical issues for improving the recognition accuracy of this system.
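    A finger-trace command scheme of the kind described might be sketched as follows. The gesture set, distance threshold, and coordinate convention are all assumptions for illustration, not the paper's method: each trace on the sensor surface is reduced to its net displacement and mapped to a coarse motion command.

```python
import math

def classify_trace(points):
    """Classify a finger trace (a sequence of (x, y) contact estimates from
    the whole-body haptic sensor) into a coarse command by its net
    displacement direction. Very short traces are treated as taps."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < 0.05:          # below threshold: a tap
        return "stop"
    angle = math.degrees(math.atan2(dy, dx)) % 360
    for cmd, centre in (("right", 0), ("forward", 90),
                        ("left", 180), ("back", 270)):
        # accept within +/- 45 degrees of each sector centre, with wrap-around
        if abs((angle - centre + 180) % 360 - 180) <= 45:
            return cmd
    return "right"  # only reachable at the exact 315-degree sector boundary
```

    Using only the endpoints makes the classifier robust to the positional noise of a contact-estimating skin sensor, though a real recognizer would likely also consider the trace's shape and duration.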

  18. Learning classifier systems for single and multiple mobile robots in unstructured environments

    NASA Astrophysics Data System (ADS)

    Bay, John S.

    1995-12-01

    The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture, i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.

  19. Dual-Arm Generalized Compliant Motion With Shared Control

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1994-01-01

    Dual-Arm Generalized Compliant Motion (DAGCM) primitive computer program implementing improved unified control scheme for two manipulator arms cooperating in task in which both grasp same object. Provides capabilities for autonomous, teleoperation, and shared control of two robot arms. Unifies cooperative dual-arm control with multi-sensor-based task control and makes complete task-control capability available to higher-level task-planning computer system via large set of input parameters used to describe desired force and position trajectories followed by manipulator arms. Some concepts discussed in "A Generalized-Compliant-Motion Primitive" (NPO-18134).

  20. Robotics-assisted mass spectrometry assay platform enabled by open-source electronics.

    PubMed

    Chiu, Shih-Hao; Urban, Pawel L

    2015-02-15

    Mass spectrometry (MS) is an important analytical technique with numerous applications in clinical analysis, biochemistry, environmental analysis, geology and physics. Its success builds on the ability of MS to determine the molecular weights of analytes and elucidate their structures. However, sample handling prior to MS requires a great deal of attention and labor. In this work we aimed to automate sample processing for MS so that analyses could be conducted without close supervision by experienced analysts. The goal of this study was to develop a robotics- and information technology-oriented platform that could control the whole analysis process, including sample delivery, reaction-based assay, data acquisition, and interaction with the analyst. The proposed platform incorporates a robotic arm for handling sample vials delivered to the laboratory, and several auxiliary devices that facilitate and secure the analysis process. They include: a multi-relay board, infrared sensors, photo-interrupters, gyroscopes, force sensors, a fingerprint scanner, a barcode scanner, a touch-screen panel, and an internet interface. The control of all the building blocks is achieved through implementation of open-source electronics (Arduino), enabled by custom-written programs in the C language. The advantages of the proposed system include: low cost, simplicity, small size, as well as facile automation of sample delivery and processing without the intervention of the analyst. It is envisaged that this simple robotic system may be the forerunner of automated laboratories dedicated to mass spectrometric analysis of biological samples.
