Sample records for obstacle detection robotics

  1. A Compact Magnetic Field-Based Obstacle Detection and Avoidance System for Miniature Spherical Robots.

    PubMed

    Wu, Fang; Vibhute, Akash; Soh, Gim Song; Wood, Kristin L; Foong, Shaohui

    2017-05-28

    Due to their efficient locomotion and natural tolerance to hazardous environments, spherical robots have wide applications in security surveillance, exploration of unknown territory and emergency response. Numerous studies have been conducted on the driving mechanism, motion planning and trajectory tracking methods of spherical robots, yet very limited studies have been conducted regarding the obstacle avoidance capability of spherical robots. Most of the existing spherical robots rely on the "hit and run" technique, which has been argued to be a reasonable strategy because spherical robots have an inherent ability to recover from collisions. Without protruding components, they will not become stuck and can simply roll back after running into obstacles. However, for small scale spherical robots that contain sensitive surveillance sensors and cannot afford to utilize heavy protective shells, the absence of obstacle avoidance solutions would leave the robot at the mercy of potentially dangerous obstacles. In this paper, a compact magnetic field-based obstacle detection and avoidance system has been developed for miniature spherical robots. It utilizes a passive magnetic field so that the system is both compact and power efficient. The proposed system can detect not only the presence, but also the approaching direction of a ferromagnetic obstacle; therefore, an intelligent avoidance behavior can be generated by adapting the trajectory tracking method with the detection information. Design optimization is conducted to enhance the obstacle detection performance and detailed avoidance strategies are devised. Experimental results are also presented for validation purposes.

  2. A soft robot capable of 2D mobility and self-sensing for obstacle detection and avoidance

    NASA Astrophysics Data System (ADS)

    Qin, Lei; Tang, Yucheng; Gupta, Ujjaval; Zhu, Jian

    2018-04-01

    Soft robots have shown great potential for surveillance applications due to their interesting attributes including inherent flexibility, extreme adaptability, and excellent ability to move in confined spaces. High mobility combined with the sensing systems that can detect obstacles plays a significant role in performing surveillance tasks. Extensive studies have been conducted on movement mechanisms of traditional hard-bodied robots to increase their mobility. However, there are limited efforts in the literature to explore the mobility of soft robots. In addition, little attempt has been made to study the obstacle-detection capability of a soft mobile robot. In this paper, we develop a soft mobile robot capable of high mobility and self-sensing for obstacle detection and avoidance. This robot, consisting of a dielectric elastomer actuator as the robot body and four electroadhesion actuators as the robot feet, can generate 2D mobility, i.e. translations and turning in a 2D plane, by programming the actuation sequence of the robot body and feet. Furthermore, we develop a self-sensing method which models the robot body as a deformable capacitor. By measuring the real-time capacitance of the robot body, the robot can detect an obstacle when the peak capacitance drops suddenly. This sensing method utilizes the robot body itself instead of external sensors to achieve detection of obstacles, which greatly reduces the weight and complexity of the robot system. The 2D mobility and self-sensing capability ensure the success of obstacle detection and avoidance, which paves the way for the development of lightweight and intelligent soft mobile robots.
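
    The detection rule described above, flagging an obstacle when the peak capacitance of the robot body suddenly drops, can be illustrated with a short sketch. The Python snippet below is only an illustration of that idea under assumed numbers; the drop threshold, averaging window, and the synthetic peak-capacitance trace are not taken from the paper.

    def detect_obstacle_from_capacitance(cycle_peaks, drop_ratio=0.15, window=5):
        """Return indices of actuation cycles whose peak capacitance falls more
        than `drop_ratio` below the running average of the previous `window` peaks."""
        flagged = []
        for i in range(window, len(cycle_peaks)):
            baseline = sum(cycle_peaks[i - window:i]) / window
            if cycle_peaks[i] < (1.0 - drop_ratio) * baseline:
                flagged.append(i)
        return flagged

    if __name__ == "__main__":
        # Synthetic per-cycle peak capacitance (pF): steady cycles, then a sudden drop
        peaks = [102, 101, 103, 102, 101, 102, 103, 84, 83, 101]
        print(detect_obstacle_from_capacitance(peaks))  # -> [7, 8]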

  3. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-03-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
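
    The core geometric step named in this abstract, inverse perspective mapping, amounts to warping the camera image onto the ground plane with a homography. The sketch below shows that warp with OpenCV; the four image/ground point correspondences and the output resolution are placeholders standing in for the camera calibration on the actual robot, and the paper's MRF segmentation and distance estimation steps are not reproduced here.

    # Minimal inverse-perspective-mapping (IPM) sketch: floor pixels obey the
    # ground-plane homography, so after warping, obstacle pixels (which violate
    # the flat-floor assumption) appear distorted and can be segmented.
    import cv2
    import numpy as np

    def ipm_warp(image, img_pts, ground_pts, out_size=(400, 400)):
        """Warp `image` onto the ground plane (bird's-eye view).
        img_pts: 4 pixel coordinates of known floor points.
        ground_pts: the same 4 points in a metric top-down frame, scaled to output pixels."""
        H = cv2.getPerspectiveTransform(np.float32(img_pts), np.float32(ground_pts))
        return cv2.warpPerspective(image, H, out_size)

    if __name__ == "__main__":
        frame = np.zeros((480, 640, 3), dtype=np.uint8)                 # stand-in camera frame
        img_pts = [(220, 470), (420, 470), (360, 300), (280, 300)]      # floor trapezoid in image
        ground_pts = [(100, 399), (300, 399), (300, 0), (100, 0)]       # rectangle in top-down view
        top_down = ipm_warp(frame, img_pts, ground_pts)
        print(top_down.shape)  # (400, 400, 3)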

  4. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    PubMed Central

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540

  5. Fast obstacle detection based on multi-sensor information fusion

    NASA Astrophysics Data System (ADS)

    Lu, Linli; Ying, Jie

    2014-11-01

    Obstacle detection is one of the key problems in areas such as driving assistance and mobile robot navigation, and a single sensor alone cannot meet the practical demands. A method is proposed that fuses data from a camera and an ultrasonic sensor to access information about the obstacle in front of the robot in real time and to calculate the real size of the obstacle area using the triangle-similarity relation of image formation, which supports local path-planning decisions. In the image-analysis stage, the obstacle detection region is limited according to a complementary principle: the ultrasonic detection range is used as the detection region when the obstacle is relatively near the robot, while the travelling road area in front of the robot is used for relatively long-distance detection. The obstacle detection algorithm is adapted from a powerful background subtraction algorithm, ViBe (Visual Background Extractor). An obstacle-free region in front of the robot is extracted in the initial frame, and this region provides a reference sample set of gray-scale values for obstacle detection. Experiments detecting different obstacles at different distances give the accuracy of the obstacle detection and the percentage error between the calculated and actual size of the detected obstacle. Experimental results show that the detection scheme can effectively detect obstacles in front of the robot and provide the obstacle size with relatively high dimensional accuracy.
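
    The size calculation mentioned in this abstract follows the similar-triangles relation of the pinhole camera: the metric extent of an obstacle is its pixel extent scaled by the ultrasonic range and divided by the focal length in pixels. The sketch below assumes hypothetical values for the blob size, range, and focal length rather than the paper's setup.

    def obstacle_real_size(pixel_width, pixel_height, distance_m, focal_px):
        """Estimate the metric size of an obstacle region from its pixel extent,
        the ultrasonic range reading, and the focal length in pixels,
        using the similar-triangles relation W = w_px * Z / f."""
        width_m = pixel_width * distance_m / focal_px
        height_m = pixel_height * distance_m / focal_px
        return width_m, height_m

    if __name__ == "__main__":
        # Assumed values: a 120 x 80 px blob, 1.5 m away, focal length 600 px.
        print(obstacle_real_size(120, 80, 1.5, 600.0))  # -> (0.3, 0.2)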

  6. Integrating obstacle avoidance, global path planning, visual cue detection, and landmark triangulation in a mobile robot

    NASA Astrophysics Data System (ADS)

    Kortenkamp, David; Huber, Marcus J.; Congdon, Clare B.; Huffman, Scott B.; Bidlack, Clint R.; Cohen, Charles J.; Koss, Frank V.; Raschke, Ulrich; Weymouth, Terry E.

    1993-05-01

    This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m X 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.

  7. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Detecting obstacles and controlling robots to avoid them has long been a key research topic in robot control. In this paper, a scheme for visual information acquisition is proposed. By interpreting the visual information, it is transformed into an information source for path processing. When obstacles are encountered along the established route, the algorithm adjusts the trajectory in real time to achieve intelligent control of the mobile robot. Simulation results show that, by integrating visual sensing information, the obstacle information is fully obtained while the real-time performance and accuracy of the robot motion control are guaranteed.

  8. Automatic detection and classification of obstacles with applications in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.

    2016-04-01

    A hardware implementation of automatic detection and classification of objects that can represent obstacles for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. The method is divided into two parts: the first is an object detection step based on the distance from the objects to the camera and a BLOB analysis; the second is a classification step based on visual primitives and an SVM classifier. The proposed method runs on a GPU in order to reduce processing time. This is done with hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and Matlab on a PC running Windows 10.

  9. Detecting Negative Obstacles by Use of Radar

    NASA Technical Reports Server (NTRS)

    Mittskus, Anthony; Lux, James

    2006-01-01

    Robotic land vehicles would be equipped with small radar systems to detect negative obstacles, according to a proposal. The term "negative obstacles" denotes holes, ditches, and any other terrain features characterized by abrupt steep downslopes that could be hazardous for vehicles. Video cameras and other optically based obstacle-avoidance sensors now installed on some robotic vehicles cannot detect obstacles under adverse lighting conditions. Even under favorable lighting conditions, they cannot detect negative obstacles. A radar system according to the proposal would be of the frequency-modulation/continuous-wave (FM/CW) type. It would be installed on a vehicle, facing forward, possibly with a downward slant of the main lobe(s) of the radar beam(s) (see figure). It would utilize one or more wavelength(s) of the order of centimeters. Because such wavelengths are comparable to the characteristic dimensions of terrain features associated with negative hazards, a significant amount of diffraction would occur at such features. In effect, the diffraction would afford a limited ability to see corners and to see around corners. Hence, the system might utilize diffraction to detect corners associated with negative obstacles. At the time of reporting the information for this article, preliminary analyses of diffraction at simple negative obstacles had been performed, but an explicit description of how the system would utilize diffraction was not available.

  10. Biologically-inspired adaptive obstacle negotiation behavior of hexapod robots

    PubMed Central

    Goldschmidt, Dennis; Wörgötter, Florentin; Manoonpong, Poramate

    2014-01-01

    Neurobiological studies have shown that insects are able to adapt leg movements and posture for obstacle negotiation in changing environments. Moreover, the distance to an obstacle where an insect begins to climb is found to be a major parameter for successful obstacle negotiation. Inspired by these findings, we present an adaptive neural control mechanism for obstacle negotiation behavior in hexapod robots. It combines locomotion control, backbone joint control, local leg reflexes, and neural learning. While the first three components generate locomotion including walking and climbing, the neural learning mechanism allows the robot to adapt its behavior for obstacle negotiation with respect to changing conditions, e.g., variable obstacle heights and different walking gaits. By successfully learning the association of an early, predictive signal (conditioned stimulus, CS) and a late, reflex signal (unconditioned stimulus, UCS), both provided by ultrasonic sensors at the front of the robot, the robot can autonomously find an appropriate distance from an obstacle to initiate climbing. The adaptive neural control was developed and tested first on a physical robot simulation, and was then successfully transferred to a real hexapod robot, called AMOS II. The results show that the robot can efficiently negotiate obstacles with a height up to 85% of the robot's leg length in simulation and 75% in a real environment. PMID:24523694
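
    The learning idea in this abstract, strengthening the response to the early ultrasonic signal (CS) for as long as the late reflex signal (UCS) still has to fire, can be captured by a tiny correlation-based update. The sketch below is a toy illustration, not the paper's neural controller; the learning rate, threshold, and binary signals are assumptions.

    def simulate_learning(trials=20, eta=0.4):
        """Learn to trigger climbing from the predictive (CS) signal so that the
        late reflex (UCS) eventually becomes unnecessary."""
        w_cs = 0.0                  # weight of the predictive ultrasonic signal
        threshold = 1.0             # motor command needed to initiate climbing
        for t in range(trials):
            cs = 1.0                                          # obstacle seen at long range
            ucs = 1.0 if w_cs * cs < threshold else 0.0       # reflex fires only if CS response failed
            w_cs += eta * cs * ucs                            # correlation-based weight update
            print(f"trial {t:2d}: w_cs={w_cs:.2f}, reflex_used={bool(ucs)}")

    if __name__ == "__main__":
        simulate_learning(trials=5)   # the reflex stops firing after a few trials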

  11. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing techniques (DIP) have been widely used in various types of application recently. Classification and recognition of a specific object using vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of vision system to capture and process the images is very important for any intelligent system such as autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision based lawn mower robot. The works involve on the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given on the study on different types and sizes of obstacles, the development of vision based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with recognition rate of more 80%.

  12. Obstacle avoidance for redundant robots using configuration control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor); Colbaugh, Richard D. (Inventor); Glass, Kristin L. (Inventor)

    1992-01-01

    A redundant robot control scheme is provided for avoiding obstacles in a workspace during the motion of an end effector along a preselected trajectory by stopping motion of the critical point on the robot closest to the obstacle when the distance between them is reduced to a predetermined sphere of influence surrounding the obstacle. Algorithms are provided for conveniently determining the critical point and critical distance.
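
    A minimal sketch of the stopping rule in this patent abstract: sample points along the robot, find the critical point (the one closest to the obstacle), and flag a halt once the critical distance enters the obstacle's sphere of influence. The sampled points, obstacle position, and radius below are hypothetical.

    import numpy as np

    def critical_point(link_points, obstacle_center):
        """Return (index, distance) of the robot point closest to the obstacle."""
        d = np.linalg.norm(np.asarray(link_points) - np.asarray(obstacle_center), axis=1)
        i = int(np.argmin(d))
        return i, float(d[i])

    def inside_sphere_of_influence(distance, radius_of_influence):
        """True when the critical distance has shrunk to the predefined sphere."""
        return distance <= radius_of_influence

    if __name__ == "__main__":
        # Hypothetical sampled points along the manipulator and a spherical obstacle.
        links = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.0), (0.6, 0.2, 0.1), (0.9, 0.4, 0.1)]
        obstacle = (0.7, 0.25, 0.1)
        idx, dist = critical_point(links, obstacle)
        print(idx, round(dist, 3), inside_sphere_of_influence(dist, 0.15))  # 2 0.112 True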

  13. Tracked robot controllers for climbing obstacles autonomously

    NASA Astrophysics Data System (ADS)

    Vincent, Isabelle

    2009-05-01

    Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has had very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its tracks configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box and stair climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.

  14. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Han, Baoling

    2016-11-01

    The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance that integrates a binocular stereo vision sensor and a self-developed 3D Lidar with a modified ant colony optimization path-planning method to reconstruct the environment map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unsatisfactory when feature points are few and lighting is poor. Therefore, the system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles, and sensor information fusion is used to rebuild the environment map. The Lidar data and the visual data are first used separately for obstacle detection, yielding two estimates of the obstacle distribution, and the data are then fused to obtain a more complete and accurate distribution of obstacles in the scene. The paper then introduces the ant colony algorithm, analyzes the advantages and disadvantages of ant colony optimization and their underlying causes, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed and poor search results. The experiments process images and drive the motors in the Matlab and Visual Studio environments and establish a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a simulation platform on Linux.
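
    The path-planning component of this record rests on ant colony optimization. The compact sketch below shows the basic pheromone/evaporation mechanics on a small occupancy grid; it does not include the paper's improvements to convergence, and the grid, obstacle wall, and all ACO constants are illustrative assumptions.

    import random

    W, H = 8, 8
    OBSTACLES = {(3, y) for y in range(1, 7)}          # a wall with gaps at the top and bottom
    START, GOAL = (0, 0), (7, 7)
    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def neighbors(c):
        for dx, dy in MOVES:
            n = (c[0] + dx, c[1] + dy)
            if 0 <= n[0] < W and 0 <= n[1] < H and n not in OBSTACLES:
                yield n

    def run_aco(n_ants=30, n_iters=40, rho=0.3, alpha=1.0, beta=2.0):
        tau = {}                                        # pheromone per directed edge
        best = None
        for _ in range(n_iters):
            paths = []
            for _ in range(n_ants):
                path, seen = [START], {START}
                while path[-1] != GOAL and len(path) < W * H:
                    cur = path[-1]
                    cands = [n for n in neighbors(cur) if n not in seen]
                    if not cands:
                        break                            # dead end, discard this ant
                    weights = []
                    for n in cands:
                        heur = 1.0 / (abs(GOAL[0] - n[0]) + abs(GOAL[1] - n[1]) + 1)
                        weights.append((tau.get((cur, n), 1.0) ** alpha) * (heur ** beta))
                    nxt = random.choices(cands, weights=weights)[0]
                    path.append(nxt)
                    seen.add(nxt)
                if path[-1] == GOAL:
                    paths.append(path)
                    if best is None or len(path) < len(best):
                        best = path
            for e in tau:                                # evaporation
                tau[e] *= (1.0 - rho)
            for path in paths:                           # deposit: shorter paths deposit more
                for a, b in zip(path, path[1:]):
                    tau[(a, b)] = tau.get((a, b), 1.0) + 1.0 / len(path)
        return best

    if __name__ == "__main__":
        random.seed(0)
        best = run_aco()
        print("best path length (nodes):", None if best is None else len(best))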

  15. A switching formation strategy for obstacle avoidance of a multi-robot system based on robot priority model.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2015-05-01

    This paper describes a switching formation strategy for multi-robot systems with velocity constraints to avoid and cross obstacles. In the strategy, a leader robot plans a safe path using the geometric obstacle avoidance control method (GOACM). By calculating new desired distances and bearing angles with respect to the leader robot, the follower robots switch into a safe formation. To account for collision avoidance, a novel robot priority model, based on the desired distance and bearing angle between the leader and follower robots, is designed for the obstacle avoidance process. The adaptive tracking control algorithm guarantees that the trajectory and velocity tracking errors converge to zero. To demonstrate the validity of the proposed methods, simulation and experimental results show that the multi-robot system effectively forms and switches formation while avoiding obstacles without collisions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Obstacle avoidance handling and mixed integer predictive control for space robots

    NASA Astrophysics Data System (ADS)

    Zong, Lijun; Luo, Jianjun; Wang, Mingming; Yuan, Jianping

    2018-04-01

    This paper presents a novel obstacle avoidance constraint and a mixed integer predictive control (MIPC) method for space robots avoiding obstacles and satisfying physical limits while performing tasks. Firstly, a novel obstacle avoidance constraint for space robots, which assumes that the manipulator links and the obstacles can be represented by convex bodies, is proposed by limiting the relative velocity between the two closest points on the manipulator and the obstacle, respectively. Furthermore, logical variables are introduced into the obstacle avoidance constraint so that the constraint form changes automatically to satisfy different obstacle avoidance requirements in different distance intervals between the space robot and the obstacle. Afterwards, the obstacle avoidance constraint and other physical limits of the system, such as joint angle ranges and the amplitude bounds of joint velocities and joint torques, are described as inequality constraints of a quadratic programming (QP) problem by using the model predictive control (MPC) method. To guarantee the feasibility of the resulting multi-constraint QP problem, the constraints are treated as soft constraints and assigned priority levels based on propositional logic, so that constraints with lower priority are always violated first to recover the feasibility of the QP problem. Since logical variables have been introduced, the optimization problem including obstacle avoidance and system physical limits as prioritized inequality constraints is termed the MIPC method for space robots, and its computational complexity as well as possible strategies for reducing the computational load are analyzed. Simulations of the space robot unfolding its manipulator and tracking desired end-effector trajectories in the presence of obstacles and physical limits are presented to demonstrate the effectiveness of the proposed obstacle avoidance

  17. A Motion Planning Approach to Automatic Obstacle Avoidance during Concentric Tube Robot Teleoperation.

    PubMed

    Torres, Luis G; Kuntz, Alan; Gilbert, Hunter B; Swaney, Philip J; Hendrick, Richard J; Webster, Robert J; Alterovitz, Ron

    2015-05-01

    Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot's shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot's tip. However, the robot's unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot's shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles.

  18. A Neural Network Approach for Building An Obstacle Detection Model by Fusion of Proximity Sensors Data

    PubMed Central

    Peralta, Emmanuel; Vargas, Héctor; Hermosilla, Gabriel

    2018-01-01

    Proximity sensors are broadly used in mobile robots for obstacle detection. The traditional calibration process of this kind of sensor could be a time-consuming task because it is usually done by identification in a manual and repetitive way. The resulting obstacle detection models are usually nonlinear functions that can be different for each proximity sensor attached to the robot. In addition, the model is highly dependent on the type of sensor (e.g., ultrasonic or infrared), on changes in light intensity, and on the properties of the obstacle such as shape, colour, and surface texture, among others. That is why in some situations it could be useful to gather all the measurements provided by different kinds of sensors in order to build a unique model that estimates the distances to the obstacles around the robot. This paper presents a novel approach to obtain an obstacle detection model based on the fusion of sensor data and automatic calibration by using artificial neural networks. PMID:29495338
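
    The fusion idea in this abstract, training one network to map heterogeneous proximity readings to distance, can be sketched with a small regressor. The sensor response curves below are synthetic (an IR-like inverse response and an ultrasonic-like linear response) and the network size is an assumption; in the paper the training data would come from the robot's own sensors.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    true_dist = rng.uniform(0.1, 2.0, size=2000)                                   # metres
    ir_reading = 1.0 / (true_dist + 0.05) + rng.normal(0, 0.05, true_dist.shape)   # nonlinear IR-like response
    us_reading = true_dist + rng.normal(0, 0.03, true_dist.shape)                  # ultrasonic-like response
    X = np.column_stack([ir_reading, us_reading])

    # One network fuses both readings into a single distance estimate.
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    model.fit(X[:1500], true_dist[:1500])
    pred = model.predict(X[1500:])
    print("mean abs error (m):", float(np.mean(np.abs(pred - true_dist[1500:]))))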

  19. Using Thermal Radiation in Detection of Negative Obstacles

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Matthies, Larry H.

    2009-01-01

    A method of automated detection of negative obstacles (potholes, ditches, and the like) ahead of ground vehicles at night involves processing of imagery from thermal-infrared cameras aimed at the terrain ahead of the vehicles. The method is being developed as part of an overall obstacle-avoidance scheme for autonomous and semi-autonomous offroad robotic vehicles. The method could also be applied to help human drivers of cars and trucks avoid negative obstacles -- a development that may entail only modest additional cost inasmuch as some commercially available passenger cars are already equipped with infrared cameras as aids for nighttime operation.

  20. A Motion Planning Approach to Automatic Obstacle Avoidance during Concentric Tube Robot Teleoperation

    PubMed Central

    Torres, Luis G.; Kuntz, Alan; Gilbert, Hunter B.; Swaney, Philip J.; Hendrick, Richard J.; Webster, Robert J.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot’s shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot’s tip. However, the robot’s unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot’s shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles. PMID:26413381

  1. Obstacle-avoiding robot with IR and PIR motion sensors

    NASA Astrophysics Data System (ADS)

    Ismail, R.; Omar, Z.; Suaibun, S.

    2016-10-01

    An obstacle-avoiding robot was designed, constructed and programmed which may be potentially used for educational and research purposes. The developed robot will move in a particular direction once the infrared (IR) and passive infrared (PIR) sensors sense a signal, while avoiding the obstacles in its path. The robot can also perform desired tasks in unstructured environments without continuous human guidance. The hardware was integrated in one application board as an embedded system design. The software was developed using C++ and compiled by Arduino IDE 1.6.5. The main objective of this project is to provide simple guidelines to the polytechnic students and beginners who are interested in this type of research. It is hoped that this robot could benefit students who wish to carry out research on IR and PIR sensors.

  2. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    NASA Astrophysics Data System (ADS)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot realizes inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws. The inspection robot is designed to realize the functions of observing, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype designed with careful consideration of mobility was built to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute navigation and inspection tasks.

  3. Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing

    PubMed Central

    Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori

    2017-01-01

    Large scale operational areas often require multiple service robots for coverage and task parallelism. In such scenarios, each robot keeps its individual map of the environment and serves specific areas of the map at different times. We propose a knowledge sharing mechanism for multiple robots in which one robot can inform other robots about the changes in map, like path blockage, or new static obstacles, encountered at specific areas of the map. This symbiotic information sharing allows the robots to update remote areas of the map without having to explicitly navigate those areas, and plan efficient paths. A node representation of paths is presented for seamless sharing of blocked path information. The transience of obstacles is modeled to track obstacles which might have been removed. A lazy information update scheme is presented in which only relevant information affecting the current task is updated for efficiency. The advantages of the proposed method for path planning are discussed against traditional method with experimental results in both simulation and real environments. PMID:28678193

  4. Obstacle Avoidance On Roadways Using Range Data

    NASA Astrophysics Data System (ADS)

    Dunlay, R. Terry; Morgenthaler, David G.

    1987-02-01

    This report describes range data based obstacle avoidance techniques developed for use on an autonomous road-following robot vehicle. The purpose of these techniques is to detect and locate obstacles present in a road environment for navigation of a robot vehicle equipped with an active laser-based range sensor. Techniques are presented for obstacle detection, obstacle location, and coordinate transformations needed in the construction of Scene Models (symbolic structures representing the 3-D obstacle boundaries used by the vehicle's Navigator for path planning). These techniques have been successfully tested on an outdoor robotic vehicle, the Autonomous Land Vehicle (ALV), at speeds up to 3.5 km/hour.

  5. Method for six-legged robot stepping on obstacles by indirect force estimation

    NASA Astrophysics Data System (ADS)

    Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun

    2016-07-01

    Adaptive gaits for legged robots often require force sensors installed on the foot tips; however, impact, temperature or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots using leg structures based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct kinematics model and the inverse kinematics model are established. The force Jacobian matrix is derived based on the kinematics model, and thus the indirect force estimation model is established. Then, the relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is described. Furthermore, an adaptive tripod static gait is designed. The robot alters its leg trajectory to step onto obstacles by using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. An experiment is carried out to validate the indirect force estimation model; the adaptive gait is tested in another experiment. Experimental results show that the robot can successfully step onto a 0.2 m-high obstacle. This paper thus proposes a way for a six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment of the robot's foot tips.
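
    The indirect force estimation in this abstract rests on the static relation between joint torques and the foot-tip force through the force Jacobian, tau = J^T F, so the force is recovered by inverting J^T. The sketch below checks that relation with a placeholder 3x3 leg Jacobian and torque values, not parameters of Octopus III.

    import numpy as np

    def estimate_foot_force(J, joint_torques):
        """Recover the external force at the foot tip from measured joint torques
        using the static relation tau = J^T F  =>  F = pinv(J^T) @ tau."""
        return np.linalg.pinv(J.T) @ np.asarray(joint_torques)

    if __name__ == "__main__":
        # Placeholder leg Jacobian (m) and a force to check round-trip consistency.
        J = np.array([[0.30, 0.05, 0.00],
                      [0.00, 0.25, 0.10],
                      [0.02, 0.00, 0.20]])
        F_true = np.array([10.0, -5.0, 40.0])
        tau = J.T @ F_true                     # torques such a foot-tip force would produce
        print(estimate_foot_force(J, tau))     # -> approx [10, -5, 40]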

  6. A bio-inspired kinematic controller for obstacle avoidance during reaching tasks with real robots.

    PubMed

    Srinivasa, Narayan; Bhattacharyya, Rajan; Sundareswara, Rashmi; Lee, Craig; Grossberg, Stephen

    2012-11-01

    This paper describes a redundant robot arm that is capable of learning to reach for targets in space in a self-organized fashion while avoiding obstacles. Self-generated movement commands that activate correlated visual, spatial and motor information are used to learn forward and inverse kinematic control models while moving in obstacle-free space using the Direction-to-Rotation Transform (DIRECT). Unlike prior DIRECT models, the learning process in this work was realized using an online Fuzzy ARTMAP learning algorithm. The DIRECT-based kinematic controller is fault tolerant and can handle a wide range of perturbations such as joint locking and the use of tools despite not having experienced them during learning. The DIRECT model was extended based on a novel reactive obstacle avoidance direction (DIRECT-ROAD) model to enable redundant robots to avoid obstacles in environments with simple obstacle configurations. However, certain configurations of obstacles in the environment prevented the robot from reaching the target with purely reactive obstacle avoidance. To address this complexity, a self-organized process of mental rehearsals of movements was modeled, inspired by human and animal experiments on reaching, to generate plans for movement execution using DIRECT-ROAD in complex environments. These mental rehearsals or plans are self-generated by using the Fuzzy ARTMAP algorithm to retrieve multiple solutions for reaching each target while accounting for all the obstacles in its environment. The key aspects of the proposed novel controller were illustrated first using simple examples. Experiments were then performed on real robot platforms to demonstrate successful obstacle avoidance during reaching tasks in real-world environments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Method for surmounting an obstacle by a robot vehicle

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H. (Inventor); Ohm, Timothy R. (Inventor)

    1994-01-01

    Surmounting obstacles in the path of a robot vehicle is accomplished by rotating the wheel forks of the vehicle about their transverse axes with respect to the vehicle body so as to shift most of the vehicle weight onto the rear wheels, and then driving the vehicle forward so as to drive the now lightly-loaded front wheels (only) over the obstacle. Then, after the front wheels have either surmounted or completely passed the obstacle (depending upon the length of the obstacle), the forks are again rotated about their transverse axes so as to shift most of the vehicle weight onto the front wheels. Then the vehicle is again driven forward so as to drive the now lightly-loaded rear wheels over the obstacle. Once the obstacle has been completely cleared and the vehicle is again on relatively level terrain, the forks are again rotated so as to uniformly distribute the vehicle weight between the front and rear wheels.

  8. Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network

    PubMed Central

    2015-01-01

    For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, for rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using three ultrasonic distance sensors based on a backpropagation neural network, and a camera for face recognition. A 2.4 GHz video transmitter is used by the operator/user to direct the robot to the desired area. Results show the effectiveness of our method and we evaluate the performance of the system. PMID:26089863

  9. Robust obstacle detection for unmanned surface vehicles

    NASA Astrophysics Data System (ADS)

    Qin, Yueming; Zhang, Xiuzhi

    2018-03-01

    Obstacle detection is of essential importance for Unmanned Surface Vehicles (USV). Although some obstacles (e.g., ships, islands) can be detected by radar, there are many other obstacles (e.g., floating pieces of wood, swimmers) which are difficult to detect via radar because they have a low radar cross-section. Therefore, detecting obstacles from images taken onboard is an effective supplement. In this paper, a robust vision-based obstacle detection method for USVs is developed. The proposed method employs the monocular image sequence captured by the camera on the USV and detects obstacles on the sea surface from the image sequence. The experiment results show that the proposed scheme is efficient to fulfill the obstacle detection task.

  10. Vision Based Obstacle Detection in Uav Imaging

    NASA Astrophysics Data System (ADS)

    Badrloo, S.; Varshosaz, M.

    2017-08-01

    Detecting and preventing collisions with obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the fact that an obstacle enlarges in the image as it is approached. A recent work in this field concentrated on matching SIFT points, along with the SIFT size-ratio factor and the area ratio of convex hulls in two consecutive frames, to detect obstacles. That method is not able to distinguish between near and far obstacles or obstacles in complex environments, and is sensitive to wrongly matched points. In order to solve the above-mentioned problems, this research calculates the dist-ratio of matched points, and each matched point is then investigated to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.
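
    The dist-ratio quantity used in this record can be illustrated simply: for matched feature points between two consecutive frames, the ratio of pairwise point distances grows above 1 as the camera approaches an obstacle, and grows fastest for the nearest obstacle. The points below are synthetic; the paper's SIFT matching and its far/close discrimination rule are not reproduced here.

    import numpy as np

    def dist_ratio(prev_pts, curr_pts):
        """Mean ratio of pairwise point distances (current / previous) for a set of
        matched feature points; values > 1 indicate the region is expanding, i.e.
        the camera is approaching it."""
        prev_pts, curr_pts = np.asarray(prev_pts, float), np.asarray(curr_pts, float)
        ratios = []
        n = len(prev_pts)
        for i in range(n):
            for j in range(i + 1, n):
                d_prev = np.linalg.norm(prev_pts[i] - prev_pts[j])
                d_curr = np.linalg.norm(curr_pts[i] - curr_pts[j])
                if d_prev > 1e-6:
                    ratios.append(d_curr / d_prev)
        return float(np.mean(ratios))

    if __name__ == "__main__":
        # Matched points on an obstacle that grows about 20% between frames (synthetic).
        prev = [(100, 100), (140, 100), (120, 150)]
        curr = [(98, 98), (146, 98), (122, 158)]
        print(round(dist_ratio(prev, curr), 2))   # roughly 1.2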

  11. Training toddlers seated on mobile robots to drive indoors amidst obstacles.

    PubMed

    Chen, Xi; Ragonesi, Christina; Galloway, James C; Agrawal, Sunil K

    2011-06-01

    Mobility is a causal factor in development. Children with mobility impairments may rely upon power mobility for independence and thus require advanced driving skills to function independently. Our previous studies show that while infants can learn to drive directly to a goal using conventional joysticks in several months of training, they are unable in this timeframe to acquire the advanced skill to avoid obstacles while driving. Without adequate driving training, children are unable to explore the environment safely, the consequences of which may in turn increase their risk for developmental delay. The goal of this research therefore is to train children seated on mobile robots to purposefully and safely drive indoors. In this paper, we present results where ten typically-developing toddlers are trained to drive a robot within an obstacle course. We also report a case study with a toddler with spina-bifida who cannot independently walk. Using algorithms based on artificial potential fields to avoid obstacles, we create force field on the joystick that trains the children to navigate while avoiding obstacles. In this "assist-as-needed" approach, if the child steers the joystick outside a force tunnel centered on the desired direction, the driver experiences a bias force on the hand. Our results suggest that the use of a force-feedback joystick may yield faster learning than the use of a conventional joystick.

  12. Application of Optical Flow Sensors for Dead Reckoning, Heading Reference, Obstacle Detection, and Obstacle Avoidance

    DTIC Science & Technology

    Nejah, Tarek M.

    2015-09-01

    A novel approach for dead reckoning, heading reference, obstacle detection, and obstacle avoidance

  13. Negative obstacle detection by thermal signature

    NASA Technical Reports Server (NTRS)

    Matthies, Larry; Rankin, A.

    2003-01-01

    Detecting negative obstacles (ditches, potholes, and other depressions) is one of the most difficult problems in perception for autonomous, off-road navigation. Past work has largely relied on range imagery, because it is based on the geometry of the obstacle and is largely insensitive to illumination variables, and because there have not been other reliable alternatives. However, the visible aspect of negative obstacles shrinks rapidly with range, making them impossible to detect in time to avoid them at high speed. To relieve this problem, we show that the interiors of negative obstacles generally remain warmer than the surrounding terrain throughout the night, making thermal signature a stable property for night-time negative obstacle detection. Experimental results to date have achieved detection distances 45% greater by using thermal signature than by using range data alone. Thermal signature is the first known observable with potential to reveal a deep negative obstacle without actually seeing far into it. Modeling solar illumination has potential to extend the usefulness of thermal signature through daylight hours.

  14. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    PubMed

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In a future intelligent factory, a robotic manipulator must work efficiently and safely in a human-robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue that must be resolved first in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm, based on random sampling, has been widely applied in dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, strong expansion capability, and fast exploration speed compared with other planning methods. However, the existing RRT algorithm has limitations in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method biases the tree extension toward a directional target node and can dramatically increase the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum-curvature constraint is presented to generate a smooth, continuously curved executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation and a Robot Operating System (ROS) dynamic simulation environment, as well as a real autonomous obstacle avoidance experiment in a dynamic unstructured environment for a robotic manipulator. The proposed method not only has great practical engineering significance for a robotic manipulator's obstacle avoidance in an intelligent factory, but also provides theoretical reference value for the path planning of other types of robots.
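
    For reference, the sampling/extension loop that S-RRT builds on is the plain RRT shown below: sample a configuration (with some goal bias), extend the nearest tree node by a fixed step, and keep the new node if it is collision-free. The 2D workspace, circular obstacle, step size, and goal bias are illustrative assumptions, and the paper's directional target extension and curvature-constrained smoothing are not included.

    import math, random

    STEP, GOAL_TOL = 0.5, 0.5
    OBSTACLE = ((4.0, 4.0), 1.5)      # circular obstacle: centre, radius

    def collision_free(p):
        (cx, cy), r = OBSTACLE
        return math.hypot(p[0] - cx, p[1] - cy) > r

    def rrt(start, goal, bounds=10.0, max_iters=5000, goal_bias=0.1):
        nodes, parent = [start], {0: None}
        for _ in range(max_iters):
            sample = goal if random.random() < goal_bias else (
                random.uniform(0, bounds), random.uniform(0, bounds))
            i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
            near = nodes[i_near]
            d = math.dist(near, sample)
            if d == 0:
                continue
            new = (near[0] + STEP * (sample[0] - near[0]) / d,
                   near[1] + STEP * (sample[1] - near[1]) / d)
            if not collision_free(new):
                continue
            nodes.append(new)
            parent[len(nodes) - 1] = i_near
            if math.dist(new, goal) < GOAL_TOL:       # goal reached: backtrack the path
                path, i = [], len(nodes) - 1
                while i is not None:
                    path.append(nodes[i]); i = parent[i]
                return path[::-1]
        return None

    if __name__ == "__main__":
        random.seed(1)
        path = rrt((0.5, 0.5), (9.0, 9.0))
        print("path length (nodes):", None if path is None else len(path))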

  15. Inertial navigation sensor integrated obstacle detection system

    NASA Technical Reports Server (NTRS)

    Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)

    1992-01-01

    A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor. The active detection typically utilizes a laser. The passive sensor suite includes binocular stereo, motion stereo and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery, for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.

  16. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System

    PubMed Central

    Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced the mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware. PMID:28747883

  17. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System.

    PubMed

    Milde, Moritz B; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced the mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware.

  18. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes the tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
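
    Once a feature pair has been matched under the epipolar constraint on a rectified stereo pair, its depth follows from the disparity as Z = f * B / d. The sketch below uses assumed values for the focal length, baseline, and disparity rather than the parameters of the vehicle rig described in this record.

    def stereo_depth(x_left, x_right, focal_px, baseline_m):
        """Depth of a matched point pair from horizontal disparity on a rectified
        stereo pair: Z = f * B / d, with d = x_left - x_right in pixels."""
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("non-positive disparity; point at or beyond infinity")
        return focal_px * baseline_m / disparity

    if __name__ == "__main__":
        # Assumed rig: 800 px focal length, 0.30 m baseline, 12 px disparity.
        print(stereo_depth(412, 400, 800.0, 0.30))   # -> 20.0 m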

  19. SDRE controller for motion design of cable-suspended robot with uncertainties and moving obstacles

    NASA Astrophysics Data System (ADS)

    Behboodi, Ahad; Salehi, Seyedmohammad

    2017-10-01

    In this paper an optimal control approach for nonlinear dynamical systems was proposed based on the State Dependent Riccati Equation (SDRE), and its robustness against uncertainties is shown by simulation results. The proposed method was applied to a spatial six-cable suspended robot, which was designed to carry loads or perform different tasks in huge workspaces. Motion planning for cable-suspended robots in such a big workspace is subject to uncertainties and obstacles. First, we emphasized the ability of SDRE to construct a systematic basis for the efficient design of controllers for a wide variety of nonlinear dynamical systems. Then we showed how this systematic design improved the robustness of the system and facilitated the integration of motion planning techniques with the controller. In particular, an obstacle avoidance technique based on artificial potential fields (APF) can be easily combined with the SDRE controller with efficient performance. Due to the difficulty of solving the SDRE exactly, an approximation method based on power series expansion was used. The efficiency and robustness of the SDRE controller were illustrated on a six-cable suspended robot with appropriate simulations.
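
    The controller structure this record relies on can be sketched on a much simpler plant: factor the nonlinear dynamics as x_dot = A(x)x + B(x)u, solve the Riccati equation pointwise for the current state, and apply u = -R^{-1} B^T P(x) x. The damped pendulum below is only a stand-in for the cable-suspended robot, and the weights and parameters are assumptions.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    g, l, c = 9.81, 1.0, 0.5
    Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])

    def sdc_matrices(x):
        """State-dependent factorization x_dot = A(x) x + B u for a damped pendulum."""
        sinc = np.sinc(x[0] / np.pi)            # sin(x1)/x1, well defined at x1 = 0
        A = np.array([[0.0, 1.0],
                      [-(g / l) * sinc, -c]])
        B = np.array([[0.0], [1.0]])
        return A, B

    def sdre_control(x):
        A, B = sdc_matrices(x)
        P = solve_continuous_are(A, B, Q, R)     # pointwise Riccati solution
        K = np.linalg.solve(R, B.T @ P)          # 1x2 gain, u = -K x
        return -(K @ x)[0]

    if __name__ == "__main__":
        x, dt = np.array([1.0, 0.0]), 0.01       # start 1 rad from the equilibrium
        for _ in range(500):                     # 5 s of Euler integration
            u = sdre_control(x)
            x1, x2 = x
            x = x + dt * np.array([x2, -(g / l) * np.sin(x1) - c * x2 + u])
        print("final state:", np.round(x, 4))    # regulated close to the origin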

  20. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  1. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    NASA Astrophysics Data System (ADS)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment of a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically "streamlined" rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment of a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  2. Passive detection of subpixel obstacles for flight safety

    NASA Astrophysics Data System (ADS)

    Nixon, Matthew D.; Loveland, Rohan C.

    2001-12-01

    Military aircraft fly below 100 ft. above ground level in support of their missions. These aircraft include fixed and rotary wing and may be manned or unmanned. Flying at these low altitudes presents a safety hazard to the aircrew and aircraft, due to the occurrence of obstacles within the aircraft's flight path. The pilot must rely on eyesight and, in some cases, infrared sensors to see obstacles. Many conditions can degrade visibility, creating situations in which obstacles are essentially invisible, and therefore hazardous, even to an alerted aircrew. Numerous catastrophic accidents have occurred in which aircraft have collided with undetected obstacles. Accidents of this type continue to be a problem for low flying military and commercial aircraft. Unmanned Aerial Vehicles (UAVs) have the same problem, whether operating autonomously or under control of a ground operator. Boeing-SVS has designed a passive, small, low-cost (under $100k), gimbaled, infrared imaging-based system with advanced obstacle detection algorithms. Obstacles are detected in the infrared band, and linear features are analyzed by innovative cellular-automata-based software. These algorithms perform detection and location of sub-pixel linear features. The detection of the obstacles is performed on a frame-by-frame basis, in real time. Processed images are presented to the aircrew on their display as color-enhanced features. The system has been designed such that the detected obstacles are displayed to the aircrew in sufficient time to react and maneuver the aircraft to safety. A patent for this system is on file with the US patent office, and all material herein should be treated accordingly.

  3. Visually guided gait modifications for stepping over an obstacle: a bio-inspired approach.

    PubMed

    Silva, Pedro; Matos, Vitor; Santos, Cristina P

    2014-02-01

    There is an increasing interest in conceiving robotic systems that are able to move and act in unstructured, not predefined environments, for which autonomy and adaptability are crucial features. In nature, animals are autonomous biological systems, which often serve as bio-inspiration models, not only for their physical and mechanical properties, but also for their control structures that enable adaptability and autonomy, for which learning is (at least) partially responsible. This work proposes a system which seeks to enable a quadruped robot to learn online to detect and avoid stumbling on an obstacle in its path. The detection relies on a forward internal model that estimates the robot's perceptive information by exploiting the repetitive nature of locomotion. The system adapts the locomotion in order to place the robot optimally before attempting to step over the obstacle, avoiding any stumbling. Locomotion adaptation is achieved by changing control parameters of a central pattern generator (CPG)-based locomotion controller. The mechanism learns the necessary alterations to the stride length in order to adapt the locomotion by changing the required CPG parameter. Both learning tasks occur online and together define a sensorimotor map, which enables the robot to learn to step over the obstacle in its path. Simulation results show the feasibility of the proposed approach.

  4. Obstacle avoidance and concealed target detection using the Army Research Lab ultra-wideband synchronous impulse reconstruction (UWB SIRE) forward imaging radar

    NASA Astrophysics Data System (ADS)

    Nguyen, Lam; Wong, David; Ressler, Marc; Koenig, Francois; Stanton, Brian; Smith, Gregory; Sichina, Jeffrey; Kappra, Karl

    2007-04-01

    The U.S. Army Research Laboratory (ARL), as part of a mission and customer funded exploratory program, has developed a new low-frequency, ultra-wideband (UWB) synthetic aperture radar (SAR) for forward imaging to support the Army's vision of an autonomous navigation system for robotic ground vehicles. These unmanned vehicles, equipped with an array of imaging sensors, will be tasked to help detect man-made obstacles such as concealed targets, enemy minefields, and booby traps, as well as other natural obstacles such as ditches, and bodies of water. The ability of UWB radar technology to help detect concealed objects has been documented in the past and could provide an important obstacle avoidance capability for autonomous navigation systems, which would improve the speed and maneuverability of these vehicles and consequently increase the survivability of the U. S. forces on the battlefield. One of the primary features of the radar is the ability to collect and process data at combat pace in an affordable, compact, and lightweight package. To achieve this, the radar is based on the synchronous impulse reconstruction (SIRE) technique where several relatively slow and inexpensive analog-to-digital (A/D) converters are used to sample the wide bandwidth of the radar signals. We conducted an experiment this winter at Aberdeen Proving Ground (APG) to support the phenomenological studies of the backscatter from positive and negative obstacles for autonomous robotic vehicle navigation, as well as the detection of concealed targets of interest to the Army. In this paper, we briefly describe the UWB SIRE radar and the test setup in the experiment. We will also describe the signal processing and the forward imaging techniques used in the experiment. Finally, we will present imagery of man-made obstacles such as barriers, concertina wires, and mines.

  5. Cooperative Environment Scans Based on a Multi-Robot System

    PubMed Central

    Kwon, Ji-Wook

    2015-01-01

    This paper proposes a cooperative environment scan system (CESS) using multiple robots, where each robot has low-cost range finders and low processing power. To organize and maintain the CESS, a base robot monitors the positions of the child robots, controls them, and builds a map of the unknown environment, while the child robots with low performance range finders provide obstacle information. Even though each child robot provides approximated and limited information of the obstacles, CESS replaces the single LRF, which has a high cost, because much of the information is acquired and accumulated by a number of the child robots. Moreover, the proposed CESS extends the measurement boundaries and detects obstacles hidden behind others. To show the performance of the proposed system and compare this with the numerical models of the commercialized 2D and 3D laser scanners, simulation results are included. PMID:25789491

  6. Robotic guarded motion system and method

    DOEpatents

    Bruemmer, David J.

    2010-02-23

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes instructions for repeating, on each iteration through an event timing loop, the acts of defining an event horizon, detecting a range to obstacles around the robot, and testing for an event horizon intrusion. Defining the event horizon includes determining a distance from the robot that is proportional to a current velocity of the robot and testing for the event horizon intrusion includes determining if any range to the obstacles is within the event horizon. Finally, on each iteration through the event timing loop, the method includes reducing the current velocity of the robot in proportion to a loop period of the event timing loop if the event horizon intrusion occurs.
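    The guarded-motion loop described above lends itself to a compact sketch. The snippet below is a hedged Python illustration of one pass through such an event-timing loop; the proportionality gains, ranges and loop period are invented for the example and are not taken from the patent.

```python
def guarded_motion_step(velocity, ranges, loop_period,
                        horizon_gain=1.5, decel_gain=2.0, v_min=0.0):
    """One pass through the event-timing loop: define the event horizon,
    test for an intrusion, and slow down if one occurs."""
    event_horizon = horizon_gain * velocity          # proportional to current speed
    intrusion = any(r < event_horizon for r in ranges)
    if intrusion:
        # reduce speed in proportion to the loop period
        velocity = max(v_min, velocity - decel_gain * loop_period * velocity)
    return velocity

v = 1.0  # m/s
for ranges in ([3.0, 2.5], [1.2, 2.0], [0.8, 1.5]):  # simulated range scans
    v = guarded_motion_step(v, ranges, loop_period=0.1)
    print(round(v, 3))
```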

  7. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene irrelevant regions that do not affect the robot's movement. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Then, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  8. Obstacle detection and avoiding of quadcopter

    NASA Astrophysics Data System (ADS)

    Wang, Dizhong; Lin, Jiajian

    2017-10-01

    In recent years, quadcopter flight control technology has advanced rapidly and found wide application across a variety of industries. However, as autonomous flight develops, stable and secure flight remains a prominent problem. By comparing the characteristics of ultrasonic ranging, laser Time-of-Flight (ToF) ranging and vision-based measurement, along with their related sensors, obstacle detection and identification sensors can be selected and installed to effectively enhance flight safety, which is essential for avoiding dangers in the surroundings. The major sensors currently applied to object perception are distance-measuring instruments based on non-contact detection technology. After reviewing the general principles of flight and obstacle avoidance, this paper first establishes an aerodynamic model of the quadcopter and its means of object detection. On this basis, the article describes and analyzes research on obstacle avoidance technology and its current applications, and offers an outlook on its development after analyzing the main outstanding problems concerning accurate obstacle avoidance.

  9. Radial polar histogram: obstacle avoidance and path planning for robotic cognition and motion control

    NASA Astrophysics Data System (ADS)

    Wang, Po-Jen; Keyawa, Nicholas R.; Euler, Craig

    2012-01-01

    In order to achieve highly accurate motion control and path planning for a mobile robot, an obstacle avoidance algorithm that provides a desired instantaneous turning radius and velocity was generated. This type of obstacle avoidance algorithm, which has been implemented in California State University Northridge's Intelligent Ground Vehicle (IGV), is known as the Radial Polar Histogram (RPH). The RPH algorithm utilizes raw data in the form of a polar histogram that is read from a Laser Range Finder (LRF) and a camera. A desired open block is determined from the raw data utilizing a navigational heading and an elliptical approximation. The left-most and right-most radii are determined from the calculated edges of the open block and provide the range of possible radial paths the IGV can travel through. In addition, the calculated obstacle edge positions allow the IGV to recognize complex obstacle arrangements and to slow down accordingly. A radial path optimization function calculates the best radial path between the left-most and right-most radii, which is sent to motion control for speed determination. Overall, the RPH algorithm allows the IGV to autonomously travel at average speeds of 3 mph while avoiding all obstacles, with a processing time of approximately 10 ms.
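    A rough Python sketch of the open-sector selection step that an RPH-style planner performs on a polar range scan is shown below; the clearance threshold, the simulated scan and the sector-centre heuristic are assumptions for illustration, not the IGV's actual implementation.

```python
import numpy as np

def open_sectors(ranges, clearance=2.0):
    """Group consecutive beams whose range exceeds the clearance threshold."""
    free = ranges > clearance
    sectors, start = [], None
    for i, f in enumerate(free):
        if f and start is None:
            start = i
        elif not f and start is not None:
            sectors.append((start, i - 1))
            start = None
    if start is not None:
        sectors.append((start, len(free) - 1))
    return sectors

def pick_heading(ranges, angles, desired_heading, clearance=2.0):
    """Choose the open sector whose centre is closest to the desired heading."""
    sectors = open_sectors(ranges, clearance)
    if not sectors:
        return None   # no free radial path: slow down or stop
    centres = [(angles[a] + angles[b]) / 2.0 for a, b in sectors]
    return min(centres, key=lambda c: abs(c - desired_heading))

angles = np.linspace(-np.pi / 2, np.pi / 2, 181)   # simulated 180-degree scan
ranges = np.full_like(angles, 5.0)
ranges[80:100] = 1.0                               # obstacle straight ahead
print(pick_heading(ranges, angles, desired_heading=0.0))
```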

  10. Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors

    NASA Technical Reports Server (NTRS)

    Matthies, Larry; Grandjean, Pierrick

    1993-01-01

    Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.

  11. On autonomous terrain model acquisition by a mobile robot

    NASA Technical Reports Server (NTRS)

    Rao, N. S. V.; Iyengar, S. S.; Weisbin, C. R.

    1987-01-01

    The following problem is considered: A point robot is placed in a terrain populated by an unknown number of polyhedral obstacles of varied sizes and locations in two/three dimensions. The robot is equipped with a sensor capable of detecting all the obstacle vertices and edges that are visible from the present location of the robot. The robot is required to autonomously navigate and build the complete terrain model using the sensor information. It is established that the number of scanning operations needed for complete terrain model acquisition by any algorithm based on a scan-from-vertices strategy is $\sum_{i=1}^{n} N(O_i) - n$ and $\sum_{i=1}^{n} N(O_i) - 2n$ in two- and three-dimensional terrains, respectively, where $O = \{O_1, O_2, \ldots, O_n\}$ is the set of obstacles in the terrain and $N(O_i)$ is the number of vertices of obstacle $O_i$.

  12. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  13. FieldSAFE: Dataset for Obstacle Detection in Agriculture.

    PubMed

    Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm

    2017-11-09

    In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.

  14. FieldSAFE: Dataset for Obstacle Detection in Agriculture

    PubMed Central

    Christiansen, Peter; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik

    2017-01-01

    In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates. PMID:29120383

  15. Autonomous caregiver following robotic wheelchair

    NASA Astrophysics Data System (ADS)

    Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary

    2011-12-01

    In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore, we have to consider not only autonomous functions and user interfaces but also how to reduce the caregivers' load and support their activities, including communication. From this point of view, we have proposed a robotic wheelchair that moves side by side with a caregiver, based on MATLAB processing. In this project, the robotic wheelchair follows a caregiver using a microcontroller, an ultrasonic sensor, a keypad and motor drivers. Images are captured by a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image processing techniques, converted into voltage levels through a MAX 232 level converter and sent serially to the microcontroller unit, while an ultrasonic sensor detects obstacles in front of the robot. The robot has a mode selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to find obstacles, and in manual mode the keypad is used to operate the wheelchair. The microcontroller unit runs predefined C code, according to which the connected robot is controlled. The robot's motors are activated through motor drivers, which switch the motors on and off according to the commands from the microcontroller unit.

  16. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and recognize a visually handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion, as well as to detect the center of the corridor, is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms effectively enable the robot to detect the center of the corridor and the position of its follower correctly.

  17. Teleautonomous guidance for mobile robots

    NASA Technical Reports Server (NTRS)

    Borenstein, J.; Koren, Y.

    1990-01-01

    Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.

  18. Hierarchical Shared Control of Cane-Type Walking-Aid Robot

    PubMed Central

    Tao, Chunjing

    2017-01-01

    A hierarchical shared-control method for a walking-aid robot, combining human motion intention recognition with an obstacle emergency-avoidance method based on the artificial potential field (APF), is proposed in this paper. The human motion intention is obtained from the interaction force measurements of a sensory system composed of four force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-facing laser range finder (LRF) is applied to detect obstacles and guide the operator based on the repulsive force calculated by the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies is also devised according to the different states of the obstacles or emergency cases. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method based on the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method. PMID:29093805

  19. Collision recognition and direction changes for small scale fish robots by acceleration sensors

    NASA Astrophysics Data System (ADS)

    Na, Seung Y.; Shin, Daejung; Kim, Jin Y.; Lee, Bae-Ho

    2005-05-01

    Typical obstacles for the group of small-scale fish robots and submersibles constructed in our lab are walls, rocks, water plants and other nearby robots. Sonar sensors are not employed, in order to keep the robot structure simple. Except for the motors, fins and external covers, all circuits, sensors and processor cards are contained in a box of 9 x 7 x 4 cm. Therefore, image processing results are applied to avoid collisions. However, this is useful only when the obstacles are far enough away to give the image processing time to detect them. Otherwise, acceleration sensors are used to detect a collision immediately after it happens. Two 2-axis acceleration sensors are employed to measure the three components of the collision angle, the collision magnitude and the angle of robot propulsion. These data are integrated to calculate the required change in propulsion direction. The angle of a collision with an obstacle is the fundamental value for obtaining the direction change needed to design the following path. However, there is a significant amount of noise due to the caudal fin motor. Because the caudal fin provides the main propulsion for a fish robot, there is periodic swinging noise at the head of the robot, which adds a random acceleration component to the data measured at the collision. We propose an algorithm and show that MEMS-type accelerometers are very effective at providing information for direction changes, in spite of this intrinsic noise, after a small-scale fish robot collides with an obstacle.
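    As a hedged illustration of how a collision heading might be read from 2-axis accelerometer data, the toy sketch below simply thresholds the acceleration magnitude against the known fin-noise amplitude and reports the direction of the peak sample; the paper's actual noise-rejection algorithm is not reproduced here, and all amplitudes and frequencies are invented.

```python
import numpy as np

def collision_direction(accel_xy, fin_noise_amplitude, margin=3.0):
    """Return an estimated collision heading (degrees) from 2-axis accelerometer
    samples, or None if no impulse clearly exceeds the periodic fin noise."""
    accel_xy = np.asarray(accel_xy, dtype=float)
    mag = np.hypot(accel_xy[:, 0], accel_xy[:, 1])
    peak = int(np.argmax(mag))
    if mag[peak] < margin * fin_noise_amplitude:
        return None
    return float(np.degrees(np.arctan2(accel_xy[peak, 1], accel_xy[peak, 0])))

# Synthetic data: 2 Hz fin swing of amplitude 0.5 plus a collision impulse at t = 1 s
t = np.linspace(0, 2, 400)
ax = 0.5 * np.sin(2 * np.pi * 2 * t)
ay = 0.5 * np.cos(2 * np.pi * 2 * t)
ax[200] += 8.0
ay[200] += 8.0
print(collision_direction(np.column_stack([ax, ay]), fin_noise_amplitude=0.5))
```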

  20. Dynamic path planning for mobile robot based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Cai, Feng; Wang, Ying

    2017-08-01

    Robots are now used in many fields, such as cleaning, medical treatment, space exploration and disaster relief, and collision-free dynamic path planning for mobile robots has attracted increasing attention. A new path planning method is proposed in this paper. First, the motion space model of the robot is established using the MAKLINK graph method, and the A* algorithm is used to obtain the shortest path from the start point to the end point. Second, this paper proposes an effective method to detect and avoid obstacles: when an obstacle is detected on the shortest path, the robot moves to the nearest safe point and then computes the next point closest to the target. Finally, the particle swarm optimization algorithm is used to optimize the path. The experimental results show that the proposed method is effective.
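    The MAKLINK-graph construction and the PSO refinement are beyond a short example, but the intermediate shortest-path search is easy to sketch. Below is a minimal grid-based A* in Python standing in for the graph-search step; the 4-connected grid, unit step costs and Manhattan heuristic are simplifying assumptions.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (3, 3)))
```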

  1. Real-time obstacle avoidance using harmonic potential functions

    NASA Technical Reports Server (NTRS)

    Kim, Jin-Oh; Khosla, Pradeep K.

    1992-01-01

    This paper presents a new formulation of the artificial potential approach to the obstacle avoidance problem for a mobile robot or a manipulator in a known environment. Previous formulations of artificial potentials for obstacle avoidance have exhibited local minima in a cluttered environment. To build an artificial potential field, harmonic functions that completely eliminate local minima even for a cluttered environment are used. The panel method is employed to represent arbitrarily shaped obstacles and to derive the potential over the whole space. Based on this potential function, an elegant control strategy is proposed for the real-time control of a robot. The harmonic potential, the panel method, and the control strategy are tested with a bar-shaped mobile robot and a three-degree-of-freedom planar redundant manipulator.
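    A toy stand-in for the harmonic-potential idea (not the paper's panel-method formulation) can be built from logarithmic terms, each of which is harmonic away from its singularity, so their sum has no spurious interior local minima; the source weights, obstacle sampling and normalized-gradient descent below are illustrative choices only.

```python
import numpy as np

def harmonic_potential(p, goal, obstacle_points, source_weight=0.05):
    """Log 'sink' at the goal plus weak log 'sources' at obstacle boundary samples."""
    phi = np.log(np.linalg.norm(p - goal) + 1e-9)
    for q in obstacle_points:
        phi -= source_weight * np.log(np.linalg.norm(p - q) + 1e-9)
    return phi

def grad(p, goal, obstacle_points, eps=1e-4):
    """Central-difference gradient of the potential."""
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        g[i] = (harmonic_potential(p + d, goal, obstacle_points)
                - harmonic_potential(p - d, goal, obstacle_points)) / (2 * eps)
    return g

goal = np.array([5.0, 5.0])
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
# boundary samples of a circular obstacle (a crude stand-in for the paper's panels)
obstacle = np.column_stack([2.5 + 0.6 * np.cos(theta), 2.5 + 0.6 * np.sin(theta)])
p = np.array([0.0, 0.5])
for _ in range(2000):
    g = grad(p, goal, obstacle)
    p = p - 0.05 * g / (np.linalg.norm(g) + 1e-9)   # fixed-step descent
    if np.linalg.norm(p - goal) < 0.1:
        break
print(p)
```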

  2. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way, such as by adding easily identifiable beacons; the robot relies on naturally occurring objects as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot, in the real-time detection of objects using their color, and in the processing of the robot's range and vision sensor data for navigation.

  3. Assessment of a simple obstacle detection device for the visually impaired.

    PubMed

    Lee, Cheng-Lung; Chen, Chih-Yung; Sung, Peng-Cheng; Lu, Shih-Yi

    2014-07-01

    A simple obstacle detection device, based upon an automobile parking sensor, was assessed as a mobility aid for the visually impaired. A questionnaire survey for mobility needs was performed at the start of this study. After the detector was developed, five blindfolded sighted and 15 visually impaired participants were invited to conduct travel experiments under three test conditions: (1) using a white cane only, (2) using the obstacle detector only and (3) using both devices. A post-experiment interview regarding the usefulness of the obstacle detector for the visually impaired participants was performed. The results showed that the obstacle detector could augment mobility performance with the white cane. The obstacle detection device should be used in conjunction with the white cane to achieve the best mobility speed and body protection. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Detection of Obstacles in Monocular Image Sequences

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia

    1997-01-01

    The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of low-altitude flight, landing, takeoff, and taxiing phase of aircraft navigation. Automation of these functions under different weather and lighting situations, can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements the current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detection of objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution, and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part of the report is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.

  5. A method of real-time detection for distant moving obstacles by monocular vision

    NASA Astrophysics Data System (ADS)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera that cooperates with ultrasonic sensors in a low-cost configuration. We aim to detect distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Each obstacle is then separated from the others into an independent area and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car show that the method is effective for real-time detection of distant moving obstacles.
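    A hedged sketch of ego-motion-compensated frame differencing with OpenCV is given below; the corner tracker, homography-based compensation and fixed thresholds are generic choices for illustration, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def moving_obstacle_mask(prev_gray, cur_gray):
    """Warp the previous frame onto the current one with a homography estimated
    from tracked corners (ego-motion compensation), then difference the frames
    to expose independently moving regions."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_cur = pts_cur[status.ravel() == 1]
    H, _ = cv2.findHomography(good_prev, good_cur, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev_gray, H, (cur_gray.shape[1], cur_gray.shape[0]))
    diff = cv2.absdiff(cur_gray, warped)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # remove small speckle left over from imperfect compensation
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```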

  6. Assisting the visually impaired: obstacle detection and warning system by acoustic feedback.

    PubMed

    Rodríguez, Alberto; Yebes, J Javier; Alcantarilla, Pablo F; Bergasa, Luis M; Almazán, Javier; Cela, Andrés

    2012-12-17

    This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conducting technology is employed to play these sounds without interrupting the visually impaired user from hearing other important sounds from its local environment. A user study participated by four visually impaired volunteers supports the proposed system.
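    The RANSAC ground-plane step can be sketched compactly. The Python example below fits a plane to a synthetic 3D point cloud and treats the non-inliers as obstacle candidates; the thresholds, iteration count and synthetic data are illustrative, and the authors' filtering and polar-grid stages are omitted.

```python
import numpy as np

def ransac_ground_plane(points, iters=200, dist_thresh=0.03, seed=None):
    """Fit a plane n.x + d = 0 with RANSAC; the dominant plane in a
    forward-facing stereo point cloud is typically the ground."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic cloud: a noisy ground at z = 0 plus an obstacle block above it
ground = np.column_stack([np.random.uniform(-2, 2, 500),
                          np.random.uniform(0, 5, 500),
                          np.random.normal(0, 0.01, 500)])
obstacle = np.column_stack([np.random.uniform(-0.3, 0.3, 100),
                            np.random.uniform(2, 2.5, 100),
                            np.random.uniform(0.2, 1.0, 100)])
pts = np.vstack([ground, obstacle])
model, inliers = ransac_ground_plane(pts)
obstacle_candidates = pts[~inliers]       # points off the fitted ground plane
print(model[0], int(inliers.sum()), len(obstacle_candidates))
```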

  7. Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback

    PubMed Central

    Rodríguez, Alberto; Yebes, J. Javier; Alcantarilla, Pablo F.; Bergasa, Luis M.; Almazán, Javier; Cela, Andrés

    2012-01-01

    The aim of this article is focused on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conducting technology is employed to play these sounds without interrupting the visually impaired user from hearing other important sounds from its local environment. A user study participated by four visually impaired volunteers supports the proposed system. PMID:23247413

  8. Reflexive obstacle avoidance for kinematically-redundant manipulators

    NASA Technical Reports Server (NTRS)

    Karlen, James P.; Thompson, Jack M., Jr.; Farrell, James D.; Vold, Havard I.

    1989-01-01

    Dexterous telerobots incorporating 17 or more degrees of freedom operating under coordinated, sensor-driven computer control will play important roles in future space operations. They will also be used on Earth in assignments like fire fighting, construction and battlefield support. A real time, reflexive obstacle avoidance system, seen as a functional requirement for such massively redundant manipulators, was developed using arm-mounted proximity sensors to control manipulator pose. The project involved a review and analysis of alternative proximity sensor technologies for space applications, the development of a general-purpose algorithm for synthesizing sensor inputs, and the implementation of a prototypical system for demonstration and testing. A 7 degree of freedom Robotics Research K-2107HR manipulator was outfitted with ultrasonic proximity sensors as a testbed, and Robotics Research's standard redundant motion control algorithm was modified such that an object detected by sensor arrays located at the elbow effectively applies a force to the manipulator elbow, normal to the axis. The arm is repelled by objects detected by the sensors, causing the robot to steer around objects in the workspace automatically while continuing to move its tool along the commanded path without interruption. The mathematical approach formulated for synthesizing sensor inputs can be employed for redundant robots of any kinematic configuration.

  9. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  10. Assistive obstacle detection and navigation devices for vision-impaired users.

    PubMed

    Ong, S K; Zhang, J; Nee, A Y C

    2013-09-01

    Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. In this research, a novel range sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed, targeting different user groups. This paper discusses the design issues, functional modules and evaluation tests carried out for both prototypes. Implications for Rehabilitation: Visual impairment is becoming more prevalent due to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive navigation devices may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions over a larger sensing range to notify users of potential dangers in advance.

  11. State-of-the-art technologies for intrusion and obstacle detection for railroad operations

    DOT National Transportation Integrated Search

    2007-07-01

    This report provides an update on the state-of-the-art technologies with intrusion and obstacle detection capabilities for rail rights of way (ROW) and crossings. A workshop entitled Intruder and Obstacle Detection Systems (IODS) for Railroads Requir...

  12. Characterization of the Hokuyo URG-04LX laser rangefinder for mobile robot obstacle negotiation

    NASA Astrophysics Data System (ADS)

    Okubo, Yoichi; Ye, Cang; Borenstein, Johann

    2009-05-01

    This paper presents a characterization study of the Hokuyo URG-04LX scanning laser rangefinder (LRF). The Hokuyo LRF is similar in function to the Sick LRF, which has been the de-facto standard range sensor for mobile robot obstacle avoidance and mapping applications for the last decade. Problems with the Sick LRF are its relatively large size, weight, and power consumption, allowing its use only on relatively large mobile robots. The Hokuyo LRF is substantially smaller, lighter, and consumes less power, and is therefore more suitable for small mobile robots. The question is whether it performs just as well as the Sick LRF in typical mobile robot applications. In 2002, two of the authors of the present paper published a characterization study of the Sick LRF. For the present paper we used the exact same test apparatus and test procedures as we did in the 2002 paper, but this time to characterize the Hokuyo LRF. As a result, we are in the unique position of being able to provide not only a detailed characterization study of the Hokuyo LRF, but also to compare the Hokuyo LRF with the Sick LRF under identical test conditions. Among the tested characteristics are sensitivity to a variety of target surface properties and incidence angles, which may potentially affect the sensing performance. We also discuss the performance of the Hokuyo LRF with regard to the mixed pixels problem associated with LRFs. Lastly, the present paper provides a calibration model for improving the accuracy of the Hokuyo LRF.

  13. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    PubMed Central

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is complex, especially for micro and small aerial vehicles, due to Size, Weight and Power (SWaP) constraints. Therefore, lightweight sensors (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles and then extracts the obstacles that are likely to be approaching the UAV. Second, by comparing the area ratio of the obstacle and the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works. PMID:28481277
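    The core "size expansion" cue can be illustrated with a convex-hull area ratio between matched feature points in consecutive frames, as sketched below; the feature matching itself is assumed to have been done already, and the collision-decision threshold (1.2 here) is an arbitrary example value.

```python
import numpy as np
from scipy.spatial import ConvexHull

def expansion_ratio(prev_points, cur_points):
    """Ratio of convex-hull areas of matched feature points between frames;
    a ratio persistently above 1 suggests the object is approaching."""
    if len(prev_points) < 3 or len(cur_points) < 3:
        return None
    area_prev = ConvexHull(np.asarray(prev_points)).volume   # 2D hull: .volume is the area
    area_cur = ConvexHull(np.asarray(cur_points)).volume
    return area_cur / max(area_prev, 1e-9)

prev_pts = [(100, 100), (140, 100), (140, 150), (100, 150)]
cur_pts = [(95, 95), (150, 95), (150, 160), (95, 160)]        # object appears larger
ratio = expansion_ratio(prev_pts, cur_pts)
print(ratio, "possible collision course" if ratio and ratio > 1.2 else "clear")
```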

  14. Study and realization of an obstacle detection infrared system for automotive use

    NASA Astrophysics Data System (ADS)

    Alaouiamine, Mohammed

    1991-08-01

    The main technological options available in the field of obstacle detection are presented. Ultrasound, microwave, and infrared detection systems are reviewed. The reasons for choosing an infrared solution are outlined. The problems involved in developing an obstacle detection system in the near infrared are discussed. Weather condition effects, interference limitations due to multiple onboard sensors, and range detection influence are some of the problems studied. A collimated, mechanically scanned, and pulsed infrared beam is proposed to overcome some of these problems. Performances of a first and second prototype made using this system are presented.

  15. Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control

    PubMed Central

    Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda

    2017-01-01

    Some insects or mammals use antennae or whiskers to detect obstacles or recognize objects by the sense of touch, in environments in which other senses, such as vision, cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree-of-freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam and a load cell sensor that detects the contact of the beam with an object. It is found that the efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. These issues are severely impaired by the vibrations that appear in the antenna during its movement. However, these antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control scheme that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these proposed algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations. PMID:28406449

  16. Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control.

    PubMed

    Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda

    2017-04-13

    Some insects or mammals use antennae or whiskers to detect obstacles or recognize objects by the sense of touch, in environments in which other senses, such as vision, cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree-of-freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam and a load cell sensor that detects the contact of the beam with an object. It is found that the efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. These issues are severely impaired by the vibrations that appear in the antenna during its movement. However, these antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control scheme that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these proposed algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations.

  17. Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.

  18. Research on the inspection robot for cable tunnel

    NASA Astrophysics Data System (ADS)

    Xin, Shihao

    2017-03-01

    The robot consists of a mechanical obstacle-crossing part, a dual-mode communication part, a remote control part and monitoring software. The mechanical obstacle-crossing part mainly uses a tracked mobile robot mechanism, with auxiliary swing arms to facilitate the design and installation of the robot. The dual-mode communication part combines wired and wireless communication, which greatly improves the communication range of the robot: wired communication is used when the robot is controlled beyond the wireless range, and wireless communication is used otherwise. The remote control part mainly handles the inspection robot's walking, navigation, positioning, identification and control of the pan-tilt platform. To improve operational reliability, an industrial PC (IPC) is preliminarily selected as the control core, and a hierarchical structure is taken as the design basis for the mobile body. The monitoring software is the core of the robot; it provides basic diagnosis and can replace simple manual fault judgment, so that the robot acts as a remote actuator and staff only need to operate the remote control rather than be present at the scene. The four parts are independent of each other yet interrelated, achieving both structural independence and coherence and easing maintenance and coordination. With real-time positioning and remote control, the robot greatly improves inspection operations. Remote monitoring avoids direct contact between staff and the line, thereby reducing accidents and casualties, which is of far-reaching significance for the safety of inspection work.

  19. The application of Markov decision process in restaurant delivery robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    A restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into the aisles and customers coming and going, so traditional path planning algorithms are not ideal. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm based on the traditional Markov decision process. First, MDR is used to plan a global path, and the robot navigates along it. When the sensor detects no obstruction in the state ahead, the immediate reward of that state is increased; when the sensor detects an obstacle ahead, a new global path that avoids the obstacle is planned with the current position as the starting point, and the immediate reward of that state is reduced. This continues until the target is reached. After the robot has learned for a period of time, it can avoid places where obstacles are frequently present when planning its path. Analysis of the simulation experiments shows that the algorithm achieves good results in global path planning in a dynamic environment.

  20. Fast and reliable obstacle detection and segmentation for cross-country navigation

    NASA Technical Reports Server (NTRS)

    Talukder, A.; Manduchi, R.; Rankin, A.; Matthies, L.

    2002-01-01

    Obstacle detection is one of the main components of the control system of autonomous vehicles. In the case of indoor/urban navigation, obstacles are typically defined as surface points that are higher than the ground plane. This characterization, however, cannot be used in cross-country and unstructured environments, where the notion of ground plane is often not meaningful.

  1. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

    Collision-free optimal motion and trajectory planning for robotic manipulators is solved by the sequential gradient restoration algorithm. Numerical examples of a two degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and obstacle avoidance scheme. The obstacle is deliberately placed midway along, or even further inside, the previous obstacle-free optimal trajectory. For the minimum-time objective, the trajectory grazes the obstacle, and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can deal with multiple obstacles of any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators with any number of DOFs and any performance index, as well as to mobile robots. Since this method generates the optimal solution based on the Pontryagin extremum principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
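    An ellipsoidal obstacle penalty of the kind mentioned above can be sketched as follows; the quadratic penetration penalty and its weight are illustrative choices rather than the paper's exact penalty functions.

```python
import numpy as np

def ellipsoid_penalty(x, centers, axes, weight=100.0):
    """Penalty added to the motion-time cost: zero outside each ellipsoid,
    growing quadratically with penetration depth inside it."""
    penalty = 0.0
    for c, a in zip(centers, axes):
        # normalized ellipsoid "distance": < 1 inside, 1 on the surface, > 1 outside
        g = np.sum(((x - c) / a) ** 2)
        if g < 1.0:
            penalty += weight * (1.0 - g) ** 2
    return penalty

centers = [np.array([1.0, 0.5]), np.array([2.0, -0.5])]
axes = [np.array([0.4, 0.2]), np.array([0.3, 0.3])]
for p in (np.array([0.0, 0.0]), np.array([1.1, 0.5])):
    print(p, ellipsoid_penalty(p, centers, axes))
```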

  2. Generic, scalable and decentralized fault detection for robot swarms.

    PubMed

    Tarapore, Danesh; Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation.

  3. Generic, scalable and decentralized fault detection for robot swarms

    PubMed Central

    Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation. PMID:28806756

  4. Research on the attitude detection technology of the tetrahedron robot

    NASA Astrophysics Data System (ADS)

    Gong, Hao; Chen, Keshan; Ren, Wenqiang; Cai, Xin

    2017-10-01

    Traditional attitude detection techniques cannot solve the attitude detection problem for polyhedral robots. We therefore propose a novel multi-sensor data fusion algorithm based on the Kalman filter, investigating a tetrahedron robot as the test case. We devise an attitude detection system for the polyhedral robot and verify the data fusion algorithm. The results show that the minimal attitude detection system we devise can capture the attitudes of the tetrahedron robot under different working conditions, which confirms that the kinematics model we establish for the tetrahedron robot is correct and that the attitude detection system is feasible.
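
    The abstract does not give the filter equations, so the following is only a rough illustration: a minimal Python sketch of a linear Kalman predict/update cycle for a single attitude angle, fusing a gyroscope rate (process input) with an accelerometer-derived angle (measurement). The variable names, noise variances and sample data are assumptions, not taken from the paper.

      def kalman_attitude_step(angle, p, gyro_rate, accel_angle, dt, q=0.01, r=0.1):
          """One predict/update cycle for a single attitude angle.

          angle, p    : current angle estimate and its variance
          gyro_rate   : angular rate from the gyroscope (process input)
          accel_angle : angle inferred from the accelerometer (measurement)
          q, r        : process and measurement noise variances (assumed values)
          """
          # Predict: integrate the gyro rate over the time step.
          angle_pred = angle + gyro_rate * dt
          p_pred = p + q
          # Update: blend in the accelerometer angle via the Kalman gain.
          k = p_pred / (p_pred + r)
          angle_new = angle_pred + k * (accel_angle - angle_pred)
          p_new = (1.0 - k) * p_pred
          return angle_new, p_new

      # Example: fuse a short sequence of simulated readings.
      angle, p = 0.0, 1.0
      for gyro, acc in [(0.02, 0.01), (0.03, 0.05), (0.01, 0.07)]:
          angle, p = kalman_attitude_step(angle, p, gyro, acc, dt=0.01)
      print(angle, p)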

  5. Robotic follow system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Anderson, Matthew O [Idaho Falls, ID

    2007-05-01

    Robot platforms, methods, and computer media are disclosed. The robot platform includes perceptors, locomotors, and a system controller, which executes instructions for a robot to follow a target in its environment. The method includes receiving a target bearing and sensing whether the robot is blocked in front. If the robot is blocked in front, then the robot's motion is adjusted to avoid the nearest obstacle in front. If the robot is not blocked in front, then the method senses whether the robot is blocked toward the target bearing and, if so, sets the rotational direction opposite from the target bearing and adjusts the rotational velocity and translational velocity. If the robot is not blocked toward the target bearing, then the rotational velocity is adjusted proportional to the angle of the target bearing and the translational velocity is adjusted proportional to the distance to the nearest obstacle in front.
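
    The branching above maps directly onto a small control loop. The following is a hedged Python sketch of that logic under assumed inputs; the gain values, fixed avoidance commands and sensing flags are illustrative and are not specified in the patent abstract.

      K_ROT, K_TRANS = 0.8, 0.5          # proportional gains (assumed values)
      AVOID_ROT, AVOID_TRANS = 0.6, 0.1  # fixed avoidance commands (assumed values)

      def follow_step(target_bearing_deg, blocked_front, blocked_toward_target,
                      front_obstacle_bearing_deg, front_obstacle_dist):
          """Return (rotational_velocity, translational_velocity) for one cycle."""
          if blocked_front:
              # Blocked ahead: steer away from the nearest obstacle in front.
              direction = -1.0 if front_obstacle_bearing_deg >= 0 else 1.0
              return direction * AVOID_ROT, AVOID_TRANS
          if blocked_toward_target:
              # Rotate opposite to the target bearing and slow down.
              direction = -1.0 if target_bearing_deg >= 0 else 1.0
              return direction * AVOID_ROT, AVOID_TRANS
          # Clear path: rotation proportional to the bearing angle,
          # translation proportional to the free distance in front.
          return K_ROT * target_bearing_deg, K_TRANS * front_obstacle_dist

      # Example: target 15 degrees to the right, path clear, 2.0 m of free space ahead.
      print(follow_step(15.0, False, False, 0.0, 2.0))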

  6. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    NASA Astrophysics Data System (ADS)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semi-autonomous nature for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure that obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps the users pinpoint source information or supports the operator with the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to the traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.

  7. Obstacle Detection Algorithms for Rotorcraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)

    2001-01-01

    In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel level and at the wire level. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.

  8. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments.

    PubMed

    Ravankar, Abhijeet; Ravankar, Ankit A; Kobayashi, Yukinori; Emaru, Takanori

    2017-08-15

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map update, by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. In the proposed system, the driver robot performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system can robustly recover from the 'driver-lost' scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments, considering factors like service time and task priority, with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should or should not be allowed.

  9. Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    PubMed Central

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-01-01

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a strong need for algorithms that provide robots with this sort of skill, from locating the objects needed to accomplish a task to treating those objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. It aims to use current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and the two are combined to overcome their respective limitations. Finally, the code is tested on a real robot to prove its accuracy and efficiency. PMID:24732101

  10. Robotic Follow Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can be used with thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  11. Evolutionary Design of a Robotic Material Defect Detection System

    NASA Technical Reports Server (NTRS)

    Ballard, Gary; Howsman, Tom; Craft, Mike; ONeil, Daniel; Steincamp, Jim; Howell, Joe T. (Technical Monitor)

    2002-01-01

    During the post-flight inspection of SSME engines, several inaccessible regions must be disassembled to inspect for defects such as cracks, scratches, gouges, etc. An improvement to the inspection process would be the design and development of very small robots capable of penetrating these inaccessible regions and detecting the defects. The goal of this research was to utilize an evolutionary design approach for the robotic detection of these types of defects. A simulation and visualization tool was developed prior to receiving the hardware as a development test bed. A small, commercial off-the-shelf (COTS) robot was selected from several candidates as the proof-of-concept robot. The basic approach to detect the defects was to utilize Cadmium Sulfide (CdS) sensors to detect changes in contrast of an illuminated surface. A neural network, optimally designed utilizing a genetic algorithm, was employed to detect the presence of the defects (cracks). By utilizing the COTS robot and CdS sensors, the research successfully demonstrated that an evolutionarily designed neural network can detect the presence of surface defects.

  12. Velodyne HDL-64E lidar for unmanned surface vehicle obstacle detection

    NASA Astrophysics Data System (ADS)

    Halterman, Ryan; Bruch, Michael

    2010-04-01

    The Velodyne HDL-64E is a 64-laser 3D (360×26.8 degree) scanning LIDAR. It was designed to fill the perception needs of DARPA Urban Challenge vehicles. As such, it was principally intended for ground use. This paper presents the performance of the HDL-64E as it relates to the marine environment for unmanned surface vehicle (USV) obstacle detection and avoidance. We describe the sensor's capacity for discerning relevant objects at sea, both through subjective observations of the raw data and through a rudimentary automated obstacle detection algorithm. We also discuss some of the complications that have arisen with the sensor.

  13. Modular Countermine Payload for Small Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman Herman; Doug Few; Roelof Versteeg

    2010-04-01

    Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.

  14. Modular countermine payload for small robots

    NASA Astrophysics Data System (ADS)

    Herman, Herman; Few, Doug; Versteeg, Roelof; Valois, Jean-Sebastien; McMahill, Jeff; Licitra, Michael; Henciak, Edward

    2010-04-01

    Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multimission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.

  15. Advanced robot locomotion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry

    This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivery of special payloads to unique locations that require human locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.

  16. Autonomous Navigation by a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand

    2005-01-01

    ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
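
    Both planners rely on an occupancy-grid representation. As a toy illustration only (the cell size, height threshold and straight-line check below are assumptions, not the ROAMAN implementation), marking stereo range points into a grid and testing a waypoint leg for blockage might look like this in Python:

      import numpy as np

      CELL = 0.25  # grid resolution in meters (assumed)

      def to_cell(x, y):
          return int(round(x / CELL)), int(round(y / CELL))

      def mark_obstacles(grid, points):
          """Mark 3D stereo range points that stand above the ground as occupied."""
          for x, y, z in points:
              if z > 0.2:  # height threshold for "obstacle" (assumed)
                  i, j = to_cell(x, y)
                  if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                      grid[i, j] = 1

      def leg_is_clear(grid, start, end, samples=50):
          """Check a straight leg between two waypoints against occupied cells."""
          for t in np.linspace(0.0, 1.0, samples):
              x = start[0] + t * (end[0] - start[0])
              y = start[1] + t * (end[1] - start[1])
              i, j = to_cell(x, y)
              if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1] and grid[i, j]:
                  return False
          return True

      grid = np.zeros((40, 40), dtype=np.uint8)
      mark_obstacles(grid, [(3.0, 3.0, 0.5)])            # one rock-like return
      print(leg_is_clear(grid, (0.0, 0.0), (5.0, 5.0)))  # leg crosses the occupied cell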

  17. Hopping Robot with Wheels

    NASA Technical Reports Server (NTRS)

    Barlow, Edward; Marzwell, Nevellie; Fuller, Sawyer; Fionni, Paolo; Tretton, Andy; Burdick, Joel; Schell, Steve

    2003-01-01

    A small prototype mobile robot is capable of (1) hopping to move rapidly or avoid obstacles and then (2) moving relatively slowly and precisely on the ground by use of wheels in the manner of previously reported exploratory robots of the "rover" type. This robot is a descendant of a more primitive hopping robot described in "Minimally Actuated Hopping Robot" (NPO- 20911), NASA Tech Briefs, Vol. 26, No. 11 (November 2002), page 50. There are many potential applications for robots with hopping and wheeled-locomotion (roving) capabilities in diverse fields of endeavor, including agriculture, search-and-rescue operations, general military operations, removal or safe detonation of land mines, inspection, law enforcement, and scientific exploration on Earth and remote planets. The combination of hopping and roving enables this robot to move rapidly over very rugged terrain, to overcome obstacles several times its height, and then to position itself precisely next to a desired target. Before a long hop, the robot aims itself in the desired hopping azimuth and at a desired takeoff angle above horizontal. The robot approaches the target through a series of hops and short driving operations utilizing the steering wheels for precise positioning.

  18. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments

    PubMed Central

    Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori

    2017-01-01

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map update, by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. In the proposed system, the driver robot performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system can robustly recover from the ‘driver-lost’ scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments, considering factors like service time and task priority, with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should or should not be allowed. PMID:28809803

  19. Neural network-based landmark detection for mobile robot

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Minoru; Okada, Hiroyuki; Watanabe, Nobuo

    1996-03-01

    A mobile robot essentially has access only to relative position data about the real world. However, in many cases the robot needs to know where it is located, and a useful method in those cases is to detect landmarks in the real world and adjust the robot's position estimate using the detected landmarks. From this point of view, it is essential to develop a mobile robot that can accomplish path planning successfully using natural or artificial landmarks. However, artificial landmarks are often difficult to construct, and natural landmarks are very complicated to detect. In this paper, a method of acquiring the landmarks necessary for path planning from the mobile robot's sensor data is described. The landmarks discussed here are natural ones, obtained by compressing sensor data from the robot. The sensor data are compressed and memorized using a five-layered neural network called a sand glass model. The input and output data that the neural network should learn are identical copies of the robot's sensor data. The intermediate output of the network provides a compressed representation that expresses the landmark data. Even if the sensor data are ambiguous or voluminous, it is easy to detect the landmark because the data are compressed and classified by the neural network. Using the last three layers, the compressed landmark data can be expanded back to an approximation of the original data. The trained neural network categorizes detected sensor data into known landmarks.
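
    As an illustration of the sand glass idea, the following is a minimal numpy sketch of a narrow-bottleneck autoencoder trained to reproduce a robot's sensor vectors, with the bottleneck activation standing in for the compressed landmark code. The network is reduced to a single hidden (bottleneck) layer for brevity, and the layer sizes, learning rate and toy data are assumptions rather than the paper's five-layer configuration.

      import numpy as np

      rng = np.random.default_rng(0)
      scans = rng.random((200, 16))            # toy sensor data: 200 range scans of 16 readings

      n_in, n_code = 16, 4                     # bottleneck size is an assumption
      W1 = rng.normal(0, 0.1, (n_in, n_code))  # encoder weights
      b1 = np.zeros(n_code)
      W2 = rng.normal(0, 0.1, (n_code, n_in))  # decoder weights
      b2 = np.zeros(n_in)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      lr = 0.5
      for _ in range(2000):
          code = sigmoid(scans @ W1 + b1)      # compressed landmark code
          recon = sigmoid(code @ W2 + b2)      # reconstruction of the scan
          err = recon - scans
          # Backpropagate the squared reconstruction error.
          d_out = err * recon * (1 - recon)
          d_code = (d_out @ W2.T) * code * (1 - code)
          W2 -= lr * code.T @ d_out / len(scans)
          b2 -= lr * d_out.mean(axis=0)
          W1 -= lr * scans.T @ d_code / len(scans)
          b1 -= lr * d_code.mean(axis=0)

      landmark_codes = sigmoid(scans @ W1 + b1)  # compressed features used as landmarks
      print(landmark_codes.shape)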

  20. Obstacle avoidance system with sonar sensing and fuzzy logic

    NASA Astrophysics Data System (ADS)

    Chiang, Wen-chuan; Kelkar, Nikhal; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of an obstacle avoidance system using sonar sensors for a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. The obstacle avoidance system is based on a micro-controller interfaced with multiple ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a distance measurement back to the computer via the serial line. This design yields a portable independent system. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous obstacle avoidance controller applicable for any mobile vehicle with only minor adaptations.
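
    The abstract does not specify the serial message format, so the host-side loop below is only a hedged sketch using pyserial; the port name, baud rate and the assumption of one newline-terminated distance (in centimeters) per message are illustrative.

      import serial  # pyserial

      # Port name and protocol are assumptions made for illustration only.
      with serial.Serial("/dev/ttyUSB0", 9600, timeout=1.0) as port:
          while True:
              line = port.readline().decode("ascii", errors="ignore").strip()
              if not line:
                  continue
              try:
                  distance_cm = float(line)
              except ValueError:
                  continue  # skip malformed readings
              if distance_cm < 100.0:
                  print(f"Obstacle at {distance_cm:.0f} cm: start avoidance maneuver")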

  1. Hopping robot

    DOEpatents

    Spletzer, Barry L.; Fischer, Gary J.; Marron, Lisa C.; Martinez, Michael A.; Kuehl, Michael A.; Feddema, John T.

    2001-01-01

    The present invention provides a hopping robot that includes a misfire-tolerant linear actuator suitable for long trips, low-energy steering and control, reliable low-energy righting, and miniature low-energy fuel control. The present invention provides a robot with hopping mobility, capable of traversing obstacles significant in size relative to the robot and capable of operation on unpredictable terrain over long range. The present invention further provides a hopping robot with misfire-tolerant combustion actuation, and with combustion actuation suitable for use in oxygen-poor environments.

  2. Application of ant colony algorithm in path planning of the data center room robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Ma, Jianming; Wang, Ying

    2017-05-01

    Taking an Internet Data Center (IDC) room patrol robot as the application background, this work addresses the robot's ability to autonomously avoid obstacles and plan paths while searching, so that room patrol missions can be worked out in advance. The simulation results show that the improved ant colony algorithm, applied to obstacle avoidance planning for the IDC room patrol robot, makes the robot follow an optimal or suboptimal, safe, obstacle-avoiding path to the target point and complete the task, which proves the feasibility of the method.

  3. Guarded Motion for Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The Idaho National Laboratory (INL) has created codes that ensure that a robot will come to a stop at a precise, specified distance from any obstacle regardless of the robot's initial speed, its physical characteristics, and the responsiveness of the low-level motor control schema. This Guarded Motion for Mobile Robots system iteratively adjusts the robot's action in response to information about the robot's environment.

  4. Range Sensor-Based Efficient Obstacle Avoidance through Selective Decision-Making.

    PubMed

    Shim, Youngbo; Kim, Gon-Woo

    2018-03-29

    In this paper, we address a collision avoidance method for mobile robots. Many conventional obstacle avoidance methods have been focused solely on avoiding obstacles. However, this can cause instability when passing through a narrow passage, and can also generate zig-zag motions. We define two strategies for obstacle avoidance, known as Entry mode and Bypass mode. Entry mode is a pattern for passing through the gap between obstacles, while Bypass mode is a pattern for making a detour around obstacles safely. With these two modes, we propose an efficient obstacle avoidance method based on the Expanded Guide Circle (EGC) method with selective decision-making. The simulation and experiment results show the validity of the proposed method.

  5. Multisensor-based human detection and tracking for mobile service robots.

    PubMed

    Bellotto, Nicola; Hu, Huosheng

    2009-02-01

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in the surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to also be very discriminative in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' position using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.

  6. Integrating Millimeter Wave Radar with a Monocular Vision Sensor for On-Road Obstacle Detection Applications

    PubMed Central

    Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng

    2011-01-01

    This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on a visual attention mechanism and the driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment is put forward that is easy to operate and requires neither radar reflection intensity nor special tools. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible. PMID:22164117

  7. Integrating millimeter wave radar with a monocular vision sensor for on-road obstacle detection applications.

    PubMed

    Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng

    2011-01-01

    This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on a visual attention mechanism and the driver's visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment is put forward that is easy to operate and requires neither radar reflection intensity nor special tools. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible.

  8. Optimal path planning for a mobile robot using cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2016-03-01

    The shortest/optimal path planning is essential for efficient operation of autonomous vehicles. In this article, a new nature-inspired meta-heuristic algorithm has been applied for mobile robot path planning in an unknown or partially known environment populated by a variety of static obstacles. This meta-heuristic algorithm is based on the Lévy flight behaviour and brood parasitic behaviour of cuckoos. A new objective function has been formulated between the robot, the target and the obstacles, which satisfies the conditions of obstacle avoidance and target-seeking behaviour of robots present in the terrain. Depending upon the objective function value of each nest (cuckoo) in the swarm, the robot avoids obstacles and proceeds towards the target. A smooth optimal trajectory is produced by this algorithm as the robot reaches its goal. Some simulation and experimental results are presented at the end of the paper to show the effectiveness of the proposed navigational controller.
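
    The paper's objective function is not reproduced in the abstract; a simple hedged form that rewards proximity to the target and penalizes candidate points that come within a safety radius of an obstacle might look like the following Python sketch (the weights, safety radius and linear penalty are assumptions):

      import math

      def objective(candidate, target, obstacles, w_target=1.0, w_obs=5.0, safe_radius=0.5):
          """Lower is better: distance to the target plus a penalty for each
          obstacle closer than safe_radius. Weights are illustrative assumptions."""
          cost = w_target * math.dist(candidate, target)
          for obs in obstacles:
              d = math.dist(candidate, obs)
              if d < safe_radius:
                  cost += w_obs * (safe_radius - d)
          return cost

      # Each nest (cuckoo) would evaluate its candidate waypoint with this cost
      # before a Levy-flight perturbation keeps or replaces it.
      print(objective((1.0, 1.0), (5.0, 5.0), [(2.0, 1.2)]))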

  9. Reactive, Safe Navigation for Lunar and Planetary Robots

    NASA Technical Reports Server (NTRS)

    Utz, Hans; Ruland, Thomas

    2008-01-01

    When humans return to the moon, astronauts will be accompanied by robotic helpers. Enabling robots to safely operate near astronauts on the lunar surface has the potential to significantly improve the efficiency of crew surface operations. Safely operating robots in close proximity to astronauts on the lunar surface requires reactive obstacle avoidance capabilities not available on existing planetary robots. In this paper we present work on safe, reactive navigation using a stereo-based high-speed terrain analysis and obstacle avoidance system. Advances in the design of the algorithms allow it to run terrain analysis and obstacle avoidance algorithms at full frame rate (30 Hz) on off-the-shelf hardware. The results of this analysis are fed into a fast, reactive path selection module, enforcing the safety of the chosen actions. The key components of the system are discussed and test results are presented.

  10. A Multimodal Emotion Detection System during Human-Robot Interaction

    PubMed Central

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598

  11. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field.

    PubMed

    Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik

    2016-11-11

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at a short range (0-30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).

  12. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field

    PubMed Central

    Christiansen, Peter; Nielsen, Lars N.; Steen, Kim A.; Jørgensen, Rasmus N.; Karstoft, Henrik

    2016-01-01

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN. RCNN has a similar performance at a short range (0–30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit). PMID:27845717

  13. Evolutionary programming-based univector field navigation method for fast mobile robots.

    PubMed

    Kim, Y J; Kim, J H; Kwon, D S

    2001-01-01

    Most navigation techniques with obstacle avoidance do not consider the robot orientation at the target position. These techniques deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation which introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position and obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for the final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out for a fast-moving mobile robot to demonstrate the effectiveness of the proposed scheme.

  14. Detection And Classification Of Web Robots With Honeypots

    DTIC Science & Technology

    2016-03-01

    Master's thesis by Sean F. McKenna, March 2016 (thesis advisor: Neil Rowe; second reader: Justin P. Rohrer). Abstract (excerpt): Web robots are automated programs that systematically browse the Web, collecting information. Although

  15. Explosive vapor detection payload for small robots

    NASA Astrophysics Data System (ADS)

    Stimac, Phil J.; Pettit, Michael; Wetzel, John P.; Haas, John W.

    2013-05-01

    Detection of explosive hazards is a critical component of enabling and improving operational mobility and protection of US Forces. The Autonomous Mine Detection System (AMDS) developed by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is addressing this challenge for dismounted soldiers. Under the AMDS program, ARA has developed a vapor sampling system that enhances the detection of explosive residues using commercial-off-the-shelf (COTS) sensors. The Explosives Hazard Trace Detection (EHTD) payload is designed for plug-and-play installation and operation on small robotic platforms, addressing critical Army needs for more safely detecting concealed or exposed explosives in areas such as culverts, walls and vehicles. In this paper, we describe the development, robotic integration and performance of the explosive vapor sampling system, which consists of a sampling "head," a vapor transport tube and an extendable "boom." The sampling head and transport tube are integrated with the boom, allowing samples to be collected from targeted surfaces up to 7-ft away from the robotic platform. During sample collection, an IR lamp in the sampling head is used to heat a suspected object/surface and the vapors are drawn through the heated vapor transport tube to an ion mobility spectrometer (IMS) for detection. The EHTD payload is capable of quickly (less than 30 seconds) detecting explosives such as TNT, PETN, and RDX at nanogram levels on common surfaces (brick, concrete, wood, glass, etc.).

  16. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in an indoor environment and uses a very simple technique based on a few pre-stored floor images. In the indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images are considered as reference images. The algorithm acquires an input image frame, and then a region of interest is selected and scanned for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, then there is no obstacle in the next frame. If the mean square error is greater than α, then there are two possibilities: either there is an obstacle or the floor type has changed. In order to check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold value α, then the floor has changed; otherwise there is an obstacle. The proposed algorithm works in real time and 96% accuracy has been achieved.
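
    A minimal Python sketch of the frame-comparison step described above, assuming equally sized grayscale numpy arrays and a hypothetical value for the threshold α:

      import numpy as np

      ALPHA = 200.0  # threshold on the mean square error (assumed value)

      def mse(a, b):
          """Mean square error between two equally sized grayscale frames."""
          return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

      def check_frame(current_roi, next_roi, floor_references):
          """Return 'clear', 'floor_changed' or 'obstacle' following the logic above."""
          if mse(current_roi, next_roi) <= ALPHA:
              return "clear"                   # next frame still matches the current floor
          # Large change: either the floor type changed or an obstacle appeared.
          best_floor_error = min(mse(next_roi, ref) for ref in floor_references)
          return "floor_changed" if best_floor_error <= ALPHA else "obstacle"

      # Toy example with random frames; real use would crop a region of interest
      # from consecutive camera images.
      rng = np.random.default_rng(1)
      frames = [rng.integers(0, 256, (64, 64)) for _ in range(3)]
      print(check_frame(frames[0], frames[1], [frames[2]]))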

  17. Visual environment recognition for robot path planning using template matched filters

    NASA Astrophysics Data System (ADS)

    Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto

    2017-08-01

    A visual approach in environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path between multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.

  18. Two Formal Gas Models For Multi-Agent Sweeping and Obstacle Avoidance

    NASA Technical Reports Server (NTRS)

    Kerr, Wesley; Spears, Diana; Spears, William; Thayer, David

    2004-01-01

    The task addressed here is a dynamic search through a bounded region, while avoiding multiple large obstacles, such as buildings. In the case of limited sensors and communication, maintaining spatial coverage - especially after passing the obstacles - is a challenging problem. Here, we investigate two physics-based approaches to solving this task with multiple simulated mobile robots, one based on artificial forces and the other based on the kinetic theory of gases. The desired behavior is achieved with both methods, and a comparison is made between them. Because both approaches are physics-based, formal assurances about the multi-robot behavior are straightforward, and are included in the paper.

  19. Developing operation algorithms for vision subsystems in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints on the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
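
    One common way to combine these two ingredients (not necessarily the authors' exact pipeline) is scikit-image's HOG descriptor feeding a linear SVM from scikit-learn; the toy training data below is purely illustrative.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)

      def describe(patch):
          """Histogram-of-oriented-gradients descriptor for a 64x64 grayscale patch."""
          return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

      # Toy training set: random "person/obstacle" and "background" patches.
      # A real system would use labeled image crops instead.
      positives = [rng.random((64, 64)) for _ in range(20)]
      negatives = [rng.random((64, 64)) for _ in range(20)]
      X = np.array([describe(p) for p in positives + negatives])
      y = np.array([1] * len(positives) + [0] * len(negatives))

      clf = LinearSVC(C=1.0).fit(X, y)

      # Classify a new patch: 1 means "person/obstacle", 0 means "background".
      print(clf.predict([describe(rng.random((64, 64)))]))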

  20. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    PubMed

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  1. Early Obstacle Detection and Avoidance for All to All Traffic Pattern in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Huc, Florian; Jarry, Aubin; Leone, Pierre; Moraru, Luminita; Nikoletseas, Sotiris; Rolim, Jose

    This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performance in all-to-many and all-to-all traffic patterns. In a third part we prove that this algorithm produces paths that are optimal up to a constant factor of 2π + 1. In a fourth part we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally we give experimental results to illustrate the efficiency of our algorithms in different scenarios.

  2. Obstacle detection by recognizing binary expansion patterns

    NASA Technical Reports Server (NTRS)

    Baram, Yoram; Barniv, Yair

    1993-01-01

    This paper describes a technique for obstacle detection, based on the expansion of the image-plane projection of a textured object, as its distance from the sensor decreases. Information is conveyed by vectors whose components represent first-order temporal and spatial derivatives of the image intensity, which are related to the time to collision through the local divergence. Such vectors may be characterized as patterns corresponding to 'safe' or 'dangerous' situations. We show that essential information is conveyed by single-bit vector components, representing the signs of the relevant derivatives. We use two recently developed, high capacity classifiers, employing neural learning techniques, to recognize the imminence of collision from such patterns.

  3. Railway obstacle detection algorithm using neural network

    NASA Astrophysics Data System (ADS)

    Yu, Mingyang; Yang, Peng; Wei, Sen

    2018-05-01

    Aiming at the difficulty of obstacle detection in outdoor railway scenes, a data-oriented method based on a neural network is proposed for locating image objects. First, we annotate objects (such as people, trains and animals) in images acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. The network is then trained with the stochastic gradient descent algorithm to learn the target image characteristics. Finally, the trained model is used to analyze an outdoor railway image; if it contains trains or other objects, an alert is issued. Experiments show that the correct warning rate reached 94.85%.

  4. Path Planning Method in Multi-obstacle Marine Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Jinpeng; Sun, Hanxv

    2017-12-01

    In this paper, an improved particle swarm optimization algorithm is proposed for underwater robots operating in complex marine environments. Path planning considers not only obstacle avoidance but also the effect of the current's direction and magnitude on the robot's dynamic performance. The algorithm uses a trunk binary tree structure to construct the path search space, and an A* heuristic search is applied in this space to find a reference path for evaluation. The particle swarm algorithm then optimizes the path by adjusting the evaluation function, which makes the underwater robot easier to control while navigating in the current and reduces its energy consumption.

  5. Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva

    1996-01-01

    This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is done in the presence of extraneous features such as tiremarks. Suitable features are extracted from the image and warping using approximately known camera and plane parameters is performed in order to compensate ego-motion as far as possible. Residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame so as to obtain more reliable estimates of their motion. Corrections are made to motion parameters with the residual disparities using a robust method, and features having large residual disparities are signaled as obstacles. Sensitivity analysis of the procedure is also studied. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.

  6. Maneuverability and mobility in palm-sized legged robots

    NASA Astrophysics Data System (ADS)

    Kohut, Nicholas J.; Birkmeyer, Paul M.; Peterson, Kevin C.; Fearing, Ronald S.

    2012-06-01

    Palm-sized legged robots show promise for military and civilian applications, including exploration of hazardous or difficult-to-reach places, search and rescue, espionage, and battlefield reconnaissance. However, they also face many technical obstacles, including, but not limited to, actuator performance, weight constraints, processing power, and power density. This paper presents an overview of several robots from the Biomimetic Millisystems Laboratory at UC Berkeley, including the OctoRoACH, a steerable, running legged robot capable of basic navigation and equipped with a camera and active tail; CLASH, a dynamic climbing robot; and BOLT, a hybrid crawling and flying robot. The paper also discusses, and presents some preliminary solutions to, the technical obstacles listed above plus issues such as robustness to unstructured environments, limited sensing and communication bandwidths, and system integration.

  7. People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments

    NASA Astrophysics Data System (ADS)

    Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.

    People detection and tracking is a key issue for social robot design and effective human robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real world scenarios it is common to find: unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people location. After segmentation, an adaptive contour people model based on people distance to the robot is used to calculate a probability of detecting people. Finally, people are detected merging the probabilities of the contour people model and by evaluating evidence over time by applying a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view with a mobile robot in real world scenarios.

  8. Multigait soft robot

    PubMed Central

    Shepherd, Robert F.; Ilievski, Filip; Choi, Wonjae; Morin, Stephen A.; Stokes, Adam A.; Mazzeo, Aaron D.; Chen, Xin; Wang, Michael; Whitesides, George M.

    2011-01-01

    This manuscript describes a unique class of locomotive robot: A soft robot, composed exclusively of soft materials (elastomeric polymers), which is inspired by animals (e.g., squid, starfish, worms) that do not have hard internal skeletons. Soft lithography was used to fabricate a pneumatically actuated robot capable of sophisticated locomotion (e.g., fluid movement of limbs and multiple gaits). This robot is quadrupedal; it uses no sensors, only five actuators, and a simple pneumatic valving system that operates at low pressures (< 10 psi). A combination of crawling and undulation gaits allowed this robot to navigate a difficult obstacle. This demonstration illustrates an advantage of soft robotics: They are systems in which simple types of actuation produce complex motion. PMID:22123978

  9. Gait development on Minitaur, a direct drive quadrupedal robot

    NASA Astrophysics Data System (ADS)

    Blackman, Daniel J.; Nicholson, John V.; Ordonez, Camilo; Miller, Bruce D.; Clark, Jonathan E.

    2016-05-01

    This paper describes the development of a dynamic, quadrupedal robot designed for rapid traversal and interaction in human environments. We explore improvements to both physical and control methods to a legged robot (Minitaur) in order to improve the speed and stability of its gaits and increase the range of obstacles that it can overcome, with an eye toward negotiating man-made terrains such as stairs. These modifications include an analysis of physical compliance, an investigation of foot and leg design, and the implementation of ground and obstacle contact sensing for inclusion in the control schemes. Structural and mechanical improvements were made to reduce undesired compliance for more consistent agreement with dynamic models, which necessitated refinement of foot design for greater durability. Contact sensing was implemented into the control scheme for identifying obstacles and deviations in surface level for negotiation of varying terrain. Overall the incorporation of these features greatly enhances the mobility of the dynamic quadrupedal robot and helps to establish a basis for overcoming obstacles.

  10. Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)

    NASA Technical Reports Server (NTRS)

    Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey

    1990-01-01

    Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules necessary to control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.

  11. Mobile Robot Navigation and Obstacle Avoidance in Unstructured Outdoor Environments

    DTIC Science & Technology

    2017-12-01

    (Fragmentary DTIC record; only excerpt snippets are available.) The excerpts describe a publish/subscribe messaging scheme in which a node that needs information from the network subscribes to a specific topic and receives the messages published to that topic, and characterize the total artificial potential field "as the sum of an attractive potential pulling the robot toward the goal…and a repulsive potential" associated with obstacles. A quoted parameter listing includes laser_max = 20 (robot laser view horizon), goaldist = 0.5 (distance metric for reaching the goal), and goali = 1.
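
    As a rough illustration of the artificial potential field idea summarized above, the following Python sketch combines an attractive force toward the goal with a repulsive force from obstacles inside a sensing horizon. The function name, gains, and step size are illustrative assumptions; only the parameter names laser_max and goaldist are borrowed from the record's excerpt.

```python
import numpy as np

def apf_step(robot, goal, obstacles, step=0.05,
             k_att=1.0, k_rep=0.5, laser_max=20.0, goaldist=0.5):
    """One gradient-descent step on an attractive + repulsive potential field.

    robot, goal: (2,) arrays; obstacles: list of (2,) positions.
    laser_max bounds the range at which obstacles exert repulsion;
    goaldist is the distance at which the goal counts as reached.
    """
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    if np.linalg.norm(goal - robot) < goaldist:
        return robot                      # goal reached, no further motion
    # attractive force pulls the robot straight toward the goal
    force = k_att * (goal - robot)
    # each obstacle inside the sensing horizon pushes the robot away
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < laser_max:
            force += k_rep * (1.0 / d - 1.0 / laser_max) * diff / d**3
    return robot + step * force

# example: one step toward the goal while skirting a single obstacle
print(apf_step([0, 0], [5, 0], [[2.5, 0.2]]))
```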

  12. Coordinated Control Of Mobile Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Applicable to variety of mobile robotic manipulators, including robots that move along tracks (typically, painting and welding robots), robots mounted on gantries and capable of moving in all three dimensions, wheeled robots, and compound robots (consisting of robots mounted on other robots). Theoretical basis discussed in several prior articles in NASA Tech Briefs, including "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes With Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  13. Design and Development of Mopping Robot-'HotBot'

    NASA Astrophysics Data System (ADS)

    Khan, M. R.; Huq, N. M. L.; Billah, M. M.; Ahmmad, S. M.

    2013-12-01

    To enjoy a healthy, comfortable, and fresh civilized life we need to do some unhealthy household chores. Cleaning a dirty floor with a mop is one of the most unpleasant household jobs. Mopping robots are a solution to this problem; however, these robots are not yet smart enough. Several factors limit their efficiency, e.g. cleaning sticky dirt, leaving a dry floor after cleaning, the need for monitoring, and cost. 'HotBot' is a mopping robot that can clean a dirty floor efficiently, leaving no sticky dirt behind. Hot water can be used for heavy stains, or normal water in usual situations for economy. It needs neither to be monitored during mopping nor to have the floor wiped afterwards. 'HotBot' has sensors to detect obstacles and a control mechanism to avoid them. Moreover, it cleans sequentially and is equipped with several accident-protection systems. It is also cost-effective compared to the mopping robots available so far.

  14. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, Aed M.; Ward, Clyde R.; Jones, Joel D.; Mallet, William R.; Harpring, Larry J.; Collins, Montenius X.; Anderson, Erin K.

    1999-01-01

    A mobile robotic system that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot according to pre-programmed paths. The operators may revise the preselected movements of the robotic system by Ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds from 1/2 inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console.

  15. Monitoring robot actions for error detection and recovery

    NASA Technical Reports Server (NTRS)

    Gini, M.; Smith, R.

    1987-01-01

    Reliability is a serious problem in computer controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe preliminary experiments with a system that they designed and constructed.

  16. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  17. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    A visitor to the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event helps demonstrate how a NASA rover design enables the rover to climb over obstacles higher than its own body on Saturday, June 16, 2012 at WPI in Worcester, Mass. The event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  18. An integrated collision prediction and avoidance scheme for mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1993-01-01

    A formulation that makes possible the integration of collision prediction and avoidance stages for mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the distance between the robot and a moving obstacle remains nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. A feedback control is developed and local asymptotic stability is proved if the velocity of the moving obstacle is bounded. Furthermore, a solution to the problem of inverse dynamics for the mobile robot is given. Simulation results verify the value of the proposed strategy.

  19. Wheeled hopping robot

    DOEpatents

    Fischer, Gary J [Albuquerque, NM

    2010-08-17

    The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.

  20. Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee; Gandhi, Tarak; Hartman, Kerry; Yang, Mau-Tsuen

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design.

  1. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.

    PubMed

    Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A

    2017-07-01

    Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.

  2. A real-time robot arm collision detection system

    NASA Technical Reports Server (NTRS)

    Shaffer, Clifford A.; Herb, Gregory M.

    1990-01-01

    A data structure and update algorithm are presented for a prototype real time collision detection safety system for a multi-robot environment. The data structure is a variant of the octree, which serves as a spatial index. An octree recursively decomposes 3-D space into eight equal cubic octants until each octant meets some decomposition criteria. The octree stores cylspheres (cylinders with spheres on each end) and rectangular solids as primitives (other primitives can easily be added as required). These primitives make up the two seven-degree-of-freedom robot arms and the environment modeled by the system. Octree nodes containing more than a predetermined number N of primitives are decomposed. This rule keeps the octree small, as the entire environment for the application can be modeled using a few dozen primitives. As robot arms move, the octree is updated to reflect their changed positions. During most update cycles, any given primitive does not change which octree nodes it is in. Thus, modification to the octree is rarely required. Incidents in which one robot arm comes too close to another arm or an object are reported. Cycle time for interpreting current joint angles, updating the octree, and detecting/reporting imminent collisions averages 30 milliseconds on an Intel 80386 processor running at 20 MHz.
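
    As a rough illustration of the spatial-index idea described above, the sketch below builds an octree that stores primitives as axis-aligned bounding boxes and decomposes any octant holding more than a threshold number of primitives. The threshold value, the minimum octant size, and the use of plain AABBs in place of the cylsphere and rectangular-solid primitives are simplifying assumptions, not details from the record.

```python
from dataclasses import dataclass, field

MAX_PRIMS = 4      # decomposition threshold N (assumed value)
MIN_SIZE = 0.1     # stop splitting below this octant edge length

@dataclass
class Octree:
    center: tuple            # (x, y, z) centre of this cubic node
    size: float              # edge length of the node
    prims: list = field(default_factory=list)   # primitives (AABBs) in this node
    children: list = None                        # 8 sub-octants once decomposed

    def insert(self, aabb):
        """aabb = (min_xyz, max_xyz); a primitive may land in several octants."""
        if self.children is not None:
            for child in self.children:
                if child._overlaps(aabb):
                    child.insert(aabb)
            return
        self.prims.append(aabb)
        if len(self.prims) > MAX_PRIMS and self.size > MIN_SIZE:
            self._split()

    def _overlaps(self, aabb):
        lo, hi = aabb
        return all(lo[i] <= self.center[i] + self.size / 2 and
                   hi[i] >= self.center[i] - self.size / 2 for i in range(3))

    def _split(self):
        h = self.size / 4
        self.children = [Octree((self.center[0] + dx, self.center[1] + dy,
                                 self.center[2] + dz), self.size / 2)
                         for dx in (-h, h) for dy in (-h, h) for dz in (-h, h)]
        prims, self.prims = self.prims, []
        for p in prims:              # redistribute stored primitives to children
            self.insert(p)

tree = Octree(center=(0, 0, 0), size=8.0)
tree.insert(((0, 0, 0), (1, 1, 1)))   # a box-shaped primitive
```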

  3. Design of an autonomous exterior security robot

    NASA Technical Reports Server (NTRS)

    Myers, Scott D.

    1994-01-01

    This paper discusses the requirements and preliminary design of a robotic vehicle for performing autonomous exterior perimeter security patrols around warehouse areas, ammunition supply depots, and industrial parks for the U.S. Department of Defense. The preliminary design allows for the operation of up to eight vehicles in a six kilometer by six kilometer zone with autonomous navigation and obstacle avoidance. In addition to detection of crawling intruders at 100 meters, the system must perform real-time inventory checking and database comparisons using a microwave tags system.

  4. Controlling the autonomy of a reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David

    2004-09-01

    In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environment. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual controls, obstacle avoidance, wall and corridor following, waypoint and planned travelling. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environment and discuss our planned future improvements.

  5. Visual Detection and Tracking System for a Spherical Amphibious Robot

    PubMed Central

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-01-01

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134

  6. Visual Detection and Tracking System for a Spherical Amphibious Robot.

    PubMed

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-04-15

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.

  7. A Survey of Bioinspired Jumping Robot: Takeoff, Air Posture Adjustment, and Landing Buffer

    PubMed Central

    2017-01-01

    A bioinspired jumping robot has a strong ability to overcome obstacles. It can be applied in complex and changeable environments, such as exploration of planetary surfaces, post-disaster relief, and military reconnaissance, so bioinspired jumping robots have broad application prospects. The jumping process of the robot can be divided into three stages: takeoff, air posture adjustment, and landing buffer. The motivation of this review is to survey the published research on bioinspired jumping robots across these three stages. The movement performance of the bioinspired jumping robots is then analyzed and compared quantitatively. The limitations of the research on bioinspired jumping robots are also discussed: the mechanisms of biological motion are not yet thoroughly understood, the methods used for structural design, material selection, and control remain traditional, and energy utilization is low, all of which keep the robots far from practical application. Finally, the development trend is summarized. This review provides a reference for further research on bioinspired jumping robots. PMID:29311756

  8. Recognition of three dimensional obstacles by an edge detection scheme. [for Mars roving vehicle using laser range finder

    NASA Technical Reports Server (NTRS)

    Reed, M. A.

    1974-01-01

    The need for an obstacle detection system on the Mars roving vehicle was assumed, and a practical scheme was investigated and simulated. The principal sensing device on this vehicle was taken to be a laser range finder. Both existing and original algorithms, ending with thresholding operations, were used to obtain the outlines of obstacles from the raw data of this laser scan. A theoretical analysis was carried out to show how a proper threshold value may be chosen. Computer simulations considered various mid-range boulders, for which the scheme was quite successful. The extension to other types of obstacles, such as craters, was considered. The special problems of bottom edge detection and scanning procedure are discussed.
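
    A minimal sketch of the general idea of extracting obstacle outlines from laser range data by thresholding range discontinuities follows; the gradient operator, threshold value, and array layout are illustrative assumptions rather than the algorithms used in the report.

```python
import numpy as np

def range_edges(range_image, threshold=0.3):
    """Mark obstacle edges in a laser range image by thresholding range jumps.

    range_image: 2-D array of ranges indexed by (elevation, azimuth).
    Returns a boolean mask that is True where the local range change
    exceeds the chosen threshold (metres), i.e. at candidate obstacle edges.
    """
    d_az = np.abs(np.diff(range_image, axis=1, prepend=range_image[:, :1]))
    d_el = np.abs(np.diff(range_image, axis=0, prepend=range_image[:1, :]))
    return np.maximum(d_az, d_el) > threshold

# toy scan: flat background with a single boulder-like range discontinuity
scan = np.full((8, 16), 10.0)
scan[2:5, 6:10] = 7.0          # nearer surface -> sharp range jump at its border
print(range_edges(scan).astype(int))
```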

  9. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, A.M.; Ward, C.R.; Jones, J.D.; Mallet, W.R.; Harpring, L.J.; Collins, M.X.; Anderson, E.K.

    1999-08-10

    A mobile robotic system is described that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot according to pre-programmed paths. The operators may revise the preselected movements of the robotic system by Ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds from 1/2 inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console. 4 figs.

  10. Small-scale soft-bodied robot with multimodal locomotion.

    PubMed

    Hu, Wenqi; Lum, Guo Zhan; Mastrangeli, Massimo; Sitti, Metin

    2018-02-01

    Untethered small-scale (from several millimetres down to a few micrometres in all dimensions) robots that can non-invasively access confined, enclosed spaces may enable applications in microfactories such as the construction of tissue scaffolds by robotic assembly, in bioengineering such as single-cell manipulation and biosensing, and in healthcare such as targeted drug delivery and minimally invasive surgery. Existing small-scale robots, however, have very limited mobility because they are unable to negotiate obstacles and changes in texture or material in unstructured environments. Of these small-scale robots, soft robots have greater potential to realize high mobility via multimodal locomotion, because such machines have higher degrees of freedom than their rigid counterparts. Here we demonstrate magneto-elastic soft millimetre-scale robots that can swim inside and on the surface of liquids, climb liquid menisci, roll and walk on solid surfaces, jump over obstacles, and crawl within narrow tunnels. These robots can transit reversibly between different liquid and solid terrains, as well as switch between locomotive modes. They can additionally execute pick-and-place and cargo-release tasks. We also present theoretical models to explain how the robots move. Like the large-scale robots that can be used to study locomotion, these soft small-scale robots could be used to study soft-bodied locomotion produced by small organisms.

  11. Small-scale soft-bodied robot with multimodal locomotion

    NASA Astrophysics Data System (ADS)

    Hu, Wenqi; Lum, Guo Zhan; Mastrangeli, Massimo; Sitti, Metin

    2018-02-01

    Untethered small-scale (from several millimetres down to a few micrometres in all dimensions) robots that can non-invasively access confined, enclosed spaces may enable applications in microfactories such as the construction of tissue scaffolds by robotic assembly, in bioengineering such as single-cell manipulation and biosensing, and in healthcare such as targeted drug delivery and minimally invasive surgery. Existing small-scale robots, however, have very limited mobility because they are unable to negotiate obstacles and changes in texture or material in unstructured environments. Of these small-scale robots, soft robots have greater potential to realize high mobility via multimodal locomotion, because such machines have higher degrees of freedom than their rigid counterparts. Here we demonstrate magneto-elastic soft millimetre-scale robots that can swim inside and on the surface of liquids, climb liquid menisci, roll and walk on solid surfaces, jump over obstacles, and crawl within narrow tunnels. These robots can transit reversibly between different liquid and solid terrains, as well as switch between locomotive modes. They can additionally execute pick-and-place and cargo-release tasks. We also present theoretical models to explain how the robots move. Like the large-scale robots that can be used to study locomotion, these soft small-scale robots could be used to study soft-bodied locomotion produced by small organisms.

  12. Control Algorithms for a Shape-shifting Tracked Robotic Vehicle Climbing Obstacles

    DTIC Science & Technology

    2008-12-01

    (Fragmentary DTIC record; only excerpt snippets are available.) The excerpts discuss robot behavioural skills and note that the Swiss Federal Institute of Technology is developing the shape-shifting robotic platform Octopus [6]. A figure caption lists shape-shifting robotic vehicles in different research labs: (a) Lurker, (b) Octopus, (c) NUGV, (d) Chaos, (e) STRV. The system is assumed stuck when the measured velocity magnitude falls below 0.01 m/s or the forward velocity v_x is negative; only forward movements are considered in this work.

  13. Time response for sensor sensed to actuator response for mobile robotic system

    NASA Astrophysics Data System (ADS)

    Amir, N. S.; Shafie, A. A.

    2017-11-01

    Time and performance are very important for a mobile robot completing the tasks given to achieve its ultimate goal. Tasks may need to be done within a time constraint to ensure smooth operation of a mobile robot, which can result in better performance. The main purpose of this research was to improve the performance of a mobile robot so that it can complete the tasks given within the time constraint. The problem to be solved is minimizing the time interval between sensor detection and actuator response. The research objective is to analyse the real-time operating system performance of the sensors and actuators on one-microcontroller and two-microcontroller configurations of a mobile robot. The task chosen for this research is line following with obstacle avoidance. Three runs were carried out for the task, and the time from sensor detection to actuator response was recorded. Overall, the results show that the two-microcontroller system has a better response time than the one-microcontroller system. The average difference in response time is important for improving the internal performance between the occurrence of a task, sensor detection, decision making, and actuator response of a mobile robot. This research helped to develop a mobile robot with better performance that can complete the task within the time constraint.

  14. Convolutional Neural Network-Based Embarrassing Situation Detection under Camera for Social Robot in Smart Homes

    PubMed Central

    Sheng, Weihua; Junior, Francisco Erivaldo Fernandes; Li, Shaobo

    2018-01-01

    Recent research has shown that the ubiquitous use of cameras and voice monitoring equipment in a home environment can raise privacy concerns and affect human mental health. This can be a major obstacle to the deployment of smart home systems for elderly or disabled care. This study uses a social robot to detect embarrassing situations. Firstly, we designed an improved neural network structure based on the You Only Look Once (YOLO) model to obtain feature information. By focusing on reducing area redundancy and computation time, we proposed a bounding-box merging algorithm based on region proposal networks (B-RPN), to merge the areas that have similar features and determine the borders of the bounding box. Thereafter, we designed a feature extraction algorithm based on our improved YOLO and B-RPN, called F-YOLO, for our training datasets, and then proposed a real-time object detection algorithm based on F-YOLO (RODA-FY). We implemented RODA-FY and compared models on our MAT social robot. Secondly, we considered six types of situations in smart homes, and developed training and validation datasets, containing 2580 and 360 images, respectively. Meanwhile, we designed three types of experiments with four types of test datasets composed of 960 sample images. Thirdly, we analyzed how a different number of training iterations affects our prediction estimation, and then we explored the relationship between recognition accuracy and learning rates. Our results show that our proposed privacy detection system can recognize designed situations in the smart home with an acceptable recognition accuracy of 94.48%. Finally, we compared the results among RODA-FY, Inception V3, and YOLO, which indicate that our proposed RODA-FY outperforms the other comparison models in recognition accuracy. PMID:29757211

  15. Convolutional Neural Network-Based Embarrassing Situation Detection under Camera for Social Robot in Smart Homes.

    PubMed

    Yang, Guanci; Yang, Jing; Sheng, Weihua; Junior, Francisco Erivaldo Fernandes; Li, Shaobo

    2018-05-12

    Recent research has shown that the ubiquitous use of cameras and voice monitoring equipment in a home environment can raise privacy concerns and affect human mental health. This can be a major obstacle to the deployment of smart home systems for elderly or disabled care. This study uses a social robot to detect embarrassing situations. Firstly, we designed an improved neural network structure based on the You Only Look Once (YOLO) model to obtain feature information. By focusing on reducing area redundancy and computation time, we proposed a bounding-box merging algorithm based on region proposal networks (B-RPN), to merge the areas that have similar features and determine the borders of the bounding box. Thereafter, we designed a feature extraction algorithm based on our improved YOLO and B-RPN, called F-YOLO, for our training datasets, and then proposed a real-time object detection algorithm based on F-YOLO (RODA-FY). We implemented RODA-FY and compared models on our MAT social robot. Secondly, we considered six types of situations in smart homes, and developed training and validation datasets, containing 2580 and 360 images, respectively. Meanwhile, we designed three types of experiments with four types of test datasets composed of 960 sample images. Thirdly, we analyzed how a different number of training iterations affects our prediction estimation, and then we explored the relationship between recognition accuracy and learning rates. Our results show that our proposed privacy detection system can recognize designed situations in the smart home with an acceptable recognition accuracy of 94.48%. Finally, we compared the results among RODA-FY, Inception V3, and YOLO, which indicate that our proposed RODA-FY outperforms the other comparison models in recognition accuracy.
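
    The record does not give the B-RPN algorithm itself; the sketch below only illustrates the generic step of merging region proposals whose areas overlap strongly, using intersection-over-union as the similarity measure. The function names and the threshold are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter) if inter else 0.0

def merge_boxes(boxes, iou_thresh=0.5):
    """Greedily merge overlapping proposals into their enclosing boxes."""
    merged = []
    for box in sorted(boxes, key=lambda r: -(r[2] - r[0]) * (r[3] - r[1])):
        for i, m in enumerate(merged):
            if iou(box, m) >= iou_thresh:
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged

print(merge_boxes([(10, 10, 50, 50), (15, 12, 55, 48), (100, 100, 120, 130)]))
```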

  16. Object classification for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Regensburger, Uwe; Graefe, Volker

    1991-03-01

    Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.

  17. Path planning for robotic truss assembly

    NASA Technical Reports Server (NTRS)

    Sanderson, Arthur C.

    1993-01-01

    A new Potential Fields approach to the robotic path planning problem is proposed and implemented. Our approach, which is based on one originally proposed by Munger, computes an incremental joint vector based upon attraction to a goal and repulsion from obstacles. By repetitively adding and computing these 'steps', it is hoped (but not guaranteed) that the robot will reach its goal. An attractive force exerted by the goal is found by solving for the minimum-norm solution to the linear Jacobian equation. A repulsive force between obstacles and the robot's links is used to avoid collisions. Its magnitude is inversely proportional to the distance. Together, these forces make the goal the global minimum potential point, but local minima can stop the robot from ever reaching that point. Our approach improves on a basic, potential field paradigm developed by Munger by using an active, adaptive field - what we will call a 'flexible' potential field. Active fields are stronger when objects move towards one another and weaker when they move apart. An adaptive field's strength is individually tailored to be just strong enough to avoid any collision. In addition to the local planner, a global planning algorithm helps the planner to avoid local field minima by providing subgoals. These subgoals are based on the obstacles which caused the local planner to fail. The best-first search algorithm A* is used for the graph search.
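
    A minimal sketch of the incremental joint-step computation described above: the attractive term is the minimum-norm (pseudoinverse) solution of the linear Jacobian equation for a Cartesian step toward the goal, and the repulsive term has magnitude inversely proportional to obstacle distance, mapped to joint space through the Jacobian transpose. The gains, step limit, and toy two-link arm are illustrative assumptions; the adaptive 'flexible' field strength and the A* subgoal search are not shown.

```python
import numpy as np

def joint_step(q, jacobian, x_tip, x_goal, obstacles,
               k_att=0.1, k_rep=0.05, max_step=0.05):
    """One incremental joint-space step of a potential-field planner.

    q: current joint vector; jacobian(q): task-space Jacobian at q;
    x_tip: current end-effector position; x_goal: goal position;
    obstacles: list of obstacle positions near the end effector.
    """
    J = jacobian(q)
    dq = np.linalg.pinv(J) @ (k_att * (x_goal - x_tip))   # attraction (min-norm)
    for obs in obstacles:
        diff = x_tip - obs
        d = np.linalg.norm(diff)
        if d > 1e-6:
            dq += J.T @ (k_rep / d * diff / d)            # repulsion ~ 1/d
    norm = np.linalg.norm(dq)
    return q + (dq if norm <= max_step else dq * max_step / norm)

# toy 2-link planar arm with unit link lengths (hypothetical example)
def planar_jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12], [c1 + c12, c12]])

q = np.array([0.3, 0.4])
tip = np.array([np.cos(q[0]) + np.cos(q.sum()), np.sin(q[0]) + np.sin(q.sum())])
print(joint_step(q, planar_jacobian, tip, np.array([0.5, 1.5]), [np.array([1.0, 1.0])]))
```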

  18. Autonomous mobile robot teams

    NASA Technical Reports Server (NTRS)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  19. Interactive-rate Motion Planning for Concentric Tube Robots.

    PubMed

    Torres, Luis G; Baykal, Cenk; Alterovitz, Ron

    2014-05-01

    Concentric tube robots may enable new, safer minimally invasive surgical procedures by moving along curved paths to reach difficult-to-reach sites in a patient's anatomy. Operating these devices is challenging due to their complex, unintuitive kinematics and the need to avoid sensitive structures in the anatomy. In this paper, we present a motion planning method that computes collision-free motion plans for concentric tube robots at interactive rates. Our method's high speed enables a user to continuously and freely move the robot's tip while the motion planner ensures that the robot's shaft does not collide with any anatomical obstacles. Our approach uses a highly accurate mechanical model of tube interactions, which is important since small movements of the tip position may require large changes in the shape of the device's shaft. Our motion planner achieves its high speed and accuracy by combining offline precomputation of a collision-free roadmap with online position control. We demonstrate our interactive planner in a simulated neurosurgical scenario where a user guides the robot's tip through the environment while the robot automatically avoids collisions with the anatomical obstacles.

  20. Control Of A Serpentine Robot For Inspection Tasks

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Colbaugh, Richard D.; Glass, Kristin L.

    1996-01-01

    Efficient, robust kinematic control scheme developed to control serpentine robot designed to inspect complex structure. Takes full advantage of multiple redundant degrees of freedom of robot to provide considerable dexterity for maneuvering through workspace cluttered with stationary obstacles at initially unknown positions. Control scheme produces slithering motion.

  1. Vision-based obstacle avoidance

    DOEpatents

    Galbraith, John [Los Alamos, NM

    2006-07-18

    A method for allowing a robot to avoid objects along a programmed path: first, a field of view for an electronic imager of the robot is established along a path where the electronic imager obtains the object location information within the field of view; second, a population coded control signal is then derived from the object location information and is transmitted to the robot; finally, the robot then responds to the control signal and avoids the detected object.

  2. Imparting protean behavior to mobile robots accomplishing patrolling tasks in the presence of adversaries.

    PubMed

    Curiac, Daniel-Ioan; Volosencu, Constantin

    2015-10-08

    Providing unpredictable trajectories for patrol robots is essential when coping with adversaries. In order to solve this problem we developed an effective approach based on the known protean behavior of individual prey animals: random zig-zag movement. The proposed bio-inspired method modifies the normal robot's path by incorporating sudden and irregular direction changes without jeopardizing the robot's mission. Such a tactic is aimed to confuse the enemy (e.g. a sniper), offering less time to acquire and retain sight alignment and sight picture. This idea is implemented by simulating a series of fictive, temporary obstacles that will randomly appear in the robot's field of view, deceiving the obstacle avoiding mechanism to react. The new general methodology is particularized by using the Arnold's cat map to obtain the timely random appearance and disappearance of the fictive obstacles. The viability of the proposed method is confirmed through an extensive simulation case study.
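
    For reference, Arnold's cat map on an n x n integer grid is the area-preserving chaotic map (x, y) -> (x + y mod n, x + 2y mod n). The sketch below uses it to produce a deterministic but hard-to-predict schedule of fictive obstacles; the seed, grid size, and interpretation of the cells are illustrative assumptions, not the paper's implementation.

```python
def cat_map(x, y, n=100):
    """One iteration of Arnold's cat map on an n x n integer grid."""
    return (x + y) % n, (x + 2 * y) % n

def fictive_obstacle_schedule(seed=(7, 23), steps=10, n=100):
    """Chaotic but deterministic sequence of grid cells; each visited cell can
    be read as the bearing/timing of a fictive obstacle that briefly appears
    in the robot's field of view, triggering an avoidance zig-zag."""
    x, y = seed
    cells = []
    for _ in range(steps):
        x, y = cat_map(x, y, n)
        cells.append((x, y))
    return cells

print(fictive_obstacle_schedule())
```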

  3. Vision based obstacle detection and grouping for helicopter guidance

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Chatterji, Gano

    1993-01-01

    Electro-optical sensors can be used to compute range to objects in the flight path of a helicopter. The computation is based on the optical flow/motion at different points in the image. The motion algorithms provide a sparse set of ranges to discrete features in the image sequence as a function of azimuth and elevation. For obstacle avoidance guidance and display purposes, this discrete set of ranges, varying from a few hundred to several thousand values, needs to be grouped into sets which correspond to objects in the real world. This paper presents a new method for object segmentation based on clustering the sparse range information provided by motion algorithms together with the spatial relation provided by the static image. The range values are initially grouped into clusters based on depth. Subsequently, the clusters are modified by using the K-means algorithm in the inertial horizontal plane and the minimum spanning tree algorithms in the image plane. The object grouping allows interpolation within a group and enables the creation of dense range maps. Researchers in robotics have used densely scanned sequences of laser range images to build three-dimensional representations of the outside world. Thus, modeling techniques developed for dense range images can be extended to sparse range images. The paper presents object segmentation results for a sequence of flight images.
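
    A rough sketch of the two grouping steps named above: initial clustering by depth, followed by a K-means refinement in the horizontal plane. The gap threshold, number of clusters, and data layout are assumptions for illustration, and the minimum-spanning-tree step in the image plane is omitted.

```python
import numpy as np

def depth_clusters(points, depth_gap=2.0):
    """Initial grouping: sort sparse range points by depth and start a new
    cluster wherever consecutive depths differ by more than depth_gap."""
    order = np.argsort(points[:, 2])            # points = (x, y, depth) rows
    clusters, current = [], [order[0]]
    for prev, idx in zip(order[:-1], order[1:]):
        if points[idx, 2] - points[prev, 2] > depth_gap:
            clusters.append(current)
            current = []
        current.append(idx)
    clusters.append(current)
    return clusters

def kmeans_refine(points, k=2, iters=10, seed=0):
    """Plain K-means in the horizontal (x, y) plane to split merged clusters."""
    xy = points[:, :2]
    rng = np.random.default_rng(seed)
    centres = xy[rng.choice(len(xy), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(xy[:, None] - centres[None], axis=2), axis=1)
        centres = np.array([xy[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return labels

pts = np.array([[0.0, 0.0, 10.0], [0.5, 0.0, 10.5], [5.0, 1.0, 11.0],
                [20.0, 3.0, 40.0], [21.0, 3.0, 41.0]])
print(depth_clusters(pts))
print(kmeans_refine(pts, k=2))
```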

  4. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  5. A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment

    PubMed Central

    Gao, Junyao; Zhao, Fangzhou; Liu, Yi

    2017-01-01

    This paper introduces a search-and-rescue robot system used for remote sensing of the underground coal mine environment, which is composed of an operating control unit and two mobile robots with explosion-proof and waterproof function. This robot system is designed to observe and collect information about the coal mine environment through remote control. Thus, this system can be regarded as a multifunction sensor, which realizes remote sensing. When the robot system detects danger, it will send out signals to warn rescuers to keep away. The robot consists of two gas sensors, two cameras, a two-way audio, a 1 km-long fiber-optic cable for communication and a mechanical explosion-proof manipulator. In particular, the manipulator is a novel explosion-proof manipulator for clearing obstacles; it has 3 degrees of freedom but is driven by two motors. Furthermore, the two robots can communicate in series for 2 km with the operating control unit. The development of the robot system may provide a reference for developing future search-and-rescue systems. PMID:29065560

  6. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, and tourism or training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of the indoor space, including the position and geometry of openings (both windows and doors) and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor space depicted by the laser scanner.

  7. Dynamic multisensor fusion for mobile robot navigation in an indoor environment

    NASA Astrophysics Data System (ADS)

    Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.

    2001-10-01

    This study is a preliminary step toward developing a multi-purpose, autonomous, robust carrier mobile robot to transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion (sonar, CCD camera and IR sensors) for a map-building mobile robot to navigate, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We give an explanation of the robot system architecture designed and implemented in this study and only a short review of existing techniques, since several recent thorough books and review papers already cover this topic. Instead we focus on the main results relevant to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. The paper first deals with the general principle of the navigation and guidance architecture, then with the detailed functions for recognizing and updating the environment, obstacle detection and motion assessment, together with the first results from the simulation runs.

  8. Tank-automotive robotics

    NASA Astrophysics Data System (ADS)

    Lane, Gerald R.

    1999-07-01

    To provide an overview of Tank-Automotive Robotics. The briefing will contain program overviews & inter-relationships and technology challenges of TARDEC managed unmanned and robotic ground vehicle programs. Specific emphasis will be placed on technology developments/approaches to achieve semi-autonomous operation and inherent chassis mobility features. Programs to be discussed include: DemoIII Experimental Unmanned Vehicle (XUV), Tactical Mobile Robotics (TMR), Intelligent Mobility, Commanders Driver Testbed, Collision Avoidance, International Ground Robotics Competition (IGRC). Specifically, the paper will discuss unique exterior/outdoor challenges facing the IGRC competing teams and the synergy created between the IGRC and ongoing DoD semi-autonomous Unmanned Ground Vehicle and DoT Intelligent Transportation System programs. Sensor and chassis approaches to meet the IGRC challenges and obstacles will be shown and discussed. Shortfalls in performance to meet the IGRC challenges will be identified.

  9. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation

    PubMed Central

    Masmoudi, Mohamed Slim; Masmoudi, Mohamed

    2016-01-01

    This paper describes the design and the implementation of a trajectory tracking controller using fuzzy logic for a mobile robot navigating in indoor environments. Most of the previous works used two independent controllers for navigation and avoiding obstacles. The main contribution of the paper can be summarized in the fact that we use only one fuzzy controller for navigation and obstacle avoidance. The mobile robot used is equipped with DC motors, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speed. To evaluate the performance of the intelligent navigation algorithms, different trajectories are used and simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation times and travelled path. PMID:27688748

  10. Mobile Robot Designed with Autonomous Navigation System

    NASA Astrophysics Data System (ADS)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place more requirements on them; one is that a robot should be capable of autonomous navigation and able to recognize the road. The common household sweeping robot, for example, can avoid obstacles, clean the ground, and automatically find its charging place; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a robot navigation scheme based on SLAM, which can build a map of a totally unfamiliar environment and, at the same time, locate the robot's own position within it, so as to achieve autonomous navigation.

  11. The Design of Artificial Intelligence Robot Based on Fuzzy Logic Controller Algorithm

    NASA Astrophysics Data System (ADS)

    Zuhrie, M. S.; Munoto; Hariadi, E.; Muslim, S.

    2018-04-01

    The Artificial Intelligence Robot is a wheeled robot driven by a DC motor that moves along a wall using an ultrasonic sensor as an obstacle detector. This study uses HC-SR04 ultrasonic sensors to measure the distance between the robot and the wall from the ultrasonic wave. The robot uses a Fuzzy Logic Controller to adjust the speed of the DC motor. When the ultrasonic sensor detects a certain distance, the sensor data is processed on an ATmega8 and then passed to an ATmega16. On the ATmega16, the sensor data is evaluated against the fuzzy rules to set the DC motor speed. The program used to adjust the speed of the DC motor is written in CodeVisionAVR (CVAVR). The readable distance of the ultrasonic sensor is 3 cm to 250 cm with a response time of 0.5 s. Testing the robot on walls with a setpoint of 9 cm to 10 cm produced an average error of -12% on the L-shaped wall, -8% on the T-shaped wall, -8% on the U-shaped wall, and -1% on the square wall.
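
    A small sketch of the two computations this record relies on: converting an HC-SR04 echo pulse width to a distance (half the out-and-back travel time at the speed of sound) and a crude triangular-membership fuzzy rule base around the 9-10 cm setpoint. The membership breakpoints, rule outputs, and defuzzification are illustrative assumptions, not the rules used on the robot.

```python
SPEED_OF_SOUND = 343.0   # m/s at roughly 20 degC

def hcsr04_distance_cm(echo_pulse_us):
    """Distance from an HC-SR04 echo pulse width: the pulse covers the
    out-and-back travel time, so divide by two before converting."""
    return echo_pulse_us * 1e-6 * SPEED_OF_SOUND / 2.0 * 100.0

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed_correction(distance_cm, setpoint=9.5):
    """Crude rule base: too close -> steer away (negative output),
    near the setpoint -> hold course, too far -> steer toward the wall."""
    err = distance_cm - setpoint
    near, ok, far = tri(err, -6, -3, 0), tri(err, -2, 0, 2), tri(err, 0, 3, 6)
    total = near + ok + far
    # weighted-average defuzzification over rule outputs (-1, 0, +1)
    return (near * -1.0 + ok * 0.0 + far * 1.0) / total if total else 0.0

print(hcsr04_distance_cm(580))                          # ~9.9 cm
print(fuzzy_speed_correction(hcsr04_distance_cm(580)))  # small correction
```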

  12. Integrated mobile-robot design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kortenkamp, D.; Huber, M.; Cohen, C.

    1993-08-01

    Ten mobile robots entered the AAAI '92 Robot Competition, held at last year's national conference. Carmel, the University of Michigan entry, won. The competition consisted of three stages. The first stage required roaming a 22×22-meter arena while avoiding static and dynamic obstacles; the second involved searching for and visiting 10 objects in the same arena. The obstacles were at least 1.5 meters apart, while the objects were spaced roughly evenly throughout the arena. Visiting was defined as moving to within two robot diameters of the object. The last stage was a timed race to visit three of the objects located earlier and return home. Since the first stage was primarily a subset of the second-stage requirements, and the third-stage implementation was very similar to that of the second, the authors focus here on the second stage. Carmel (Computer-Aided Robotics for Maintenance, Emergency, and Life support) is based on a commercially available Cybermotion K2A mobile-robot platform. It has a top speed of approximately 800 millimeters per second and moves on three synchronously driven wheels. For sensing, Carmel has a ring of 24 Polaroid sonar sensors and a single black-and-white charge-coupled-device camera mounted on a rotating table. Carmel has three processors: one controls the drive motors, one fires the sonar ring, and the third, a 486-based PC clone, executes all the high-level modules. The 486 also has a frame grabber for acquiring images. All computation and power are contained on-board.

  13. Enhancing patient freedom in rehabilitation robotics using gaze-based intention detection.

    PubMed

    Novak, Domen; Riener, Robert

    2013-06-01

    Several design strategies for rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.

  14. Multi-hop path tracing of mobile robot with multi-range image

    NASA Astrophysics Data System (ADS)

    Choudhury, Ramakanta; Samal, Chandrakanta; Choudhury, Umakanta

    2010-02-01

    It is well known that image processing depends heavily on the image representation technique. This paper aims to find the optimal path for a mobile robot in a specified area where obstacles are predefined and may be modified. Here the optimal path is represented using the quadtree method: the image is successively subdivided into quadrants, from which the quadtree is developed. In the quadtree, obstacle-free areas and partially filled areas are represented with different notations. Once the quadtree is built, the algorithm finds the optimal path by employing a neighbor-finding technique, with a view to moving the robot from the source to the destination. The algorithm traverses the entire tree and locates the common ancestor needed for the computation. The computation and the algorithm aim to ease the robot's ability to trace the optimal path with the help of adjacencies between neighboring nodes, determining such adjacencies in the horizontal, vertical, and diagonal directions. Efforts have been made to determine the movement between adjacent blocks in the quadtree, detect transitions between blocks of equal size, and finally generate the result.
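
    A minimal sketch of the successive-subdivision idea: a square occupancy grid is split into quadrants until every node is entirely obstacle-free or entirely filled, giving the free/partial/full labelling the record describes. The grid representation and labels are assumptions for illustration; the neighbour-finding and common-ancestor computations are not shown.

```python
import numpy as np

FREE, FULL, MIXED = 'free', 'full', 'mixed'

def build_quadtree(grid, x=0, y=0, size=None):
    """Recursively subdivide a square occupancy grid (1 = obstacle) into
    quadrants until each node is entirely free or entirely occupied."""
    size = size or grid.shape[0]
    block = grid[y:y + size, x:x + size]
    if not block.any():
        return {'label': FREE, 'x': x, 'y': y, 'size': size}
    if block.all():
        return {'label': FULL, 'x': x, 'y': y, 'size': size}
    half = size // 2
    return {'label': MIXED, 'x': x, 'y': y, 'size': size,
            'children': [build_quadtree(grid, x + dx, y + dy, half)
                         for dy in (0, half) for dx in (0, half)]}

grid = np.zeros((8, 8), dtype=int)
grid[2:4, 2:6] = 1                      # a rectangular obstacle
tree = build_quadtree(grid)
print(tree['label'], len(tree['children']))
```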

  15. Advanced Augmented White Cane with obstacle height and distance feedback.

    PubMed

    Pyun, Rosali; Kim, Yeongmi; Wespe, Pascal; Gassert, Roger; Schneller, Stefan

    2013-06-01

    The white cane is a widely used mobility aid that helps visually impaired people navigate the surroundings. While it reliably and intuitively extends the detection range of ground-level obstacles and drop-offs to about 1.2 m, it lacks the ability to detect trunk and head-level obstacles. Electronic Travel Aids (ETAs) have been proposed to overcome these limitations, but have found minimal adoption due to limitations such as low information content and low reliability thereof. Although existing ETAs extend the sensing range beyond that of the conventional white cane, most of them do not detect head-level obstacles and drop-offs, nor can they identify the vertical extent of obstacles. Furthermore, some ETAs work independent of the white cane, and thus reliable detection of surface textures and drop-offs is not provided. This paper introduces a novel ETA, the Advanced Augmented White Cane, which detects obstacles at four vertical levels and provides multi-sensory feedback. We evaluated the device in five blindfolded subjects through reaction time measurements following the detection of an obstacle, as well as through the reliability of dropoff detection. The results showed that our aid could help the user successfully detect an obstacle and identify its height, with an average reaction time of 410 msec. Drop-offs were reliably detected with an intraclass correlation > 0.95. This work is a first step towards a low-cost ETA to complement the functionality of the conventional white cane.

  16. Improved Collision-Detection Method for Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Leger, Chris

    2003-01-01

    An improved method has been devised for the computational prediction of a collision between (1) a robotic manipulator and (2) another part of the robot or an external object in the vicinity of the robot. The method is intended to be used to test commanded manipulator trajectories in advance so that execution of the commands can be stopped before damage is done. The method involves utilization of both (1) mathematical models of the robot and its environment constructed manually prior to operation and (2) similar models constructed automatically from sensory data acquired during operation. The representation of objects in this method is simpler and more efficient (with respect to both computation time and computer memory), relative to the representations used in most prior methods. The present method was developed especially for use on a robotic land vehicle (rover) equipped with a manipulator arm and a vision system that includes stereoscopic electronic cameras. In this method, objects are represented and collisions detected by use of a previously developed technique known in the art as the method of oriented bounding boxes (OBBs). As the name of this technique indicates, an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB method has been extended in this method to enable the approximate representation of cylindrical parts by use of octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs), as in the example of Figure 1. Unlike prior methods, the OBB/OBP method does not require any divisions or transcendental functions; this feature leads to greater robustness and numerical accuracy. The OBB/OBP method was selected for incorporation into the present method because it offers the best compromise between accuracy on the one hand and computational efficiency (and thus computational speed) on the other hand.
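
    For context, the standard OBB-versus-OBB overlap test used with oriented bounding boxes checks fifteen candidate separating axes (each box's three face normals plus the nine edge-edge cross products). The sketch below is that textbook separating-axis test, not NASA's OBB/OBP implementation; note that it relies only on multiplications, additions, and comparisons, consistent with the "no divisions or transcendental functions" property mentioned above.

```python
import numpy as np

def obb_overlap(ca, Ra, ea, cb, Rb, eb, eps=1e-9):
    """Separating-axis overlap test for two oriented bounding boxes.

    ca, cb: centres (3,); Ra, Rb: 3x3 matrices whose columns are the local
    axes; ea, eb: half-extents along those axes.  Returns True on overlap.
    """
    R = Ra.T @ Rb                       # B's axes expressed in A's frame
    t = Ra.T @ (cb - ca)                # centre offset in A's frame
    absR = np.abs(R) + eps              # epsilon guards near-parallel edges

    for i in range(3):                  # A's three face axes
        if abs(t[i]) > ea[i] + eb @ absR[i]:
            return False
    for j in range(3):                  # B's three face axes
        if abs(t @ R[:, j]) > ea @ absR[:, j] + eb[j]:
            return False
    for i in range(3):                  # nine edge-edge cross-product axes
        for j in range(3):
            ra = ea[(i + 1) % 3] * absR[(i + 2) % 3, j] + ea[(i + 2) % 3] * absR[(i + 1) % 3, j]
            rb = eb[(j + 1) % 3] * absR[i, (j + 2) % 3] + eb[(j + 2) % 3] * absR[i, (j + 1) % 3]
            lhs = abs(t[(i + 2) % 3] * R[(i + 1) % 3, j] - t[(i + 1) % 3] * R[(i + 2) % 3, j])
            if lhs > ra + rb:
                return False
    return True

# two axis-aligned unit boxes whose centres are 1.5 apart along x: they overlap
I = np.eye(3)
print(obb_overlap(np.zeros(3), I, np.ones(3), np.array([1.5, 0, 0]), I, np.ones(3)))
```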

  17. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance, because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for each criterion, the fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, the partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the robot's path and velocity toward the next waypoint. The proposed architecture is verified through comparison with the conventional weighted-sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
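
    As a rough illustration of how a fuzzy integral can fuse partial evaluations into one score per candidate gaze direction, the sketch below uses a Sugeno integral with a simple possibility-style fuzzy measure (the maximum of the degrees of consideration). The criteria weights and partial-evaluation values are assumptions for illustration, not the paper's measure or data.

    ```python
    # Sugeno fuzzy integral over four gaze-control criteria.
    def sugeno_integral(partial_evals, consideration):
        """partial_evals, consideration: dicts criterion -> value in [0, 1]."""
        crit = sorted(partial_evals, key=partial_evals.get, reverse=True)  # sort by decreasing h(x_i)
        best = 0.0
        for i in range(len(crit)):
            h_i = partial_evals[crit[i]]
            g_Ai = max(consideration[c] for c in crit[:i + 1])  # fuzzy measure of {x_1..x_i} (possibility measure)
            best = max(best, min(h_i, g_Ai))
        return best

    # Assumed degrees of consideration for each criterion.
    weights = {"map_confidence": 0.9, "waypoint": 0.6, "localization": 0.8, "obstacle": 1.0}

    candidates = {
        "look_ahead":       {"map_confidence": 0.4, "waypoint": 0.9, "localization": 0.5, "obstacle": 0.3},
        "look_at_obstacle": {"map_confidence": 0.7, "waypoint": 0.2, "localization": 0.4, "obstacle": 0.9},
    }
    scores = {name: sugeno_integral(h, weights) for name, h in candidates.items()}
    print(max(scores, key=scores.get), scores)   # choose the gaze direction with the best global evaluation
    ```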

  18. Path planning for mobile robot using the novel repulsive force algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Siyue; Yin, Guoqiang; Li, Xueping

    2018-01-01

    A new type of repulsive force algorithm is proposed in this paper to solve the local-minimum and unreachable-target problems of the classic Artificial Potential Field (APF) method. A Gaussian function of the distance between the robot and the target is added to the traditional repulsive force, solving the problem of the goal being unreachable when an obstacle is nearby; a variable coefficient is added to the repulsive force component to rescale the repulsive force, which solves the local-minimum problem that occurs when the robot, the obstacle, and the target point lie on the same line. The effectiveness of the algorithm is verified by simulation in MATLAB and on an actual mobile robot platform.
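
    A minimal numeric sketch of the idea is given below: the classic APF repulsion is multiplied by a Gaussian-shaped factor in robot-target distance so that repulsion vanishes as the robot nears the goal. The gains, Gaussian width, and exact functional form here are assumptions, not the paper's formulation.

    ```python
    import numpy as np

    def apf_force(q, goal, obstacles, k_att=1.0, k_rep=2.0, d0=1.0, sigma=0.5):
        q, goal = np.asarray(q, float), np.asarray(goal, float)
        d_goal = np.linalg.norm(goal - q)
        f = k_att * (goal - q)                       # attractive force toward the goal
        for obs in obstacles:
            diff = q - np.asarray(obs, float)
            d = np.linalg.norm(diff)
            if 0 < d < d0:                           # repulsion only inside the influence radius d0
                classic = k_rep * (1.0 / d - 1.0 / d0) / d**2
                shaping = 1.0 - np.exp(-d_goal**2 / (2 * sigma**2))   # Gaussian factor -> 0 at the goal
                f += classic * shaping * diff / d
        return f

    # Gradient-descent rollout of a point robot passing an obstacle on its way to the goal.
    q = np.array([0.0, 0.0])
    for _ in range(400):
        f = apf_force(q, goal=[2.0, 0.0], obstacles=[[1.0, 0.4]])
        q = q + 0.02 * f / max(1.0, np.linalg.norm(f))   # capped step for a well-behaved rollout
    print(q)   # ends close to the goal because the shaped repulsion decays there
    ```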

  19. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    PubMed

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

    We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye-arm-hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking a similar control system found in humans. We validate our model for visuomotor control of a humanoid robot.

  20. Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.

    PubMed

    Deng, Fucheng; Zhu, Xiaorui; He, Chao

    2017-09-13

    Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is that of high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting traversable region in the outdoors. In the proposed method, an appearance model based on multivariate Gaussian is quickly constructed from a sample region in the left image adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in the real-time navigation applications.
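
    A hedged sketch of the appearance-model step follows: fit a multivariate Gaussian to colors from a sample (presumed-traversable) region and label pixels as traversable when their Mahalanobis distance to the model is small. The sample-region selection via vanishing point and dominant borders and the self-supervised refinement are omitted, and the threshold and toy image are assumptions.

    ```python
    import numpy as np

    def fit_gaussian(pixels):
        """pixels: (N, 3) array of color values from the sample region."""
        mu = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)   # regularize the covariance
        return mu, np.linalg.inv(cov)

    def traversable_mask(image, mu, cov_inv, thresh=3.0):
        """image: (H, W, 3). Returns a boolean mask of traversable pixels."""
        diff = image.reshape(-1, 3) - mu
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)      # squared Mahalanobis distance
        return (d2 < thresh**2).reshape(image.shape[:2])

    # Toy example: a synthetic image whose bottom half matches the road model.
    rng = np.random.default_rng(0)
    road = rng.normal([100, 100, 100], 5, size=(2000, 3))
    mu, cov_inv = fit_gaussian(road)
    img = np.concatenate([np.full((10, 20, 3), 200.0),           # bright "non-road" top half
                          rng.normal(100, 5, size=(10, 20, 3))]) # road-like bottom half
    print(traversable_mask(img, mu, cov_inv).mean())             # about half the pixels flagged traversable
    ```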

  1. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    NASA Astrophysics Data System (ADS)

    Oda, Naoki; Kotani, Kentaro

    The paper describes a position and posture controller design based on the absolute position from an external PSD vision sensor for a redundant robot manipulator. The redundancy provides the capability to avoid obstacles while continuing given end-effector tasks under contact with a middle link of the manipulator. Under contact motion, the deformation due to joint torsion, obtained by comparing the internal and external position sensors, is actively suppressed by an internal/external position hybrid controller. The selection matrix of the hybrid loop is given as a function of the deformation, and the detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results with a 3-link planar redundant manipulator.

  2. Training in urological robotic surgery. Future perspectives.

    PubMed

    El Sherbiny, Ahmed; Eissa, Ahmed; Ghaith, Ahmed; Morini, Elena; Marzotta, Lucilla; Sighinolfi, Maria Chiara; Micali, Salvatore; Bianchi, Giampaolo; Rocco, Bernardo

    2018-01-01

    As robotics becomes more integrated into the medical field, robotic training is becoming more crucial in order to overcome the lack of experienced robotic surgeons. However, there are several obstacles facing the development of robotic training programs, such as the high cost of training and the increased operative time during the initial period of the learning curve, which in turn increases the operative cost. Robotic-assisted laparoscopic prostatectomy is the most commonly performed robotic surgery. Moreover, robotic surgery is becoming more popular among urologic oncologists and pediatric urologists. The need for a standardized and validated robotic training curriculum has grown along with the increasing number of urologic centers and institutes adopting robotic technology. Robotic training includes proctorship, mentorship or fellowship, telementoring, simulators and video training. In this chapter, we discuss the different training methods, how to evaluate robotic skills, the available robotic training curricula, and future perspectives.

  3. A Mobile Robot Sonar System with Obstacle Avoidance.

    DTIC Science & Technology

    1994-03-01

    By Patrick Gerard Byrne, March 1994. Thesis Advisor: Yutaka Kanayama. Approved for public release; distribution is unlimited. A point p is on a line L whose normal has an orientation a and whose distance from the origin is r (Figure 5). This method has an advantage in expressing ...

  4. Dual stage potential field method for robotic path planning

    NASA Astrophysics Data System (ADS)

    Singh, Pradyumna Kumar; Parida, Pramod Kumar

    2018-04-01

    Path planning is fundamental to all autonomous mobile robot systems. Various methods are used to optimize the path to be followed by an autonomous mobile robot. Artificial potential field-based path planning is one of the methods most used by researchers, and various algorithms have been proposed using the potential field approach. However, most of them encounter common problems while heading towards the goal or target, i.e., the local minima problem, zero-potential regions, complex-shaped obstacles, and the target-near-obstacle problem. In this paper we provide a new algorithm in which two types of potential functions are used one after another: the former is used to obtain the probable points and the latter to obtain the optimum path. In this algorithm we consider only static obstacles and a static goal.

  5. A Concept of the Differentially Driven Three Wheeled Robot

    NASA Astrophysics Data System (ADS)

    Kelemen, M.; Colville, D. J.; Kelemenová, T.; Virgala, I.; Miková, L.

    2013-08-01

    The paper deals with the concept of a differentially driven three-wheeled robot. The main task for the robot is to follow a navigation black line on white ground. The robot also contains anti-collision sensors for avoiding obstacles on the track. Students learn how to deal with signals from sensors and how to control DC motors. They work with the controller, develop the locomotion algorithm, and can attend a competition.

  6. A sub-target approach to the kinodynamic motion control of a wheeled mobile robot

    NASA Astrophysics Data System (ADS)

    Motonaka, Kimiko; Watanabe, Keigo; Maeyama, Shoichi

    2018-02-01

    A mobile robot with two independently driven wheels is popular, but it is difficult to stabilize by a continuous controller with a constant gain, due to its nonholonomic property. It is guaranteed that a nonholonomic controlled object can always be converged to an arbitrary point using a switching control method or a quasi-continuous control method based on an invariant manifold in a chained form. Building on this, the authors previously proposed a kinodynamic controller to converge the states of such a two-wheeled mobile robot to an arbitrary target position while avoiding obstacles, by combining the control based on the invariant manifold and the harmonic potential field (HPF). On the other hand, it was confirmed in the previous research that there are cases where the robot cannot avoid an obstacle because there is not enough space to converge the current state to the target state. In this paper, we propose a method that divides the final target position into several sub-target positions and moves the robot step by step, and simulation confirms that the robot can converge to the target position while avoiding obstacles using the proposed method.

  7. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state-of-the-art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.

  8. Learning for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point to point and controlled path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral and derivative controller (PID) and artificial neural network provides an adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently learning theories such as the adaptive critic have been proposed. In this type of learning a critic provides a grade to the controller of an action module such as a robot. The creative control process is used that is "beyond the adaptive critic." A

  9. Toward humanoid robots for operations in complex urban environments

    NASA Astrophysics Data System (ADS)

    Pratt, Jerry E.; Neuhaus, Peter; Johnson, Matthew; Carff, John; Krupp, Ben

    2010-04-01

    Many infantry operations in urban environments, such as building clearing, are extremely dangerous and difficult and often result in high casualty rates. Despite the fast pace of technological progress in many other areas, the tactics and technology deployed for many of these dangerous urban operations have not changed much in the last 50 years. While robots have been extremely useful for improvised explosive device (IED) detonation, under-vehicle inspection, surveillance, and cave exploration, there is still no fieldable robot that can operate effectively in cluttered streets and inside buildings. Developing a fieldable robot that can maneuver in complex urban environments is challenging due to narrow corridors, stairs, rubble, doors and cluttered doorways, and other obstacles. Typical wheeled and tracked robots have trouble getting through most of these obstacles. A bipedal humanoid is ideally shaped for many of these obstacles because its legs are long and skinny. Therefore it has the potential to step over large barriers, gaps, rocks, and steps, yet squeeze through narrow passageways, and through narrow doorways. By being able to walk with one foot directly in front of the other, humanoids also have the potential to walk over narrow "balance beam" style objects and can cross a narrow row of stepping stones. We describe some recent advances in humanoid robots, particularly recovery from disturbances, such as pushes and walking over rough terrain. Our disturbance recovery algorithms are based on the concept of Capture Points. An N-Step Capture Point is a point on the ground that a legged robot can step to in order to stop in N steps. The N-Step Capture Region is the set of all N-Step Capture Points. In order to walk without falling, a legged robot must step somewhere in the intersection between an N-Step Capture Region and the available footholds on the ground. We present results of push recovery using Capture Points on our humanoid robot M2V2.
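
    The instantaneous (1-step) capture point from the linear inverted pendulum model gives the flavor of the Capture Point concept used above; the N-step regions generalize it. The sketch below uses illustrative numbers, not data from M2V2.

    ```python
    import math

    def instantaneous_capture_point(x, xdot, z0, g=9.81):
        """x: CoM position (m), xdot: CoM velocity (m/s), z0: CoM height (m)."""
        omega = math.sqrt(g / z0)           # natural frequency of the linear inverted pendulum
        return x + xdot / omega             # ground point to step to in order to come to rest

    # A push gives the center of mass 0.4 m/s of forward velocity at 0.9 m height:
    print(instantaneous_capture_point(x=0.0, xdot=0.4, z0=0.9))   # roughly 0.12 m ahead of the CoM
    ```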

  10. A neuro-collision avoidance strategy for robot manipulators

    NASA Technical Reports Server (NTRS)

    Onema, Joel P.; Maclaunchlan, Robert A.

    1992-01-01

    The area of collision avoidance and path planning in robotics has received much attention in the research community. Our study centers on a combination of an artificial neural network paradigm with a motion planning strategy that ensures safe motion of the Articulated Two-Link Arm with Scissor Hand System relative to an object. Whenever an obstacle is encountered, the arm attempts to slide along the obstacle surface, thereby avoiding collision by means of the local tangent strategy and its artificial neural network implementation. This combination compensates for the inverse kinematics of a robot manipulator. Simulation results indicate that a neuro-collision avoidance strategy can be achieved by means of a learning local tangent method.

  11. Study on the intelligent decision making of soccer robot side-wall behavior

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaochuan; Shao, Guifang; Tan, Zhi; Li, Zushu

    2007-12-01

    The side-wall is a static obstacle in robot soccer games, and making reasonable use of it can improve a soccer robot's competitive ability. As a kind of artificial life, the soccer robot's side-wall processing strategy is influenced by many factors, such as the game state, field region, and the attacking and defending situation, each with a different degree of influence, so side-wall behavior selection is an intelligent selection process. From a human-simulation viewpoint, and based on the idea of side-wall processing priority [1], this paper builds a priority function for side-wall processing, constructs an action-prediction model for the side-wall obstacle, puts forward a side-wall processing strategy, and forms a side-wall behavior selection mechanism. A comparative experiment with and without the strategy shows that it improves the soccer robot's capability, is feasible and effective, and is a positive contribution to further study of soccer robots.

  12. Extensibility in local sensor based planning for hyper-redundant manipulators (robot snakes)

    NASA Technical Reports Server (NTRS)

    Choset, Howie; Burdick, Joel

    1994-01-01

    Partial Shape Modification (PSM) is a local sensor feedback method used for hyper-redundant robot manipulators, in which the redundancy is very large or infinite, such as in a robot snake. This redundancy enables local obstacle avoidance and end-effector placement in real time. Due to the large number of joints or actuators in a hyper-redundant manipulator, small displacement errors easily accumulate into large errors in the position of the tip relative to the base. The accuracy can be improved by a local sensor-based planning method in which sensors are distributed along the length of the hyper-redundant robot. This paper extends the local sensor-based planning strategy beyond the limitations of the fixed length of such a manipulator when its joint limits are met. This is achieved with an algorithm in which the length of the deforming part of the robot is variable. Thus, the robot's local avoidance of obstacles is improved through the enhancement of its extensibility.

  13. Dynamic traversal of high bumps and large gaps by a small legged robot

    NASA Astrophysics Data System (ADS)

    Gart, Sean; Winey, Nastasia; de La Tijera Obert, Rafael; Li, Chen

    Small animals encounter and negotiate diverse obstacles comparable in size to or larger than themselves. In recent experiments, we found that cockroaches can dynamically traverse bumps up to 4 times hip height and gaps up to 1 body length. To better understand the physics that governs these locomotor transitions, we studied a small six-legged robot negotiating high bumps and large gaps and compared it to animal observations. We found that the robot was able to traverse bumps as large as 1 hip height and gaps as wide as 0.5 body length. For the bump, the robot often climbed over to traverse when initial body yaw was small, but was often deflected laterally and failed to traverse when initial body yaw was large. A simple locomotion energy landscape model explained these observations. For the gap, traversal probability decreased with gap width, which was well explained by a simple Lagrangian model of a forward-moving rigid body falling over the gap edge. For both the bump and the gap, animal performance far exceeded that of the robot, likely due to their relatively higher running speeds and larger rotational oscillations prior to and during obstacle traversal. Differences between animal and robot obstacle negotiation behaviors revealed that animals used active strategies to overcome potential energy barriers.

  14. Continuous Shape Estimation of Continuum Robots Using X-ray Images.

    PubMed

    Lobaton, Edgar J; Fu, Jinghua; Torres, Luis G; Alterovitz, Ron

    2013-05-06

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot's shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints.

  15. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots, while avoiding obstacles of different shapes and sizes.
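
    A much-reduced sketch of this style of fuzzy fusion is shown below: two front distance readings fused by four Mamdani-style rules into left and right wheel velocities, with weighted-average defuzzification. The membership functions, rules, and output levels are illustrative assumptions; the paper's controller uses nine inputs and 24 rules.

    ```python
    def near(d, d_max=0.5):           # membership in "obstacle near" for a distance d (m)
        return max(0.0, min(1.0, (d_max - d) / d_max))

    def far(d, d_max=0.5):
        return 1.0 - near(d, d_max)

    def fuzzy_wheel_speeds(d_left, d_right, v_fwd=0.3, v_turn=0.1):
        rules = [
            # (firing strength,                 left wheel, right wheel)
            (min(far(d_left),  far(d_right)),   v_fwd,  v_fwd),   # clear ahead -> go straight
            (min(near(d_left), far(d_right)),   v_fwd,  v_turn),  # obstacle left -> turn right
            (min(far(d_left),  near(d_right)),  v_turn, v_fwd),   # obstacle right -> turn left
            (min(near(d_left), near(d_right)), -v_turn, v_turn),  # blocked -> rotate in place
        ]
        w = sum(r[0] for r in rules) or 1.0
        left  = sum(r[0] * r[1] for r in rules) / w     # weighted-average defuzzification
        right = sum(r[0] * r[2] for r in rules) / w
        return left, right

    print(fuzzy_wheel_speeds(0.2, 0.6))   # obstacle on the left -> left wheel faster, robot veers right
    ```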

  16. Ultrasonic Array for Obstacle Detection Based on CDMA with Kasami Codes

    PubMed Central

    Diego, Cristina; Hernández, Álvaro; Jiménez, Ana; Álvarez, Fernando J.; Sanz, Rebeca; Aparicio, Joaquín

    2011-01-01

    This paper presents the design of an ultrasonic array for obstacle detection based on Phased Array (PA) techniques, which steers the acoustic beam through the environment by electronic rather than mechanical means. The transmission of every element in the array has been encoded according to Code Division Multiple Access (CDMA), which allows multiple beams to be transmitted simultaneously. All these features together enable a parallel scanning system which not only improves the image rate but also achieves longer inspection distances in comparison with conventional PA techniques. PMID:22247675
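
    The CDMA principle can be sketched as follows: each array element transmits its own binary code, the echoes overlap at the receiver, and per-element matched filtering (cross-correlation) recovers each element's echo delay. Random ±1 codes stand in here for the Kasami sequences, and the code length, delays, and noise level are toy values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_elements, code_len, n_samples = 3, 63, 400
    codes = rng.choice([-1.0, 1.0], size=(n_elements, code_len))   # stand-ins for Kasami codes

    # Simulated received signal: each element's echo returns with its own delay, plus noise.
    true_delays = [50, 120, 200]
    rx = rng.normal(0, 0.5, n_samples)
    for code, d in zip(codes, true_delays):
        rx[d:d + code_len] += code

    # Matched filter per element: the peak of the cross-correlation gives that element's echo delay.
    for i, code in enumerate(codes):
        corr = np.correlate(rx, code, mode="valid")
        print(f"element {i}: estimated delay {int(np.argmax(corr))}, true {true_delays[i]}")
    ```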

  17. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  18. Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits

    NASA Astrophysics Data System (ADS)

    Snider, Ross K.; Arathorn, David W.

    2006-05-01

    A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.

  19. Control of autonomous robot using neural networks

    NASA Astrophysics Data System (ADS)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  20. JPRS Report, Science & Technology, Japan, 4th Intelligent Robots Symposium, Volume 2

    DTIC Science & Technology

    1989-03-16

    Topics include accidents caused by strikes by robots, a quantitative model for safety evaluation, and evaluations of actual systems. Listed papers include "Mobile Robot Position Referencing Using Map-Based Vision Systems," "Safety Evaluation of Man-Robot System," and "Fuzzy Path Pattern of Automatic ...". ...camera are made after the robot stops to prevent damage from occurring through obstacle interference. The position of the camera is indicated on the ...

  1. Small, Lightweight Inspection Robot With 12 Degrees Of Freedom

    NASA Technical Reports Server (NTRS)

    Lee, Thomas S.; Ohm, Timothy R.; Hayati, Samad

    1996-01-01

    Small serpentine robot weighs only 6 lbs. and has link diameter of 1.5 in. Designed to perform inspections. Multiple degrees of freedom enable it to reach around obstacles and through small openings into simple or complexly shaped confined spaces, to positions where it would be difficult or impossible to perform inspections by other means. Fiber-optic borescope incorporated into robot arm, with inspection tip of borescope located at tip of arm. Borescope both conveys light along robot arm to illuminate scene inspected at tip and conveys image of scene back along robot arm to external imaging equipment.

  2. Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots

    NASA Astrophysics Data System (ADS)

    Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-09-01

    Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs, each comprising a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to the environment. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, that method does not ensure the existence of joint angles that achieve the reference wheel positions calculated by the MPC. In this study, we propose a model predictive control considering the reachable ranges of the wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. Thus, we formulate the MPC as a quadratic problem with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. Through the MPC optimization, the reference wheel positions are calculated, while each joint angle is determined by inverse kinematics. By considering the reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
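
    A hedged, single-wheel, single-step sketch of the constrained-QP flavor of this scheme follows: track a reference wheel position while keeping it inside a convex trapezoid (the reachable range) expressed as linear inequalities A x <= b. SciPy's SLSQP solver, the trapezoid vertices, and the reference point are assumptions; the actual controller optimizes over a prediction horizon and all four wheels.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Reachable range: a convex trapezoid, listed in counterclockwise order.
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.8, 0.5], [0.2, 0.5]])

    def halfspaces(poly):
        """Return A, b with A @ x <= b describing a convex, counterclockwise polygon."""
        A, b = [], []
        for p, q in zip(poly, np.roll(poly, -1, axis=0)):
            edge = q - p
            n = np.array([edge[1], -edge[0]])       # outward normal for CCW ordering
            A.append(n)
            b.append(n @ p)
        return np.array(A), np.array(b)

    A, b = halfspaces(verts)
    ref = np.array([1.2, 0.3])                      # desired wheel position (outside the trapezoid)

    res = minimize(lambda x: np.sum((x - ref)**2), x0=verts.mean(axis=0),
                   method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}])
    print(res.x)   # closest reachable wheel position, clipped to the trapezoid boundary
    ```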

  3. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    NASA Astrophysics Data System (ADS)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. Once the

  4. Anticipatory detection of turning in humans for intuitive control of robotic mobility assistance.

    PubMed

    Farkhatdinov, Ildar; Roehri, Nicolas; Burdet, Etienne

    2017-09-26

    Many wearable lower-limb robots for walking assistance have been developed in recent years. However, it remains unclear how they can be commanded in an intuitive and efficient way by their user. In particular, providing robotic assistance to neurologically impaired individuals in turning remains a significant challenge. The control should be safe to the users and their environment, yet yield sufficient performance and enable natural human-machine interaction. Here, we propose using the head and trunk anticipatory behaviour in order to detect the intention to turn in a natural, non-intrusive way, and use it for triggering turning movement in a robot for walking assistance. We therefore study head and trunk orientation during locomotion of healthy adults, and investigate upper body anticipatory behaviour during turning. The collected walking and turning kinematics data are clustered using the k-means algorithm, and cross-validation tests and the k-nearest neighbours method are used to evaluate the performance of turning detection during locomotion. Tests with seven subjects exhibited accurate turning detection. The head anticipated turning by more than 400-500 ms on average across all subjects. Overall, the proposed method detected turning 300 ms after its initiation and 1230 ms before the turning movement was completed. Using head anticipatory behaviour enabled turning to be detected about 100 ms faster, compared to turning detection using only pelvis orientation measurements. Finally, it was demonstrated that the proposed turning detection can improve the quality of human-robot interaction by improving the control accuracy and transparency.
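
    The pattern-recognition pipeline described above can be sketched as clustering yaw-feature windows with k-means and then classifying new windows with k-nearest neighbours. The scikit-learn calls and the synthetic (head yaw, trunk yaw) features below are assumptions standing in for the recorded kinematics.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    straight = rng.normal([0.0, 0.0], 2.0, size=(100, 2))     # small yaw angles (deg) while walking straight
    turning  = rng.normal([25.0, 10.0], 5.0, size=(100, 2))   # head yaw leads trunk yaw before a turn
    X = np.vstack([straight, turning])

    # Unsupervised step: discover the two behaviour clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Supervised step: kNN trained on the clustered data classifies new windows online.
    clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
    print(clf.predict([[22.0, 8.0], [1.0, -0.5]]))   # the first sample falls in the turning cluster
    ```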

  5. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models

    PubMed Central

    2018-01-01

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in ‘Internet of Things’ (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique and an error-based prediction model library composed of a multilayer perceptron neural network, k-nearest neighbors, and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving-assistance user scenario connecting a five-LiDAR sensor network is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds. PMID:29748521

  6. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models.

    PubMed

    Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E

    2018-05-10

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique and an error-based prediction model library composed of a multilayer perceptron neural network, k-nearest neighbors, and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving-assistance user scenario connecting a five-LiDAR sensor network is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
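
    The reinforcement-learning step can be illustrated with a toy tabular Q-learning loop over "how many LiDAR sensors to activate," where the reward trades an assumed detection-reliability curve against a per-sensor cost. The reliability values, cost, and learning parameters are illustrative, not taken from the paper.

    ```python
    import random

    reliability = {1: 0.70, 2: 0.82, 3: 0.90, 4: 0.92, 5: 0.94}   # assumed reliability vs. sensor count
    cost_per_sensor = 0.03
    actions = [-1, 0, +1]                                          # remove / keep / add a sensor
    Q = {(s, a): 0.0 for s in reliability for a in actions}

    alpha, gamma, eps = 0.2, 0.9, 0.2
    state = 1
    random.seed(0)
    for _ in range(5000):
        a = random.choice(actions) if random.random() < eps else max(actions, key=lambda x: Q[(state, x)])
        nxt = min(5, max(1, state + a))
        r = reliability[nxt] - cost_per_sensor * nxt               # reward for the resulting configuration
        Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in actions) - Q[(state, a)])
        state = nxt

    best = {s: max(actions, key=lambda a: Q[(s, a)]) for s in reliability}
    print(best)   # greedy policy: typically drives the sensor count toward three sensors
    ```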

  7. Human-like robots for space and hazardous environments

    NASA Technical Reports Server (NTRS)

    Cogley, Allen; Gustafson, David; White, Warren; Dyer, Ruth; Hampton, Tom (Editor); Freise, Jon (Editor)

    1990-01-01

    The three year goal for this NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of rough terrain crossing, traversing human made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to evidence considerable decision making ability, navigation and path planning skills. These goals came from the concept that the robot should have the abilities of both a planetary rover and a hazardous waste site scout.

  8. Human-like robots for space and hazardous environments

    NASA Astrophysics Data System (ADS)

    Cogley, Allen; Gustafson, David; White, Warren; Dyer, Ruth; Hampton, Tom; Freise, Jon

    The three year goal for this NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of rough terrain crossing, traversing human made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to evidence considerable decision making ability, navigation and path planning skills. These goals came from the concept that the robot should have the abilities of both a planetary rover and a hazardous waste site scout.

  9. Event detection and localization for small mobile robots using reservoir computing.

    PubMed

    Antonelo, E A; Schrauwen, B; Stroobandt, D

    2008-08-01

    Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system, which operates at the edge of stability, where only a linear static readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks that are based solely on a small amount of low-range, high-noise sensory data. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated in both a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
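
    A minimal echo-state-network sketch of the RC recipe follows: a fixed random reservoir driven by range-sensor inputs, with only a linear readout trained by ridge regression. The reservoir size, scaling, synthetic sensor stream, and toy "event" target are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, T = 4, 100, 1000
    Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # spectral radius below 1 (edge of stability)

    u = rng.uniform(0, 1, (T, n_in))                      # toy distance-sensor stream
    y = (u[:, 0] < 0.2).astype(float)                     # toy "event": the front sensor reads very close

    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):                                    # run the fixed reservoir, no training here
        x = np.tanh(Win @ u[t] + W @ x)
        states[t] = x

    # Train only the linear readout (ridge regression in closed form).
    lam = 1e-3
    Wout = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ y)
    pred = states @ Wout
    print(np.mean((pred > 0.5) == y.astype(bool)))        # event-detection accuracy on the training stream
    ```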

  10. Unified Approach To Control Of Motions Of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Improved computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Present scheme similar to one described in "Coordinated Control of Mobile Robotic Manipulators" (NPO-19109). Both schemes based on configuration-control formalism. Present one incorporates explicit distinction between holonomic and nonholonomic constraints. Several other prior articles in NASA Tech Briefs discussed aspects of configuration-control formalism. These include "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes with Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  11. Embedded mobile farm robot for identification of diseased plants

    NASA Astrophysics Data System (ADS)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used on farms for identification of diseased plants. It puts forth two of the major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, the robot mechanical assembly, a camera and infrared sensors has been used. A Mini2440 has been used as the controller, on which an embedded Linux OS (operating system) is implemented.

  12. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, and the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track with positive results, which show that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  13. Human-like robots for space and hazardous environments

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The three year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to evidence considerable decision making ability, navigation, and path planning skills.

  14. Human-like robots for space and hazardous environments

    NASA Astrophysics Data System (ADS)

    The three year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to evidence considerable decision making ability, navigation, and path planning skills.

  15. Soft Robots: Manipulation, Mobility, and Fast Actuation

    NASA Astrophysics Data System (ADS)

    Shepherd, Robert; Ilievski, Filip; Choi, Wonjae; Stokes, Adam; Morin, Stephen; Mazzeo, Aaron; Kramer, Rebecca; Majidi, Carmel; Wood, Rob; Whitesides, George

    2012-02-01

    Material innovation will be a key feature in the next generation of robots. A simple, pneumatically powered actuator composed of only soft-elastomers can perform the function of a complex arrangement of mechanical components and electric motors. This talk will focus on soft-lithography as a simple method to fabricate robots--composed of exclusively soft materials (elastomeric polymers). These robots have sophisticated capabilities: a gripper (with no electrical sensors) can manipulate delicate and irregularly shaped objects and a quadrupedal robot can walk to an obstacle (a gap smaller than its walking height) then shrink its body and squeeze through the gap using an undulatory gait. This talk will also introduce a new method of rapidly actuating soft robots. Using this new method, a robot can be caused to jump more than 30 times its height in under 200 milliseconds.

  16. Dynamic traversal of large gaps by insects and legged robots reveals a template.

    PubMed

    Gart, Sean W; Yan, Changxin; Othayoth, Ratan; Ren, Zhiyi; Li, Chen

    2018-02-02

    It is well known that animals can use neural and sensory feedback via vision, tactile sensing, and echolocation to negotiate obstacles. Similarly, most robots use deliberate or reactive planning to avoid obstacles, which relies on prior knowledge or high-fidelity sensing of the environment. However, during dynamic locomotion in complex, novel, 3D terrains, such as a forest floor and building rubble, sensing and planning suffer bandwidth limitation and large noise and are sometimes even impossible. Here, we study rapid locomotion over a large gap (a simple, ubiquitous obstacle) to begin to discover the general principles of the dynamic traversal of large 3D obstacles. We challenged the discoid cockroach and an open-loop six-legged robot to traverse a large gap of varying length. Both the animal and the robot could dynamically traverse a gap as large as one body length by bridging the gap with its head, but traversal probability decreased with gap length. Based on these observations, we developed a template that accurately captured body dynamics and quantitatively predicted traversal performance. Our template revealed that a high approach speed, initial body pitch, and initial body pitch angular velocity facilitated dynamic traversal, and successfully predicted a new strategy for using body pitch control that increased the robot's maximal traversal gap length by 50%. Our study established the first template of dynamic locomotion beyond planar surfaces, and is an important step in expanding terradynamics into complex 3D terrains.

  17. Induced vibrations facilitate traversal of cluttered obstacles

    NASA Astrophysics Data System (ADS)

    Thoms, George; Yu, Siyuan; Kang, Yucheng; Li, Chen

    When negotiating cluttered terrains such as grass-like beams, cockroaches and legged robots with rounded body shapes most often rolled their bodies to traverse narrow gaps between beams. Recent locomotion energy landscape modeling suggests that this locomotor pathway overcomes the lowest potential energy barriers. Here, we tested the hypothesis that body vibrations induced by intermittent leg-ground contact facilitate obstacle traversal by allowing exploration of the locomotion energy landscape to find this lowest-barrier pathway. To mimic a cockroach or legged robot pushing against two adjacent blades of grass, we developed an automated robotic system to move an ellipsoidal body into two adjacent beams, and varied body vibrations by controlling an oscillation actuator. A novel gyroscope mechanism allowed the body to freely rotate in response to interaction with the beams, and an IMU and cameras recorded the motion of the body and beams. We discovered that body vibrations facilitated body rolling, significantly increasing traversal probability and reducing traversal time (P < 0.0001, ANOVA). Traversal probability increased, and traversal time decreased, with beam separation. These results confirmed our hypothesis and support the plausibility of locomotion energy landscapes for understanding the formation of locomotor pathways in complex 3-D terrains.

  18. Occupancy change detection system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-01

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes instructions for producing an occupancy grid map of an environment around the robot, scanning the environment to generate a current obstacle map relative to a current robot position, and converting the current obstacle map to a current occupancy grid map. The instructions also include processing each grid cell in the occupancy grid map. Within the processing of each grid cell, the instructions include comparing each grid cell in the occupancy grid map to a corresponding grid cell in the current occupancy grid map. For grid cells with a difference, the instructions include defining a change vector for each changed grid cell, wherein the change vector includes a direction from the robot to the changed grid cell and a range from the robot to the changed grid cell.
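
    The comparison step described in this record can be sketched as differencing two occupancy grids and emitting, for each changed cell, a change vector (bearing and range from the robot). The grid size, resolution, and robot pose below are illustrative, and this is not the patented implementation itself.

    ```python
    import numpy as np

    def change_vectors(prev_grid, curr_grid, robot_cell, cell_size=0.1):
        """Grids are 2-D arrays of 0 (free) / 1 (occupied). Returns a list of
        (cell, direction_rad, range_m) for every cell whose occupancy changed."""
        changes = []
        for cell in zip(*np.nonzero(prev_grid != curr_grid)):
            d = np.array(cell, float) - np.array(robot_cell, float)
            changes.append((cell,
                            float(np.arctan2(d[1], d[0])),            # direction from robot to changed cell
                            float(np.linalg.norm(d) * cell_size)))    # range in metres
        return changes

    prev = np.zeros((6, 6), int)
    curr = prev.copy()
    curr[4, 5] = 1                 # something new appeared in this cell since the stored map
    print(change_vectors(prev, curr, robot_cell=(0, 0)))
    ```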

  19. A biologically inspired neural net for trajectory formation and obstacle avoidance.

    PubMed

    Glasius, R; Komoda, A; Gielen, S C

    1996-06-01

    In this paper we present a biologically inspired two-layered neural network for trajectory formation and obstacle avoidance. The two topographically ordered neural maps consist of analog neurons having continuous dynamics. The first layer, the sensory map, receives sensory information and builds up an activity pattern which contains the optimal solution (i.e. shortest path without collisions) for any given set of current position, target positions and obstacle positions. Targets and obstacles are allowed to move, in which case the activity pattern in the sensory map will change accordingly. The time evolution of the neural activity in the second layer, the motor map, results in a moving cluster of activity, which can be interpreted as a population vector. Through the feedforward connections between the two layers, input of the sensory map directs the movement of the cluster along the optimal path from the current position of the cluster to the target position. The smooth trajectory is the result of the intrinsic dynamics of the network only. No supervisor is required. The output of the motor map can be used for direct control of an autonomous system in a cluttered environment or for control of the actuators of a biological limb or robot manipulator. The system is able to reach a target even in the presence of an external perturbation. Computer simulations of a point robot and a multi-joint manipulator illustrate the theory.

  20. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.

  1. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790

  2. Autonomous robot for detecting subsurface voids and tunnels using microgravity

    NASA Astrophysics Data System (ADS)

    Wilson, Stacy S.; Crawford, Nicholas C.; Croft, Leigh Ann; Howard, Michael; Miller, Stephen; Rippy, Thomas

    2006-05-01

    Tunnels have been used to evade security of defensive positions both during times of war and peace for hundreds of years. Tunnels are presently being built under the Mexican Border by drug smugglers and possibly terrorists. Several have been discovered at the border crossing at Nogales near Tucson, Arizona, along with others at other border towns. During this war on terror, tunnels under the Mexican Border pose a significant threat for the security of the United States. It is also possible that terrorists will attempt to tunnel under strategic buildings and possibly discharge explosives. The Center for Cave and Karst Study (CCKS) at Western Kentucky University has a long and successful history of determining the location of caves and subsurface voids using microgravity technology. Currently, the CCKS is developing a remotely controlled robot which will be used to locate voids underground. The robot will be a remotely controlled vehicle that will use microgravity and GPS to accurately detect and measure voids below the surface. It is hoped that this robot will also be used in military applications to locate other types of voids underground such as tunnels and bunkers. It is anticipated that the robot will be able to function up to a mile from the operator. This paper will describe the construction of the robot and the use of microgravity technology to locate subsurface voids with the robot.

  3. Laser development for optimal helicopter obstacle warning system LADAR performance

    NASA Astrophysics Data System (ADS)

    Yaniv, A.; Krupkin, V.; Abitbol, A.; Stern, J.; Lurie, E.; German, A.; Solomonovich, S.; Lubashitz, B.; Harel, Y.; Engart, S.; Shimoni, Y.; Hezy, S.; Biltz, S.; Kaminetsky, E.; Goldberg, A.; Chocron, J.; Zuntz, N.; Zajdman, A.

    2005-04-01

    Low-lying obstacles present immediate danger to both military and civilian helicopters performing low-altitude flight missions. A LADAR obstacle detection system is the natural solution for enhancing helicopter safety and improving the pilot's situation awareness. Elop is currently developing an advanced Surveillance and Warning Obstacle Ranging and Display (SWORD) system for the Israeli Air Force. Several key factors and new concepts have contributed to system optimization. These include an adaptive FOV, data memorization, autonomous obstacle detection and warning algorithms, and the use of an agile laser transmitter. In the present work we describe the laser design and performance and discuss some of the experimental results. Our eye-safe laser is characterized by its pulse energy, repetition rate and pulse length agility. By dynamically controlling these parameters, we are able to locally optimize the system's obstacle detection range and scan density in accordance with the helicopter's instantaneous maneuver.

  4. Tele-operated search robot for human detection using histogram of oriented objects

    NASA Astrophysics Data System (ADS)

    Cruz, Febus Reidj G.; Avendaño, Glenn O.; Manlises, Cyrel O.; Avellanosa, James Jason G.; Abina, Jyacinth Camille F.; Masaquel, Albert M.; Siapno, Michael Lance O.; Chung, Wen-Yaw

    2017-02-01

    Disasters such as typhoons, tornadoes, and earthquakes are inevitable. Aftermaths of these disasters include missing people. Using robots with human detection capabilities to locate missing people can dramatically reduce the harm and risk to those who work in such circumstances. This study aims to: design and build a tele-operated robot; implement in MATLAB an algorithm for the detection of humans; and create a database of human identification based on various positions, angles, light intensities, and distances from which humans can be identified. Different light intensities were produced using Photoshop to simulate smoke, dust, and water-drop conditions. After processing the image, the system indicates whether or not a human is detected. Testing with covered bodies was also conducted to assess the algorithm's robustness. Based on the results, the algorithm can detect humans with the full body shown. For upright and lying positions, detection is possible from 8 feet to 20 feet. For the sitting position, detection is possible from 2 feet to 20 feet, with slight variances in results because of different lighting conditions. At distances greater than 20 feet, no humans can be detected or false negatives occur. For covered bodies, the algorithm can detect humans under the given circumstances. In all three positions, humans can be detected from 0 degrees to 180 degrees under normal, smoke, dust, and water-droplet conditions. This study designed and built a tele-operated robot with a MATLAB algorithm that detects humans with an overall precision of 88.30%, and a database was created for human identification under various conditions.
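
    The MATLAB detector described above relies on histograms of oriented gradients; a minimal Python/OpenCV analogue using the library's built-in HOG pedestrian detector is sketched below. It is a stand-in for the paper's pipeline, and the window stride, padding, and scale values are illustrative assumptions.

    ```python
    import cv2

    # OpenCV's built-in HOG + linear SVM pedestrian detector, used here as a
    # stand-in for the MATLAB pipeline described above.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def detect_humans(image_path):
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(image_path)
        # Multi-scale sliding-window detection; returns bounding boxes and scores.
        boxes, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                              padding=(8, 8), scale=1.05)
        return [(x, y, w, h) for (x, y, w, h) in boxes]

    # e.g. detect_humans("scene.jpg") -> list of (x, y, width, height) boxes
    ```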

  5. Metalevel programming in robotics: Some issues

    NASA Technical Reports Server (NTRS)

    Kumarn, A.; Parameswaran, N.

    1987-01-01

    Computing in robotics has two important requirements: efficiency and flexibility. Algorithms for robot actions are implemented usually in procedural languages such as VAL and AL. But, since their excessive bindings create inflexible structures of computation, it is proposed that Logic Programming is a more suitable language for robot programming due to its non-determinism, declarative nature, and provision for metalevel programming. Logic Programming, however, results in inefficient computations. As a solution to this problem, researchers discuss a framework in which controls can be described to improve efficiency. They have divided controls into: (1) in-code and (2) metalevel and discussed them with reference to selection of rules and dataflow. Researchers illustrated the merit of Logic Programming by modelling the motion of a robot from one point to another avoiding obstacles.

  6. From the laboratory to the soldier: providing tactical behaviors for Army robots

    NASA Astrophysics Data System (ADS)

    Knichel, David G.; Bruemmer, David J.

    2008-04-01

    The Army Future Combat System (FCS) Operational Requirement Document has identified a number of advanced robot tactical behavior requirements to enable the Future Brigade Combat Team (FBCT). The FBCT advanced tactical behaviors include Sentinel Behavior, Obstacle Avoidance Behavior, and Scaled Levels of Human-Machine Control Behavior. The U.S. Army Training and Doctrine Command (TRADOC) Maneuver Support Center (MANSCEN) has also documented a number of robotic behavior requirements for the Army non-FCS forces such as the Infantry Brigade Combat Team (IBCT), Stryker Brigade Combat Team (SBCT), and Heavy Brigade Combat Team (HBCT). The general categories of useful robot tactical behaviors include Ground/Air Mobility behaviors, Tactical Mission behaviors, Manned-Unmanned Teaming behaviors, and Soldier-Robot Interface behaviors. Many DoD research and development centers are developing the components necessary for artificial tactical behaviors for ground and air robots, including the Army Research Laboratory (ARL), U.S. Army Research, Development and Engineering Command (RDECOM), Space and Naval Warfare (SPAWAR) Systems Center, US Army Tank-Automotive Research, Development and Engineering Center (TARDEC), and non-DoD labs such as the Department of Energy (DOE). With the support of the Joint Ground Robotics Enterprise (JGRE) through DoD and non-DoD labs, the Army Maneuver Support Center has recently concluded successful field trials of ground and air robots with specialized tactical behaviors and sensors to enable semi-autonomous detection, reporting, and marking of explosive hazards, including Improvised Explosive Devices (IED) and landmines. A specific goal of this effort was to assess how collaborative behaviors for multiple unmanned air and ground vehicles can reduce risks to Soldiers and increase efficiency for on- and off-route explosive hazard detection, reporting, and marking. This paper discusses experimental results achieved with a robotic countermine system

  7. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared, and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and produce inaccurate readings. Therefore, sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot’s wheels; and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles of different shapes and sizes. PMID:26712766
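
    A minimal, hand-rolled illustration of the fuzzy-fusion idea (distance readings fuzzified, a few rules, and defuzzified wheel speeds) is sketched below. It collapses the paper's nine inputs and 24 rules into three sector distances and two rules, and all membership shapes, thresholds, and gains are assumptions.

    ```python
    def trimf(x, a, b, c):
        """Triangular membership function."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_avoidance(left, front, right, v_max=0.2):
        """Minimal fuzzy-fusion sketch (not the paper's rule base): distances
        in metres from three sectors are fused into left/right wheel speeds."""
        near_f = trimf(front, -0.1, 0.0, 0.5)   # degree to which front obstacle is "near"
        near_l = trimf(left, -0.1, 0.0, 0.5)
        near_r = trimf(right, -0.1, 0.0, 0.5)
        # Rule 1: front near -> turn toward the more open side.
        # Rule 2: side near  -> steer away from that side.
        turn = near_f * (1 if right > left else -1) + near_r - near_l
        slow = max(near_f, near_l, near_r)
        v = v_max * (1.0 - slow)
        return v - 0.1 * turn, v + 0.1 * turn   # (left wheel, right wheel) speeds

    print(fuzzy_avoidance(left=0.8, front=0.2, right=1.5))
    ```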

  8. Continuous Shape Estimation of Continuum Robots Using X-ray Images

    PubMed Central

    Lobaton, Edgar J.; Fu, Jinghua; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot’s shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints. PMID:26279960
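
    To make the basis-function representation concrete, the sketch below fits a robot backbone as a linear combination of simple polynomial basis functions of arc length using least squares. The paper uses a richer space-time basis with probabilistic priors and optimal view selection; this keeps only the representational core, with an assumed basis size and synthetic observations.

    ```python
    import numpy as np

    def fit_backbone(s_samples, observed_points, n_basis=6):
        """Fit 3D backbone points observed at arc lengths s in [0, 1] with a
        polynomial basis (a simplification of the paper's space-time basis)."""
        s = np.asarray(s_samples)
        B = np.vstack([s**k for k in range(n_basis)]).T          # (n_obs, n_basis)
        coeffs, *_ = np.linalg.lstsq(B, np.asarray(observed_points), rcond=None)
        return coeffs                                             # (n_basis, 3)

    def evaluate_backbone(coeffs, s_query):
        s = np.asarray(s_query)
        B = np.vstack([s**k for k in range(coeffs.shape[0])]).T
        return B @ coeffs                                         # (n_query, 3)

    # Synthetic observations along a gently curving backbone.
    s_obs = np.linspace(0, 1, 8)
    pts = np.stack([s_obs, 0.2 * s_obs**2, 0.05 * np.sin(3 * s_obs)], axis=1)
    c = fit_backbone(s_obs, pts)
    print(evaluate_backbone(c, [0.0, 0.5, 1.0]))
    ```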

  9. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
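
    The flavour of the EM-like mean-shift step described above (soft association to detections in the E-step, likelihood-weighted relocation in the M-step) can be sketched as follows. The Gaussian kernel bandwidth, the use of a single appearance-similarity weight per detection, and the 2D positions are simplifying assumptions, not the paper's full formulation.

    ```python
    import numpy as np

    def mean_shift_update(x, detections, sims, bandwidth=30.0, iters=10):
        """One track's position `x` is pulled toward nearby detections,
        weighted by appearance similarity (E-step: soft association via a
        Gaussian kernel; M-step: move to the weighted mean)."""
        x = np.asarray(x, dtype=float)
        d = np.asarray(detections, dtype=float)
        w = np.asarray(sims, dtype=float)
        for _ in range(iters):
            k = np.exp(-np.sum((d - x) ** 2, axis=1) / (2 * bandwidth ** 2))
            a = w * k
            if a.sum() < 1e-9:
                break                      # no supporting detections nearby
            x = (a[:, None] * d).sum(axis=0) / a.sum()
        return x

    print(mean_shift_update([100, 100], [[110, 98], [300, 240]], [0.9, 0.4]))
    ```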

  10. Terradynamically streamlined shapes in animals and robots enhance traversability through densely cluttered terrain.

    PubMed

    Li, Chen; Pullin, Andrew O; Haldane, Duncan W; Lam, Han K; Fearing, Ronald S; Full, Robert J

    2015-06-22

    Many animals, modern aircraft, and underwater vehicles use fusiform, streamlined body shapes that reduce fluid dynamic drag to achieve fast and effective locomotion in air and water. Similarly, numerous small terrestrial animals move through cluttered terrain where three-dimensional, multi-component obstacles like grass, shrubs, vines, and leaf litter also resist motion, but it is unknown whether their body shape plays a major role in traversal. Few ground vehicles or terrestrial robots have used body shape to more effectively traverse environments such as cluttered terrain. Here, we challenged forest-floor-dwelling discoid cockroaches (Blaberus discoidalis) possessing a thin, rounded body to traverse tall, narrowly spaced, vertical, grass-like compliant beams. Animals displayed high traversal performance (79 ± 12% probability and 3.4 ± 0.7 s time). Although we observed diverse obstacle traversal strategies, cockroaches primarily (48 ± 9% probability) used a novel roll maneuver, a form of natural parkour, allowing them to rapidly traverse obstacle gaps narrower than half their body width (2.0 ± 0.5 s traversal time). Reduction of body roundness by addition of artificial shells nearly inhibited roll maneuvers and decreased traversal performance. Inspired by this discovery, we added a thin, rounded exoskeletal shell to a legged robot with a nearly cuboidal body, common to many existing terrestrial robots. Without adding sensory feedback or changing the open-loop control, the rounded shell enabled the robot to traverse beam obstacles with gaps narrower than shell width via body roll. Such terradynamically 'streamlined' shapes can reduce terrain resistance and enhance traversability by assisting effective body reorientation via distributed mechanical feedback. Our findings highlight the need to consider body shape to improve robot mobility in real-world terrain often filled with clutter, and to develop better locomotor-ground contact models to understand

  11. Mobile robotic sensors for perimeter detection and tracking.

    PubMed

    Clark, Justin; Fierro, Rafael

    2007-02-01

    Mobile robot/sensor networks have emerged as tools for environmental monitoring, search and rescue, exploration and mapping, evaluation of civil infrastructure, and military operations. These networks consist of many sensors each equipped with embedded processors, wireless communication, and motion capabilities. This paper describes a cooperative mobile robot network capable of detecting and tracking a perimeter defined by a certain substance (e.g., a chemical spill) in the environment. Specifically, the contributions of this paper are twofold: (i) a library of simple reactive motion control algorithms and (ii) a coordination mechanism for effectively carrying out perimeter-sensing missions. The decentralized nature of the methodology implemented could potentially allow the network to scale to many sensors and to reconfigure when adding/deleting sensors. Extensive simulation results and experiments verify the validity of the proposed cooperative control scheme.

  12. A fuzzy logic controller for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Yen, John; Pfluger, Nathan

    1993-01-01

    The ability of a mobile robot system to plan and move intelligently in a dynamic system is needed if robots are to be useful in areas other than controlled environments. An example of a use for this system is to control an autonomous mobile robot in a space station, or other isolated area where it is hard or impossible for human life to exist for long periods of time (e.g., Mars). The system would allow the robot to be programmed to carry out the duties normally accomplished by a human being. Some of the duties that could be accomplished include operating instruments, transporting objects, and maintenance of the environment. The main focus of our early work has been on developing a fuzzy controller that takes a path and adapts it to a given environment. The robot only uses information gathered from the sensors, but retains the ability to avoid dynamically placed obstacles near and along the path. Our fuzzy logic controller is based on the following algorithm: (1) determine the desired direction of travel; (2) determine the allowed direction of travel; and (3) combine the desired and allowed directions in order to determine a direction that is both desired and allowed. The desired direction of travel is determined by projecting ahead to a point along the path that is closer to the goal. This gives a local direction of travel for the robot and helps to avoid obstacles.
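
    A compact way to picture step (3) of the algorithm is to score candidate headings by desirability and mask out disallowed ones, as sketched below. The Gaussian preference, the single allowed interval, and the lack of angle wrap-around handling are simplifications, not the controller's actual fuzzy rule base.

    ```python
    import numpy as np

    def combine_directions(desired_deg, allowed_range_deg, sigma=20.0):
        """Score every candidate heading by desirability (Gaussian preference
        around the path direction), zero out headings the sensors disallow,
        and pick the best remaining heading. Angles are in degrees."""
        headings = np.arange(-180, 180, dtype=float)
        desire = np.exp(-((headings - desired_deg) ** 2) / (2 * sigma ** 2))
        lo, hi = allowed_range_deg
        allowed = ((headings >= lo) & (headings <= hi)).astype(float)
        score = desire * allowed
        return float(headings[int(np.argmax(score))])

    # Desired heading 10 deg, but sensors only allow headings in [-90, -5] deg.
    print(combine_directions(10.0, (-90, -5)))   # -> -5.0 (closest allowed heading)
    ```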

  13. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    NASA Astrophysics Data System (ADS)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  14. Navigation system for autonomous mapper robots

    NASA Astrophysics Data System (ADS)

    Halbach, Marc; Baudoin, Yvan

    1993-05-01

    This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database, representing a high-level map of the environment, is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free space probabilistic classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is also incorporated. The ultrasonic sensors are controlled by the path planning module. The third part concerns path planning, and a simulation of a wheeled robot is also presented.
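
    A minimal log-odds occupancy grid of the kind described above (cells classified probabilistically as obstacle or free space from ultrasonic returns) is sketched below. The grid size, update increments, and beam handling are assumed values for illustration, not the paper's parameters.

    ```python
    import numpy as np

    class OccupancyGrid:
        """Minimal log-odds occupancy grid sketch: each ultrasonic return marks
        cells along the beam as free and the cell at the measured range as
        occupied; probabilities are recovered when queried."""
        def __init__(self, size=100, l_occ=0.9, l_free=-0.4):
            self.logodds = np.zeros((size, size))
            self.l_occ, self.l_free = l_occ, l_free

        def update_beam(self, cells_along_beam, hit_cell):
            for (i, j) in cells_along_beam:
                self.logodds[i, j] += self.l_free      # evidence for free space
            if hit_cell is not None:
                i, j = hit_cell
                self.logodds[i, j] += self.l_occ       # evidence for an obstacle

        def probability(self, i, j):
            return 1.0 - 1.0 / (1.0 + np.exp(self.logodds[i, j]))

    grid = OccupancyGrid()
    grid.update_beam([(50, k) for k in range(40, 60)], hit_cell=(50, 60))
    print(round(grid.probability(50, 60), 2), round(grid.probability(50, 45), 2))
    ```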

  15. Virtual reality-based navigation task to reveal obstacle avoidance performance in individuals with visuospatial neglect.

    PubMed

    Aravind, Gayatri; Darekar, Anuja; Fung, Joyce; Lamontagne, Anouk

    2015-03-01

    Persons with post-stroke visuospatial neglect (VSN) often collide with moving obstacles while walking. It is not well understood whether the collisions occur as a result of attentional-perceptual deficits caused by VSN or due to post-stroke locomotor deficits. We assessed individuals with VSN on a seated, joystick-driven obstacle avoidance task, thus eliminating the influence of locomotion. Twelve participants with VSN were tested on obstacle detection and obstacle avoidance tasks in a virtual environment that included three obstacles approaching head-on or 30° contralesionally/ipsilesionally. Our results indicate that in the detection task, the contralesional and head-on obstacles were detected at closer proximities compared to the ipsilesional obstacle. For the avoidance task, collisions were observed only for the contralesional and head-on obstacle approaches. For the contralesional obstacle approach, participants initiated their avoidance strategies at smaller distances from the obstacle and maintained smaller minimum distances from the obstacles. The distance at detection showed a negative association with the distance at the onset of avoidance strategy for all three obstacle approaches. We conclude that the observation of collisions with contralesional and head-on obstacles, in the absence of locomotor burden, provides evidence that attentional-perceptual deficits due to VSN, independent of post-stroke locomotor deficits, alter obstacle avoidance abilities.

  16. A salient region detection model combining background distribution measure for indoor robots.

    PubMed

    Li, Na; Xu, Hui; Wang, Zhenhua; Sun, Lining; Chen, Guodong

    2017-01-01

    The vision system plays an important role in the field of indoor robots. Saliency detection methods, capturing regions that are perceived as important, are used to improve the performance of the visual perception system. Most state-of-the-art saliency detection methods, while performing outstandingly on natural images, cannot work in complicated indoor environments. Therefore, we propose a new method comprised of graph-based RGB-D segmentation, a primary saliency measure, a background distribution measure, and their combination. In addition, region roundness is proposed to describe the compactness of a region so as to measure background distribution more robustly. To validate the proposed approach, eleven influential methods are compared on the DSD and ECSSD datasets. Moreover, we build a mobile robot platform for application in an actual environment and design three different kinds of experimental conditions: different viewpoints, illumination variations, and partial occlusions. Experimental results demonstrate that our model outperforms existing methods and is useful for indoor mobile robots.

  17. Autonomous robot software development using simple software components

    NASA Astrophysics Data System (ADS)

    Burke, Thomas M.; Chung, Chan-Jin

    2004-10-01

    Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.

  18. Symmetric caging formation for convex polygonal object transportation by multiple mobile robots based on fuzzy sliding mode control.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2016-01-01

    In this paper, the problem of object caging and transporting is considered for multiple mobile robots. With the consideration of minimizing the number of robots and decreasing the rotation of the object, the proper points are calculated and assigned to the multiple mobile robots to allow them to form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object so that the object cannot escape. In order to avoid collision among robots, the robot radius parameter is utilized to design the caging formation, and the A* algorithm is used so that mobile robots can move to the proper points. In order to avoid obstacles, the robots and the object are regarded as a rigid body to apply the artificial potential field method. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, the simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position, avoiding obstacles. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
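
    A simplified reading of the caging condition described above (every gap between adjacent robots smaller than the object's minimal width) can be checked as in the sketch below. The wrap-around ordering and the way the robot radius is subtracted are assumptions for illustration, not the paper's derivation.

    ```python
    import math

    def caging_holds(robot_positions, robot_radius, min_object_width):
        """Return True if no gap between adjacent robots (in order around the
        object) is wide enough for the object's minimal width to pass through."""
        n = len(robot_positions)
        for k in range(n):
            x1, y1 = robot_positions[k]
            x2, y2 = robot_positions[(k + 1) % n]       # adjacent robot, wraps around
            gap = math.hypot(x2 - x1, y2 - y1) - 2 * robot_radius
            if gap >= min_object_width:
                return False                             # object could escape here
        return True

    square_formation = [(0, 0), (1.2, 0), (1.2, 1.2), (0, 1.2)]
    print(caging_holds(square_formation, robot_radius=0.15, min_object_width=1.0))
    ```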

  19. High precision redundant robotic manipulator

    DOEpatents

    Young, K.K.D.

    1998-09-22

    A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space is disclosed. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each of the embodiments utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven degree of freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven degrees of freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced and, together with properly designed servo controllers, provides an end point repeatability of less than 10 microns. 3 figs.

  20. High precision redundant robotic manipulator

    DOEpatents

    Young, Kar-Keung David

    1998-01-01

    A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each of the embodiments utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven degree of freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven degrees of freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced and, together with properly designed servo controllers, provides an end point repeatability of less than 10 microns.

  1. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots

    PubMed Central

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-01-01

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper aims to present a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built by a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. A 2.5D elevation map is formed by projecting height information of an obstacle, using depth information obtained by the RGB-D sensor, onto a grid map, which is generated by using the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy in location recognition and computing speed. PMID:29186843
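
    The core of the 2.5D map construction described above is projecting 3D points onto a 2D grid while keeping a height value per cell, as sketched below. The grid origin, resolution, and the max-height rule are illustrative assumptions; the paper additionally fuses a 2D laser occupancy grid built via scan matching.

    ```python
    import numpy as np

    def build_elevation_map(points_xyz, resolution=0.05, size_m=10.0):
        """Project 3D points from a depth sensor onto a 2D grid and keep the
        maximum height per cell (a minimal 2.5D elevation map sketch)."""
        n = int(size_m / resolution)
        elevation = np.full((n, n), -np.inf)
        for x, y, z in points_xyz:
            i = int((x + size_m / 2) / resolution)
            j = int((y + size_m / 2) / resolution)
            if 0 <= i < n and 0 <= j < n:
                elevation[i, j] = max(elevation[i, j], z)   # tallest point wins
        return elevation

    pts = [(1.0, 0.5, 0.3), (1.0, 0.5, 0.8), (-2.0, 3.0, 0.1)]
    emap = build_elevation_map(pts)
    print(emap[int((1.0 + 5) / 0.05), int((0.5 + 5) / 0.05)])   # -> 0.8
    ```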

  2. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots.

    PubMed

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-11-25

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper aims to present a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built by a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. A 2.5D elevation map is formed by projecting height information of an obstacle, using depth information obtained by the RGB-D sensor, onto a grid map, which is generated by using the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy in location recognition and computing speed.

  3. Hardware Development for a Mobile Educational Robot.

    ERIC Educational Resources Information Center

    Mannaa, A. M.; And Others

    1987-01-01

    Describes the development of a robot whose mainframe is essentially transparent and walks on four legs. Discusses various gaits in four-legged motion. Reports on initial trials of a full-sized model without computer-control, including smoothness of motion and actual obstacle crossing features. (CW)

  4. Novel Selective Detection Method of Tumor Angiogenesis Factors Using Living Nano-Robots

    PubMed Central

    Alshraiedeh, Nida; Owies, Rami; Alshdaifat, Hala; Al-Mahaseneh, Omamah; Al-Tall, Khadijah; Alawneh, Rawan

    2017-01-01

    This paper reports a novel self-detection method for tumor cells using living nano-robots. These living robots are a nonpathogenic strain of E. coli bacteria equipped with naturally synthesized bio-nano-sensory systems that have an affinity to VEGF, an angiogenic factor overly-expressed by cancer cells. The VEGF-affinity/chemotaxis was assessed using several assays including the capillary chemotaxis assay, chemotaxis assay on soft agar, and chemotaxis assay on solid agar. In addition, a microfluidic device was developed to possibly discover tumor cells through the overexpressed vascular endothelial growth factor (VEGF). Various experiments to study the sensing characteristic of the nano-robots presented a strong response toward the VEGF. Thus, a new paradigm of selective targeting therapies for cancer can be advanced using swimming E. coli as self-navigator miniaturized robots as well as drug-delivery vehicles. PMID:28708066

  5. Novel Selective Detection Method of Tumor Angiogenesis Factors Using Living Nano-Robots.

    PubMed

    Al-Fandi, Mohamed; Alshraiedeh, Nida; Owies, Rami; Alshdaifat, Hala; Al-Mahaseneh, Omamah; Al-Tall, Khadijah; Alawneh, Rawan

    2017-07-14

    This paper reports a novel self-detection method for tumor cells using living nano-robots. These living robots are a nonpathogenic strain of E. coli bacteria equipped with naturally synthesized bio-nano-sensory systems that have an affinity to VEGF, an angiogenic factor overly-expressed by cancer cells. The VEGF-affinity/chemotaxis was assessed using several assays including the capillary chemotaxis assay, chemotaxis assay on soft agar, and chemotaxis assay on solid agar. In addition, a microfluidic device was developed to possibly discover tumor cells through the overexpressed vascular endothelial growth factor (VEGF). Various experiments to study the sensing characteristic of the nano-robots presented a strong response toward the VEGF. Thus, a new paradigm of selective targeting therapies for cancer can be advanced using swimming E. coli as self-navigator miniaturized robots as well as drug-delivery vehicles.

  6. Challenges for Service Robots-Requirements of Elderly Adults with Cognitive Impairments.

    PubMed

    Korchut, Agnieszka; Szklener, Sebastian; Abdelnour, Carla; Tantinya, Natalia; Hernández-Farigola, Joan; Ribes, Joan Carles; Skrobas, Urszula; Grabowska-Aleksandrowicz, Katarzyna; Szczęśniak-Stańczyk, Dorota; Rejdak, Konrad

    2017-01-01

    We focused on identifying the requirements and needs of people suffering from Alzheimer's disease and early stages of dementia in relation to robotic assistants. Based on focus groups performed in two centers (Poland and Spain), we created surveys for medical staff, patients, and caregivers, covering functional requirements, human-robot interaction, the design of the robotic assistant, and user acceptance aspects. Using a Likert scale and analysis based on the frequency of survey responses, we identified users' needs as high, medium, and low priority. We gathered 264 completed surveys (100 from medical staff, 81 from caregivers, and 83 from potential users). Most of the respondents, almost at the same level in each of the three groups, accept robotic assistants and their support in everyday life. High-priority functional requirements were related to reacting in emergency situations (calling for help, detecting/removing obstacles) and to reminding about medication intake, boiling water, and turning off the gas and lights (almost 60% of answers). With reference to human-robot interaction, high priority was given to a voice-operated system and the capability of robotic assistants to reply to simple questions. Our results help achieve a better understanding of the needs of patients with cognitive impairments during home tasks in everyday life. This way of conducting the research, with consideration for the interests of three stakeholder groups in two centers with proven experience regarding the needs of our patient groups, highlights the importance of the obtained results.

  7. Robot for Investigations and Assessments of Nuclear Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanaan, Daniel; Dogny, Stephane

    RIANA is a remote controlled Robot dedicated for Investigations and Assessments of Nuclear Areas. The development of RIANA is motivated by the need to have at its disposal a proven robot, tested in hot cells; a robot capable of remotely investigating and characterising the inside of nuclear facilities in order to collect efficiently all the required data in the shortest possible time. It is based on a wireless, medium-sized remote carrier that may carry a wide variety of interchangeable modules, sensors and tools. It is easily customised to match specific requirements and quickly configured depending on the mission and the operator's preferences. RIANA integrates localisation and navigation systems. The robot will be able to generate and update a 2D map of its surroundings and of the areas it explores. The position of the robot is given accurately on the map. Furthermore, the robot will be able to autonomously calculate, define and follow a trajectory between 2 points taking into account its environment and obstacles. The robot is configurable to manage obstacles and restrict access to forbidden areas. RIANA allows an advanced control of modules, sensors and tools; all collected data (radiological and measured data) are displayed in real time in different formats (chart, on the generated map...) and stored in a single place so that they may be exported in a convenient format for data processing. This modular design gives RIANA the flexibility to perform multiple investigation missions where humans cannot work, such as: visual inspections, dynamic localization and 2D mapping, characterizations and nuclear measurements of floor and walls, non destructive testing, sample collection: solid and liquid. The benefits of using RIANA are: - reducing the personnel exposures by limiting the manual intervention time, - minimizing the time and reducing the cost of investigation operations, - providing critical inputs to set up and optimize cleanup and dismantling operations. (authors)

  8. Cartesian control of redundant robots

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1989-01-01

    A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ ℝ^m and the robot actuator torque T ∈ ℝ^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four degree of freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.

  9. Online Phase Detection Using Wearable Sensors for Walking with a Robotic Prosthesis

    PubMed Central

    Goršič, Maja; Kamnik, Roman; Ambrožič, Luka; Vitiello, Nicola; Lefeber, Dirk; Pasquini, Guido; Munih, Marko

    2014-01-01

    This paper presents a gait phase detection algorithm for providing feedback in walking with a robotic prosthesis. The algorithm utilizes the output signals of a wearable wireless sensory system incorporating sensorized shoe insoles and inertial measurement units attached to body segments. The principle of detecting transitions between gait phases is based on heuristic threshold rules, dividing a steady-state walking stride into four phases. For the evaluation of the algorithm, experiments with three amputees, walking with the robotic prosthesis and wearable sensors, were performed. Results show a high rate of successful detection for all four phases (the average success rate across all subjects >90%). A comparison of the proposed method to an off-line trained algorithm using hidden Markov models reveals a similar performance achieved without the need for learning dataset acquisition and previous model training. PMID:24521944
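
    The heuristic threshold rules described above can be pictured as a small state machine over the four gait phases, as sketched below. The phase names, signal names (insole loads and shank angular velocity), and thresholds are assumptions for illustration, not the published rule set.

    ```python
    def next_phase(phase, heel_load, toe_load, shank_angular_velocity,
                   load_on=0.15, load_off=0.05):
        """Advance the gait phase when its exit condition is met, else stay."""
        if phase == "stance" and heel_load < load_off and toe_load > load_on:
            return "heel-off"                  # heel unloads while toe still loaded
        if phase == "heel-off" and toe_load < load_off:
            return "swing"                     # whole foot unloaded
        if phase == "swing" and heel_load > load_on and shank_angular_velocity < 0:
            return "heel-strike"               # heel contact at end of swing
        if phase == "heel-strike" and toe_load > load_on:
            return "stance"                    # full foot loading
        return phase

    phase = "stance"
    for sample in [(0.02, 0.4, 1.0), (0.02, 0.02, 2.0), (0.3, 0.0, -0.5), (0.4, 0.3, 0.0)]:
        phase = next_phase(phase, *sample)
        print(phase)
    ```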

  10. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to a steering direction in a supervised manner. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and accurately avoid the obstacle in the room. The result confirms the effectiveness of the algorithm and our improvements in the network structure and training parameters.
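
    A compact stand-in for the end-to-end training setup described above is sketched below in PyTorch. It is not the paper's 15-layer architecture; the layer sizes, input resolution, and three-way direction labels are assumptions used only to show the image-to-direction supervised training loop.

    ```python
    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        """Small CNN mapping an RGB camera frame to one of three directions
        (left / straight / right); sizes are illustrative."""
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # One supervised training step on a dummy batch (images -> direction labels).
    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    images = torch.randn(8, 3, 120, 160)          # batch of camera frames
    labels = torch.randint(0, 3, (8,))            # 0=left, 1=straight, 2=right
    loss = criterion(model(images), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    print(float(loss))
    ```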

  11. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  12. Cooperative terrain model acquisition by a team of two or three point-robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.; Protopopescu, V.; Manickam, N.

    1996-04-01

    We address the model acquisition problem for an unknown planar terrain by a team of two or three robots. The terrain is cluttered by a finite number of polygonal obstacles whose shapes and positions are unknown. The robots are point-sized and equipped with visual sensors which acquire all visible parts of the terrain by scan operations executed from their locations. The robots communicate with each other via wireless connection. The performance is measured by the number of the sensor (scan) operations which are assumed to be the most time-consuming of all the robot operations. We employ the restricted visibility graph methods in a hierarchical setup. For terrains with convex obstacles and for teams of n(= 2, 3) robots, we prove that the sensing time is reduced by a factor of 1/n. For terrains with concave corners, the performance of the algorithm depends on the number of concave regions and their depths. A hierarchical decomposition of the restricted visibility graph into n-connected and (n - 1)-or-less connected components is considered. The performance for the n(= 2, 3) robot team is expressed in terms of the sizes of n-connected components, and the sizes and diameters of (n - 1)-or-less connected components.

  13. Effect of cane length and swing arc width on drop-off and obstacle detection with the long cane

    PubMed Central

    Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh

    2017-01-01

    A repeated-measures design with block randomization was used for the study, in which 15 adults with visual impairments attempted to detect the drop-offs and obstacles with the canes of different lengths, swinging the cane in different widths (narrow vs wide). Participants detected the drop-offs significantly more reliably with the standard-length cane (79.5% ± 6.5% of the time) than with the extended-length cane (67.6% ± 9.1%), p < .001. The drop-off detection threshold of the standard-length cane (4.1 ± 1.1 cm) was also significantly smaller than that of the extended-length cane (6.5 ± 1.8 cm), p < .001. In addition, participants detected drop-offs at a significantly higher percentage when they swung the cane approximately 3 cm beyond the widest part of the body (78.6% ± 7.6%) than when they swung it substantially wider (30 cm; 68.5% ± 8.3%), p < .001. In contrast, neither cane length (p = .074) nor cane swing arc width (p = .185) had a significant effect on obstacle detection performance. The findings of the study may help orientation and mobility specialists recommend appropriate cane length and cane swing arc width to visually impaired cane users. PMID:29276326

  14. Effect of cane length and swing arc width on drop-off and obstacle detection with the long cane.

    PubMed

    Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh

    2017-09-01

    A repeated-measures design with block randomization was used for the study, in which 15 adults with visual impairments attempted to detect the drop-offs and obstacles with the canes of different lengths, swinging the cane in different widths (narrow vs wide). Participants detected the drop-offs significantly more reliably with the standard-length cane (79.5% ± 6.5% of the time) than with the extended-length cane (67.6% ± 9.1%), p < .001. The drop-off detection threshold of the standard-length cane (4.1 ± 1.1 cm) was also significantly smaller than that of the extended-length cane (6.5 ± 1.8 cm), p < .001. In addition, participants detected drop-offs at a significantly higher percentage when they swung the cane approximately 3 cm beyond the widest part of the body (78.6% ± 7.6%) than when they swung it substantially wider (30 cm; 68.5% ± 8.3%), p < .001. In contrast, neither cane length (p = .074) nor cane swing arc width (p = .185) had a significant effect on obstacle detection performance. The findings of the study may help orientation and mobility specialists recommend appropriate cane length and cane swing arc width to visually impaired cane users.

  15. Stereo and photometric image sequence interpretation for detecting negative obstacles using active gaze control and performing an autonomous jink

    NASA Astrophysics Data System (ADS)

    Hofmann, Ulrich; Siedersberger, Karl-Heinz

    2003-09-01

    Driving cross-country, the detection and state estimation relative to negative obstacles like ditches and creeks is mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision System (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and framerates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. Especially, ditches are often very extended, so due to the restricted field of vision of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurements of image features the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission and the capabilities available in the system and of their limitations. The central decision unit reacts dependent on the result of situation assessment by starting, parameterizing or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results will be shown for driving in a typical off-road scenario.

  16. An Intelligent Robotic Hospital Bed for Safe Transportation of Critical Neurosurgery Patients Along Crowded Hospital Corridors.

    PubMed

    Wang, Chao; Savkin, Andrey V; Clout, Ray; Nguyen, Hung T

    2015-09-01

    We present a novel design of an intelligent robotic hospital bed, named Flexbed, with autonomous navigation ability. The robotic bed is developed for fast and safe transportation of critical neurosurgery patients without changing beds. Flexbed is more efficient and safer during transportation compared to conventional hospital beds. Flexbed is able to avoid en-route obstacles with an efficient, easy-to-implement collision avoidance strategy when an obstacle is nearby, and to move towards its destination at maximum speed when there is no threat of collision. We present extensive simulation results of navigation of Flexbed in crowded hospital corridor environments with moving obstacles. Moreover, results of experiments with Flexbed in real-world scenarios are also presented and discussed.

  17. Hardware platform for multiple mobile robots

    NASA Astrophysics Data System (ADS)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

    This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link will carry control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link will carry commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, different sensors, a stereo camera, and a Wireless Local Area Network (WLAN) for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows a straightforward integration with the internet protocols TCP/IP. For the inspection of the environment, the robots are equipped with a wide variety of sensors such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable obstacle detection, distance measurement, and creation of a map of the room.

  18. Interactive-rate Motion Planning for Concentric Tube Robots

    PubMed Central

    Torres, Luis G.; Baykal, Cenk; Alterovitz, Ron

    2014-01-01

    Concentric tube robots may enable new, safer minimally invasive surgical procedures by moving along curved paths to reach difficult-to-reach sites in a patient’s anatomy. Operating these devices is challenging due to their complex, unintuitive kinematics and the need to avoid sensitive structures in the anatomy. In this paper, we present a motion planning method that computes collision-free motion plans for concentric tube robots at interactive rates. Our method’s high speed enables a user to continuously and freely move the robot’s tip while the motion planner ensures that the robot’s shaft does not collide with any anatomical obstacles. Our approach uses a highly accurate mechanical model of tube interactions, which is important since small movements of the tip position may require large changes in the shape of the device’s shaft. Our motion planner achieves its high speed and accuracy by combining offline precomputation of a collision-free roadmap with online position control. We demonstrate our interactive planner in a simulated neurosurgical scenario where a user guides the robot’s tip through the environment while the robot automatically avoids collisions with the anatomical obstacles. PMID:25436176

  19. Serendipitous Offline Learning in a Neuromorphic Robot.

    PubMed

    Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg

    2016-01-01

    We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.

  20. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.

    1988-01-01

    The ground-based demonstration of EVA Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out, (2) searches for and acquires the target, (3) plans and executes a rendezvous while continuously tracking the target, (4) avoids stationary and moving obstacles, (5) reaches for and grapples the target, (6) returns to transfer the object, and (7) returns to base.

  1. Navigable points estimation for mobile robots using binary image skeletonization

    NASA Astrophysics Data System (ADS)

    Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman

    2017-02-01

    This paper describes the use of image skeletonization for the estimation of all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path, using standard methods. The main idea is to find the middle and the extreme points of the obstacles in the scene, taking into account the robot size, and create a map of navigable points in order to reduce the amount of information for the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. It is also shown how some of the algorithm's parameters can be changed to adjust the final number of resultant key points. The results shown here were obtained by applying different kinds of digital image processing algorithms on static scenes.
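
    The core idea above (skeletonize the free space so that the remaining pixels lie midway between obstacles, then subsample them as waypoint candidates) is sketched below with scikit-image. The subsampling stride and the omission of robot-size inflation are simplifications, not the paper's procedure.

    ```python
    import numpy as np
    from skimage.morphology import skeletonize

    def navigable_points(free_space_mask, stride=10):
        """Skeletonize the binary free-space image and subsample the skeleton
        pixels to obtain candidate waypoints for a path planner."""
        skeleton = skeletonize(free_space_mask.astype(bool))
        ys, xs = np.nonzero(skeleton)
        return list(zip(xs[::stride], ys[::stride]))   # (x, y) waypoint candidates

    # Toy scene: free space everywhere except a square obstacle in the middle.
    scene = np.ones((200, 200), dtype=bool)
    scene[80:120, 80:120] = False
    print(len(navigable_points(scene)))
    ```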

  2. SAFER vehicle inspection: a multimodal robotic sensing platform

    NASA Astrophysics Data System (ADS)

    Page, David L.; Fougerolle, Yohan; Koschan, Andreas F.; Gribok, Andrei; Abidi, Mongi A.; Gorsich, David J.; Gerhart, Grant R.

    2004-09-01

    The current threats to U.S. security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at The University of Tennessee (UT) has established a research consortium, known as SAFER (Security Automation and Future Electromotive Robotics), to develop, test, and deploy sensing and imaging systems for unmanned ground vehicles (UGV). The targeted missions for these UGV systems include -- but are not limited to -- under-vehicle threat assessment, stand-off check-point inspections, scout surveillance, intruder detection, obstacle-breach situations, and render-safe scenarios. This paper presents a general overview of the SAFER project. Beyond this general overview, we further focus on a specific problem where we collect 3D range scans of vehicle undercarriages. These scans require appropriate segmentation and representation algorithms to facilitate the vehicle inspection process. We discuss the theory for these algorithms and present results from applying them to actual vehicle scans.

  3. Laser radar system for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Bers, Karlheinz; Schulz, Karl R.; Armbruster, Walter

    2005-09-01

    The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser radars, which are built by the EADS company and presently being flight tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from objects at distances of military relevance with a high hit-and-detect probability. The development of advanced 3D-scene analysis algorithms has increased the recognition probability and reduced the false alarm rate by using more readily recognizable objects such as terrain, poles, pylons and trees to generate a parametric description of the terrain surface as well as the class, position, orientation, size and shape of all objects in the scene. The sensor system and the implemented algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition. This paper describes different 3D-imaging ladar sensors with a unique system architecture but different components matched to different military applications. Emphasis is laid on an obstacle warning system with a high probability of detection of thin wires, the real-time processing of the measured range image data, and obstacle classification and visualization.

  4. An intelligent approach to welding robot selection

    NASA Astrophysics Data System (ADS)

    Milano, J.; Mauk, S. D.; Flitter, L.; Morris, R.

    1993-10-01

    In a shipyard where multiple stationary and mobile workcells are employed in the fabrication of components of complex sub-assemblies, efficient operation requires an intelligent method of scheduling jobs and selecting workcells based on optimum throughput and cost. The achievement of this global solution requires the successful organization of resource availability, process requirements, and process constraints. The Off-line Planner (OLP) of the Programmable Automated Weld System (PAWS) is capable of advanced modeling of weld processes and environments as well as the generation of complete weld procedures. These capabilities involve the integration of advanced Computer Aided Design (CAD), path planning, and obstacle detection and avoidance techniques as well as the synthesis of complex design and process information. These existing capabilities provide the basis of the functionality required for the successful implementation of an intelligent weld robot selector and material flow planner. Current efforts are focused on robot selection via the dynamic routing of components to the appropriate work cells. It is proposed that this problem is a variant of the “Traveling Salesman Problem” (TSP), which has been proven to belong to a larger set of optimization problems termed nondeterministic polynomial complete (NP-complete). In this paper, a heuristic approach utilizing recurrent neural networks is explored as a rapid means of producing a near-optimal, if not optimal, weld robot selection.
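
    For orientation only, the routing problem can be contrasted with the simplest TSP baseline. The sketch below is a plain nearest-neighbour heuristic over hypothetical workcell coordinates; it is not the recurrent-neural-network approach the paper explores, and all names and values are illustrative.

      # Hedged illustration only: a plain nearest-neighbour TSP heuristic as a
      # baseline for routing components among workcells. The paper itself uses
      # a recurrent-neural-network approach; this is not that method.
      import math

      def nearest_neighbour_tour(points):
          """points: list of (x, y) workcell coordinates; returns a visiting order."""
          unvisited = list(range(1, len(points)))
          tour = [0]
          while unvisited:
              last = points[tour[-1]]
              nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
              tour.append(nxt)
              unvisited.remove(nxt)
          return tour

      if __name__ == "__main__":
          workcells = [(0, 0), (4, 1), (1, 5), (6, 6), (2, 2)]   # made-up layout
          print(nearest_neighbour_tour(workcells))               # e.g. [0, 4, 1, 2, 3]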

  5. Emergent of Burden Sharing of Robots with Emotion Model

    NASA Astrophysics Data System (ADS)

    Kusano, Takuya; Nozawa, Akio; Ide, Hideto

    A cooperative multi-robot system has many advantages over a single-robot system: it can adapt to various circumstances and offers flexibility with respect to variations in tasks. In a multi-robot system, the robots need to build cooperative relations and act as an organization to attain a purpose. The group behavior of insects, which do not have advanced individual abilities, is instructive here. For example, ants, which are social insects, produce organized activity through interaction using very simple means. Whereas ants communicate with chemical signals, humans communicate by words and gestures. In this paper, we paid attention to interaction from a psychological viewpoint, and a human emotion model was used as the parameter base for the motion planning of the robots. The robots were made to move back and forth in a test field with an obstacle. As a result, burden sharing such as guide and carrier roles emerged, even though the robots had a simple setup.

  6. The 1991-1992 walking robot design

    NASA Technical Reports Server (NTRS)

    Azarm, Shapour; Dayawansa, Wijesurija; Tsai, Lung-Wen; Peritt, Jon

    1992-01-01

    The University of Maryland Walking Machine team designed and constructed a robot. This robot was completed in two phases with supervision and suggestions from three professors and one graduate teaching assistant. Bob was designed during the Fall Semester 1991, then machined, assembled, and debugged in the Spring Semester 1992. The project required a total of 4,300 student hours and cost under $8,000. Mechanically, Bob was an exercise in optimization. The robot was designed to test several diverse aspects of robotic potential, including speed, agility, and stability, with simplicity and reliability holding equal importance. For speed and smooth walking motion, the footpath contained a long horizontal component; a vertical aspect was included to allow clearance of obstacles. These challenges were met with a leg design that utilized a unique multi-link mechanism which traveled a modified tear-drop footpath. The electrical requirements included motor, encoder, and voice control circuitry selection, manual controller manufacture, and creation of sensors for guidance. Further, there was also a need for selection of the computer, completion of a preliminary program, and testing of the robot.

  7. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.

  8. Determining the feasibility of robotic courier medication delivery in a hospital setting.

    PubMed

    Kirschling, Thomas E; Rough, Steve S; Ludwig, Brad C

    2009-10-01

    The feasibility of a robotic courier medication delivery system in a hospital setting was evaluated. Robotic couriers are self-guiding, self-propelling robots that navigate hallways and elevators to pull an attached or integrated cart to a desired destination. A robotic courier medication delivery system was pilot tested in two patient care units at a 471-bed tertiary care academic medical center. The average transit time for the existing manual medication delivery system's hourly hospitalwide deliveries was 32.6 minutes. Of this, 32.3% was spent at the patient care unit and 67.7% was spent pushing the cart or waiting at an elevator. The robotic courier medication delivery system traveled as fast as 1.65 ft/sec (52% of the speed of the manual system) in the absence of barriers but moved at an average rate of 0.84 ft/sec (26% of the speed of the manual system) during the study, primarily due to hallway obstacles. The robotic courier was utilized for 50% of the possible 1750 runs during the 125-day pilot due to technical or situational difficulties. Of the runs that were sent, a total of 79 runs failed, yielding an overall 91% success rate. During the final month of the pilot, the success rate reached 95.6%. Customer satisfaction with the traditional manual delivery system was high. Customer satisfaction with deliveries declined after implementation of the robotic courier medication distribution system. A robotic courier medication delivery system was implemented but was not expanded beyond the two pilot units. Challenges of implementation included ongoing education on how to properly move the robotic courier and keeping the hallways clear of obstacles.

  9. Memristive device based learning for navigation in robots.

    PubMed

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities in robots are difficult to achieve, especially if done in real-time with ultra-low energy consumption. Here, we present a novel memristive device based learning architecture for robots. Two terminal memristive devices with resistive switching of oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities in a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.

  10. Realtime motion planning for a mobile robot in an unknown environment using a neurofuzzy based approach

    NASA Astrophysics Data System (ADS)

    Zheng, Taixiong

    2005-12-01

    A neuro-fuzzy network based approach for robot motion in an unknown environment was proposed. In order to control the robot motion in an unknown environment, the behavior of the robot was classified into moving toward the goal and avoiding obstacles. Then, according to the dynamics of the robot and its behavior characteristics in an unknown environment, fuzzy control rules were introduced to control the robot motion. Finally, a six-layer neuro-fuzzy network was designed to map what the robot sensed to robot motion control. After being trained, the network can be used for robot motion control. Simulation results show that the proposed approach is effective for robot motion control in an unknown environment.
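
    The rule-based part of such a scheme can be sketched very roughly. The block below is a minimal two-rule Mamdani-style blend of goal seeking and obstacle avoidance, assuming made-up membership functions and gains; it is not the paper's six-layer network.

      # Minimal sketch (assumptions, not the paper's 6-layer network): a tiny
      # fuzzy rule base blending goal seeking and obstacle avoidance.
      def tri(x, a, b, c):
          """Triangular membership function on [a, c] peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def steering(obstacle_dist, goal_bearing):
          """obstacle_dist in metres, goal_bearing in radians; returns a turn rate."""
          near = tri(obstacle_dist, -0.5, 0.0, 1.0)      # obstacle is close
          far = tri(obstacle_dist, 0.5, 2.0, 100.0)      # obstacle is far
          # Rule 1: if obstacle near -> turn away hard (fixed +1.0 rad/s here)
          # Rule 2: if obstacle far  -> steer proportionally toward the goal
          w = near + far or 1.0
          return (near * 1.0 + far * 0.8 * goal_bearing) / w

      if __name__ == "__main__":
          print(steering(0.3, -0.6))   # close obstacle dominates: turn away
          print(steering(3.0, -0.6))   # free space: turn toward the goal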

  11. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    PubMed

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles
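
    The hysteresis effect mentioned above can be illustrated in isolation. The following toy sketch (not the paper's trained two-neuron network; the weights and the input sweep are arbitrary assumptions) shows that a single sigmoidal neuron with a strong self-connection settles to different outputs for the same input depending on its recent history.

      # Toy sketch (not the paper's network): a sigmoidal neuron with a strong
      # excitatory self-connection exhibits hysteresis; the settled output for
      # the same input depends on recent history.
      import math

      def settle(out, inp, self_weight=6.0, bias=-3.0, steps=200):
          for _ in range(steps):
              out = 1.0 / (1.0 + math.exp(-(self_weight * out + inp + bias)))
          return out

      if __name__ == "__main__":
          out = 0.0
          ramp = [x / 10 for x in range(0, 21)] + [x / 10 for x in range(20, -1, -1)]
          for inp in ramp:
              out = settle(out, inp)
              if inp == 0.2:
                  print(f"input {inp:.1f} -> settled output {out:.2f}")
          # Prints a low value on the rising sweep and a high value on the
          # falling sweep: the same input yields different outputs.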

  12. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    PubMed Central

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles

  13. People detection method using graphics processing units for a mobile robot with an omnidirectional camera

    NASA Astrophysics Data System (ADS)

    Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki

    2011-12-01

    This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.
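
    A rough sketch of this kind of flow-based ROI step is given below, assuming OpenCV's Farneback dense optical flow on the CPU rather than the authors' GPU implementation; pixels whose flow magnitude deviates strongly from the frame-wide median (a crude stand-in for the ego-motion check) are flagged as candidate regions. The threshold and image sizes are placeholders.

      # Hedged sketch (assumed pipeline, not the authors' GPU code): dense
      # optical flow between consecutive frames; pixels whose flow deviates
      # strongly from the median become ROI candidates.
      import cv2
      import numpy as np

      def roi_mask(prev_bgr, curr_bgr, thresh=2.0):
          prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
          curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
          flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          mag = np.linalg.norm(flow, axis=2)
          # residual motion = magnitude above the frame-wide median flow
          return (mag - np.median(mag)) > thresh

      if __name__ == "__main__":
          a = np.random.randint(0, 255, (240, 320, 3), np.uint8)
          b = np.roll(a, 3, axis=1)              # fake camera motion
          print(roi_mask(a, b).sum(), "pixels flagged as moving independently")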

  14. Mobile robots exploration through cnn-based reinforcement learning.

    PubMed

    Tai, Lei; Liu, Ming

    2016-01-01

    Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outlined a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model took the depth image from an RGB-D sensor as the only input. The feature representation of the depth image was extracted through a pre-trained convolutional-neural-networks model. Based on the recent success of the deep Q-network in artificial intelligence, the robot controller achieved exploration and obstacle avoidance abilities in several different simulated environments. This is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.

  15. Event-Based Control Strategy for Mobile Robots in Wireless Environments.

    PubMed

    Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto

    2015-12-02

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to interchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked over classical navigation algorithms, like wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution has higher efficiency in communication resource usage than the classical discrete-time strategy with the same accuracy.
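
    The core event-triggering idea can be sketched with a simple send-on-delta rule: the robot transmits its state only when it has drifted beyond a threshold from the last transmitted value. The threshold, state representation and class name below are assumptions for illustration, not the paper's triggering condition.

      # Minimal sketch of the event-triggered idea (assumed threshold rule, not
      # the paper's exact condition): transmit over the RF link only when the
      # state has drifted far enough from the last transmitted value.
      import math

      class EventTrigger:
          def __init__(self, threshold):
              self.threshold = threshold
              self.last_sent = None

          def should_send(self, state):
              """state: (x, y) pose estimate; returns True if an update is due."""
              if self.last_sent is None or math.dist(state, self.last_sent) > self.threshold:
                  self.last_sent = state
                  return True
              return False

      if __name__ == "__main__":
          trig = EventTrigger(threshold=0.05)
          sent = sum(trig.should_send((0.01 * k, 0.0)) for k in range(100))
          print(f"{sent} of 100 samples transmitted")   # far fewer than periodic sampling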

  16. Event-Based Control Strategy for Mobile Robots in Wireless Environments

    PubMed Central

    Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto

    2015-01-01

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to interchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked over classical navigation algorithms, like wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution has higher efficiency in communication resource usage than the classical discrete-time strategy with the same accuracy. PMID:26633412

  17. Bio-inspired Computing for Robots

    NASA Technical Reports Server (NTRS)

    Laufenberg, Larry

    2003-01-01

    Living creatures may provide algorithms to enable active sensing/control systems in robots. Active sensing could enable planetary rovers to feel their way in unknown environments. The surface of Jupiter's moon Europa consists of fractured ice over a liquid sea that may contain microbes similar to those on Earth. To explore such extreme environments, NASA needs robots that autonomously survive, navigate, and gather scientific data. They will be too far away for guidance from Earth. They must sense their environment and control their own movements to avoid obstacles or investigate a science opportunity. To meet this challenge, CICT's Information Technology Strategic Research (ITSR) Project is funding neurobiologists at NASA's Jet Propulsion Laboratory (JPL) and selected universities to search for biologically inspired algorithms that enable robust active sensing and control for exploratory robots. Sources for these algorithms are living creatures, including rats and electric fish.

  18. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the position where the shuttlecock falls and hits the shuttlecock back into the opponent's side of the court. In a game of badminton there is a large audience, and some spectators move behind the flying shuttlecock; this constitutes a kind of background noise and makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by the method of stereo imaging with two high-speed cameras.
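
    The geometric core of such a stereo measurement can be sketched under an idealised rectified-camera assumption; the focal length, baseline and pixel coordinates below are invented, and the paper's actual calibration and tracking are not reproduced. The shuttlecock's depth follows from the disparity between its pixel positions in the two images.

      # Rough sketch (idealised rectified stereo, not the paper's calibration):
      # triangulate the 3D position from pixel coordinates in two cameras
      # separated by a known baseline.
      import numpy as np

      def triangulate(uv_left, uv_right, f_px, baseline_m, cx, cy):
          """uv_*: (u, v) pixel coords of the shuttlecock in each rectified image."""
          disparity = uv_left[0] - uv_right[0]
          if disparity <= 0:
              raise ValueError("target must be in front of both cameras")
          z = f_px * baseline_m / disparity              # depth along the optical axis
          x = (uv_left[0] - cx) * z / f_px
          y = (uv_left[1] - cy) * z / f_px
          return np.array([x, y, z])

      if __name__ == "__main__":
          p = triangulate((700, 300), (640, 300), f_px=1200, baseline_m=0.5,
                          cx=640, cy=360)
          print(p)   # ~[0.5, -0.5, 10.0] metres in the left-camera frame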

  19. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.; Phinney, Dale E.

    1989-01-01

    The ground based demonstration of the extensive extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.

  20. Body-terrain interaction affects large bump traversal of insects and legged robots.

    PubMed

    Gart, Sean W; Li, Chen

    2018-02-02

    Small animals and robots must often rapidly traverse large bump-like obstacles when moving through complex 3D terrains, during which, in addition to leg-ground contact, their body inevitably comes into physical contact with the obstacles. However, we know little about the performance limits of large bump traversal and how body-terrain interaction affects traversal. To address these, we challenged the discoid cockroach and an open-loop six-legged robot to dynamically run into a large bump of varying height to discover the maximal traversal performance, and studied how locomotor modes and traversal performance are affected by body-terrain interaction. Remarkably, during rapid running, both the animal and the robot were capable of dynamically traversing a bump much higher than its hip height (up to 4 times the hip height for the animal and 3 times for the robot, respectively) at traversal speeds typical of running, with decreasing traversal probability with increasing bump height. A stability analysis using a novel locomotion energy landscape model explained why traversal was more likely when the animal or robot approached the bump with a low initial body yaw and a high initial body pitch, and why deflection was more likely otherwise. Inspired by these principles, we demonstrated a novel control strategy of active body pitching that increased the robot's maximal traversable bump height by 75%. Our study is a major step in establishing the framework of locomotion energy landscapes to understand locomotion in complex 3D terrains.

  1. Development of RadRob15, A Robot for Detecting Radioactive Contamination in Nuclear Medicine Departments.

    PubMed

    Shafe, A; Mortazavi, S M J; Joharnia, A; Safaeyan, Gh H

    2016-09-01

    Accidental or intentional release of radioactive materials into the living or working environment may cause radioactive contamination. In nuclear medicine departments, radioactive contamination is usually due to radionuclides which emit high-energy gamma photons and particles. These radionuclides have a broad range of energies and penetration capabilities. Rapid detection of radioactive contamination is very important for efficient removal of the contamination without spreading the radionuclides. A quick scan of the contaminated area helps health physicists locate the contaminated area and assess the level of activity. Studies performed in IR Iran show that in some nuclear medicine departments, areas with relatively high levels of activity can be found. The highest contamination level was detected in corridors which are usually used by patients. To monitor radioactive contamination in nuclear medicine departments, RadRob15, a contamination-detecting robot, was developed in the Ionizing and Non-ionizing Radiation Protection Research Center (INIRPRC). The motor vehicle scanner and the gas radiation detector are the main components of this robot. The detection limit of this robot enables it to detect low levels of radioactive contamination. Our preliminary tests show that RadRob15 can be easily used in nuclear medicine departments as a device for quick surveys which identify the presence or absence of radioactive contamination.

  2. How do walkers avoid a mobile robot crossing their way?

    PubMed

    Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien

    2017-01-01

    Robots and humans have to share the same environment more and more often. To steer robots safely and conveniently among humans, it is necessary to understand how humans interact with them. This work focuses on collision avoidance between a human and a robot during locomotion. Having in mind previous results on human obstacle avoidance, as well as the description of the main principles which guide collision avoidance strategies, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a mobile robot. Our results show differences in the strategy used by humans to avoid a robot in comparison with avoiding another human. Humans prefer to give way to the robot even when they are likely to pass first at the beginning of the interaction. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Two arm robot path planning in a static environment using polytopes and string stretching. Thesis

    NASA Technical Reports Server (NTRS)

    Schima, Francis J., III

    1990-01-01

    The two-arm robot path planning problem has been analyzed and reduced into components to be simplified. This thesis examines one component in which two Puma-560 robot arms simultaneously hold a single object. The problem is to find a path between two points around obstacles which is relatively fast and minimizes the distance. The thesis involves creating a structure on which to build an advanced path planning algorithm which could ideally find the optimum path. An actual path planning method is implemented which is simple though effective in most common situations. Given the limits of computer technology, a 'good' path is currently found. Objects in the workspace are modeled with polytopes. These are used because they allow rapid collision detection while still providing a representation which is adequate for path planning.

  4. Three-dimensional obstacle classification in laser range data

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter; Bers, Karl-Heinz

    1998-10-01

    The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser rangefinders, which are presently being flight tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from wires at over 500 m range (depending on the system) with a high hit-and-detect probability. Despite the efficiency of the sensor, acceptance of current obstacle warning systems by test pilots is not very high, mainly due to the systems' inadequacies in obstacle recognition and visualization. This has motivated the development and testing of more advanced 3D-scene analysis algorithms at FGAN-FIM to replace the obstacle recognition component of current warning systems. The basic ideas are to increase the recognition probability and to reduce the false alarm rate for hard-to-extract obstacles such as wires by using more readily recognizable objects such as terrain, poles, pylons and trees, and by implementing a hierarchical classification procedure to generate a parametric description of the terrain surface as well as the class, position, orientation, size and shape of all objects in the scene. The algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition.

  5. Aviation obstacle auto-extraction using remote sensing information

    NASA Astrophysics Data System (ADS)

    Zimmer, N.; Lugsch, W.; Ravenscroft, D.; Schiefele, J.

    2008-10-01

    An obstacle, in the aviation context, may be any natural or man-made, fixed or movable object, permanent or temporary. Currently, the most common way to detect relevant aviation obstacles from an aircraft or helicopter for navigation purposes and collision avoidance is the use of merged infrared and synthetic obstacle information. Several algorithms have been established to utilize synthetic and infrared images to generate obstacle information. There might be situations, however, where the system is error-prone and may not be able to consistently determine the current environment. This situation can be avoided when the system knows the true position of the obstacle. The quality characteristics of the obstacle data strongly depend on the quality of the source data such as maps and official publications. In some countries, such as newly industrializing and developing countries, obstacle information of sufficient quality and quantity is not available. The aviation world has two specifications - RTCA DO-276A and ICAO ANNEX 15 Ch. 10 - which describe the requirements for aviation obstacles. It is essential to meet these requirements to be compliant with the specifications and to support systems based on them, e.g. 3D obstacle warning systems where accurate coordinates based on WGS-84 are a necessity. Existing and soon-to-exist high-quality aerial and satellite remote sensing data make it feasible to think about automated aviation obstacle data origination. This paper describes the feasibility of auto-extracting aviation obstacles from remote sensing data considering the limitations of image and extraction technologies. Quality parameters and the possible resolution of auto-extracted obstacle data are discussed and presented.

  6. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    NASA Astrophysics Data System (ADS)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being developed these days, with a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object and an ultrasonic sensor was used to detect obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor was used to complement the image processing so that objects are fully detected. The obstacle avoidance system makes program decisions based on the obstacle conditions read by the camera and the ultrasonic sensor. PID controllers based on visual feedback are used to control the drone's movement.
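
    A generic form of the visual-feedback PID loop mentioned above is sketched here. The gains, time step and the crude plant model are invented for illustration and are not the authors' tuned values; the controller simply drives a pixel error toward zero.

      # Hedged sketch (generic PID, not the authors' tuned controller): a visual
      # feedback loop reducing the pixel error between the tracked target and
      # the image centre.
      class PID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, error):
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      if __name__ == "__main__":
          pid = PID(kp=0.004, ki=0.0005, kd=0.001, dt=0.02)
          error_px = 120.0                          # target is 120 px right of centre
          for _ in range(50):                       # crude closed-loop simulation
              command = pid.update(error_px)
              error_px -= 400.0 * command * pid.dt  # assumed plant gain, for illustration
          print(round(error_px, 1), "px residual error")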

  7. Towards a sustainable modular robot system for planetary exploration

    NASA Astrophysics Data System (ADS)

    Hossain, S. G. M.

    This thesis investigates multiple perspectives on developing an unmanned robotic system suited to planetary terrains. In this case, the unmanned system consists of unit-modular robots. This type of robot has the potential to be developed and maintained as a sustainable multi-robot system while located far from direct human intervention. Some characteristics that make this possible are the cooperation, communication and connectivity among the robot modules, the flexibility of individual robot modules, the capability of self-healing in the case of a failed module, and the ability to generate multiple gaits by means of reconfiguration. To demonstrate the effects of the high flexibility of an individual robot module, multiple modules of a four-degree-of-freedom unit-modular robot were developed. The robot was equipped with a novel connector mechanism that made self-healing possible. Design strategies also included the use of series elastic actuators for better robot-terrain interaction. In addition, various locomotion gaits were generated and explored using the robot modules, which is essential for a modular robot system to achieve robustness and thus successfully navigate and function in a planetary environment. To investigate multi-robot task completion, a biomimetic cooperative load transportation algorithm was developed and simulated. Also, a liquid-motion-inspired theory was developed for coordinating a large number of robot modules; this can be used to traverse obstacles that inevitably occur when maneuvering over rough terrain, such as in planetary exploration. Keywords: Modular robot, cooperative robots, biomimetics, planetary exploration, sustainability.

  8. Robot-Assisted Retinal Vein Cannulation with Force-Based Puncture Detection: Micron vs. the Steady-Hand Eye Robot*

    PubMed Central

    Gonenc, Berk; Tran, Nhat; Gehlbach, Peter; Taylor, Russell H.; Iordachita, Iulian

    2018-01-01

    Retinal vein cannulation is a demanding procedure in which therapeutic agents are injected into occluded retinal veins. The feasibility of this treatment is limited due to challenges in identifying the moment of venous puncture, achieving cannulation and maintaining it throughout the drug delivery period. In this study, we integrate a force-sensing microneedle with two distinct robotic systems: the handheld micromanipulator Micron, and the cooperatively controlled Steady-Hand Eye Robot (SHER). The sensed tool-to-tissue interaction forces are used to detect venous puncture and to extend the robots' standard control schemes with a new position holding mode (PHM) that assists the operator in holding the needle position fixed and maintaining cannulation for a longer time with less trauma to the vasculature. We evaluate the resulting systems comparatively in a dry phantom consisting of stretched vinyl membranes. Results have shown that modulating the admittance control gain of SHER alone is not a very effective solution for preventing undesired tool motion after puncture. However, after using puncture detection and PHM, the deviation from the puncture point is significantly reduced, by 65% with Micron and by 95% with SHER, representing a potential advantage over freehand operation for both. PMID:28269417

  9. Bio-inspired vision based robot control using featureless estimations of time-to-contact.

    PubMed

    Zhang, Haijie; Zhao, Jianguo

    2017-01-31

    Marvelous vision-based dynamic behaviors of insects and birds, such as perching, landing, and obstacle avoidance, have inspired scientists to propose the idea of time-to-contact, which is defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since time-to-contact can be estimated directly from consecutive images with only a vision sensor, it is widely used by a variety of robots to fulfill tasks such as obstacle avoidance, docking, chasing, perching and landing. However, most existing methods for estimating time-to-contact need to extract and track features during the control process, which is time-consuming and cannot be applied to robots with limited computation power. In this paper, we adopt a featureless estimation method, extend this method to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error-based controller with a gain scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control are conducted using a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot being controlled to successfully dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.
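
    For context, the classical size-based time-to-contact estimate is tau = s / (ds/dt), where s is the apparent image size of the approached surface. The sketch below implements that textbook variant with made-up numbers; the paper's featureless method works from image brightness rather than tracked sizes and is not reproduced here.

      # Illustrative sketch only: the classical size-based time-to-contact
      # estimate tau = s / (ds/dt). Not the paper's featureless method.
      def time_to_contact(size_prev, size_curr, dt):
          """size_*: apparent object size in pixels in consecutive frames."""
          growth_rate = (size_curr - size_prev) / dt
          if growth_rate <= 0:
              return float("inf")                  # not approaching
          return size_curr / growth_rate

      if __name__ == "__main__":
          # Camera approaching a wall at constant speed: size grows each frame.
          print(round(time_to_contact(100.0, 104.0, dt=0.033), 2), "s to contact")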

  10. Collision-based energetic comparison of rolling and hopping over obstacles

    PubMed Central

    Iida, Fumiya

    2018-01-01

    Locomotion of machines and robots operating in rough terrain is strongly influenced by the mechanics of the ground-machine interactions. A rolling wheel in terrain with obstacles is subject to collisional energy losses, which are governed by mechanics comparable to hopping or walking locomotion. Here we investigate the energetic cost associated with overcoming an obstacle for rolling and hopping locomotion, using a simple mechanics model. The model considers collision-based interactions with the ground and the obstacle, without frictional losses, and we quantify, analyse, and compare the sources of energetic costs for three locomotion strategies. Our results show that the energetic advantages of the locomotion strategies are uniquely defined given the moment of inertia and the Froude number associated with the system. We find that hopping outperforms rolling at larger Froude numbers and vice versa. The analysis is further extended for a comparative study with animals. By applying size and inertial properties through an allometric scaling law of hopping and trotting animals to our models, we found that the conditions at which hopping becomes energetically advantageous to rolling roughly correspond to animals’ preferred gait transition speeds. The energetic collision losses as predicted by the model are largely verified experimentally. PMID:29538459
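
    The Froude number referred to above is Fr = v^2 / (g L), with v the forward speed and L a characteristic length such as hip height or wheel radius. The sketch below simply evaluates this expression for a few made-up speeds; the lengths and speeds are placeholders, not values from the paper.

      # Hedged arithmetic sketch: Froude number Fr = v^2 / (g * L).
      # The length and speeds below are placeholders, not the paper's values.
      G = 9.81  # m/s^2

      def froude(speed_m_s, length_m):
          return speed_m_s ** 2 / (G * length_m)

      if __name__ == "__main__":
          for v in (0.5, 1.5, 3.0):
              print(f"v = {v:.1f} m/s  ->  Fr = {froude(v, 0.3):.2f}")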

  11. Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning.

    PubMed

    Alonso-Martín, Fernando; Gamboa-Montero, Juan José; Castillo, José Carlos; Castro-González, Álvaro; Salichs, Miguel Ángel

    2017-05-16

    An important aspect of Human-Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection), and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective since just a few microphones can cover the whole robot's shell, as a single microphone is enough to cover each solid part of the robot. Besides, it is easy to install and configure, as it just requires a contact surface to attach the microphone to the robot's shell and plug it into the robot's computer. Results show high accuracy scores in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures.

  12. A locust-inspired miniature jumping robot.

    PubMed

    Zaitsev, Valentin; Gvirsman, Omer; Ben Hanan, Uri; Weiss, Avi; Ayali, Amir; Kosa, Gabor

    2015-11-25

    Unmanned ground vehicles are mostly wheeled, tracked, or legged. These locomotion mechanisms have a limited ability to traverse rough terrain and obstacles that are higher than the robot's center of mass. In order to improve the mobility of small robots, it is necessary to expand the variety of their motion gaits. Jumping is one of nature's solutions to the challenge of mobility in difficult terrain. The desert locust is the model for the presented bio-inspired design of a jumping mechanism for a small mobile robot. The basic mechanism is similar to that of the semilunar process in the hind legs of the locust, and is based on the cocking of a torsional spring by wrapping a tendon-like wire around the shaft of a miniature motor. In this study we present the jumping mechanism design, and the manufacturing and performance analysis of two demonstrator prototypes. The most advanced jumping robot demonstrator is power autonomous, weighs 23 g, and is capable of jumping to a height of 3.35 m, covering a distance of 1.37 m.

  13. Robotic vehicle uses acoustic sensors for voice detection and diagnostics

    NASA Astrophysics Data System (ADS)

    Young, Stuart H.; Scanlon, Michael V.

    2000-07-01

    An acoustic sensor array that cues an imaging system on a small tele-operated robotic vehicle was used to detect human voice and activity inside a building. The advantage of acoustic sensors is that they are a non-line-of-sight (NLOS) sensing technology that can augment traditional LOS sensors such as visible and IR cameras. Acoustic energy emitted from a target, such as from a person, weapon, or radio, will travel through walls and smoke, around corners, and down corridors, whereas these obstructions would cripple an imaging detection system. The hardware developed and tested used an array of eight microphones to detect the loudest direction and automatically steer a camera's pan/tilt toward the noise centroid. This type of system has applicability for counter-sniper applications, building clearing, and search/rescue. Data presented are time-frequency representations showing voice detected within rooms and down hallways at various ranges. Another benefit of acoustics is that it provides the tele-operator some situational awareness clues via low-bandwidth transmission of raw audio data, which the operator can interpret either with headphones or through time-frequency analysis. This data can be useful for recognizing familiar sounds that might indicate the presence of personnel, such as talking, equipment, or movement noise. The same array also detects the sounds of the robot it is mounted on, and can be useful for engine diagnostics and troubleshooting, or for monitoring self-noise emanations for stealthy travel. Data presented characterize vehicle self-noise over various surfaces such as tile, carpet, pavement, sidewalk, and grass. Vehicle diagnostic sounds indicate a slipping clutch and repeated unexpected application of the emergency braking mechanism.
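
    As a crude illustration of the cueing step, the sketch below simply picks the loudest element of an assumed eight-microphone ring and converts its index to a bearing; the fielded system localizes a noise centroid rather than a single loudest channel, and the levels and geometry here are invented.

      # Simple sketch (assumed array geometry, not the fielded system): pick the
      # loudest microphone in an 8-element ring and steer the camera toward it.
      def loudest_bearing(rms_levels):
          """rms_levels: 8 RMS amplitudes from microphones spaced every 45 degrees."""
          idx = max(range(len(rms_levels)), key=rms_levels.__getitem__)
          return idx * 360.0 / len(rms_levels)           # bearing in degrees

      if __name__ == "__main__":
          levels = [0.02, 0.03, 0.15, 0.40, 0.22, 0.05, 0.02, 0.01]
          print(f"steer pan/tilt to {loudest_bearing(levels):.0f} degrees")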

  14. Thermal Image Sensing Model for Robotic Planning and Search.

    PubMed

    Castro Jiménez, Lídice E; Martínez-García, Edgar A

    2016-08-08

    This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost home-made IR passive visual sensor. The sensor's capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce an exact nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation control in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations against the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
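
    The two calibration models described (distance as an exponential function of IR intensity, temperature as a polynomial of intensity) can be fitted in a few lines. The sketch below uses synthetic samples and generic model forms, not the sensor's actual calibration data.

      # Sketch under assumptions (synthetic data, generic model forms): fit an
      # exponential distance model and a polynomial temperature model to
      # IR-intensity samples, then combine them.
      import numpy as np
      from scipy.optimize import curve_fit

      def distance_model(intensity, a, b):
          return a * np.exp(-b * intensity)        # distance falls as intensity rises

      if __name__ == "__main__":
          intensity = np.linspace(10, 200, 30)
          distance = 5.0 * np.exp(-0.012 * intensity) + np.random.normal(0, 0.02, 30)
          temperature = 20 + 0.4 * intensity - 5e-4 * intensity**2   # synthetic samples

          (a, b), _ = curve_fit(distance_model, intensity, distance, p0=(4.0, 0.01))
          temp_poly = np.polyfit(intensity, temperature, deg=2)

          i = 120.0
          print(f"estimated distance {distance_model(i, a, b):.2f} m, "
                f"temperature {np.polyval(temp_poly, i):.1f} C")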

  15. Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles

    2016-01-01

    autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.

  16. Novel approaches to helicopter obstacle warning

    NASA Astrophysics Data System (ADS)

    Seidel, Christian; Samuelis, Christian; Wegner, Matthias; Münsterer, Thomas; Rumpf, Thomas; Schwartz, Ingo

    2006-05-01

    EADS Germany is the world market leader in commercial Helicopter Laser Radar (HELLAS) Obstacle Warning Systems. The HELLAS-Warning System was introduced to the market in 2000, is in service with the German Border Control (Bundespolizei) and the Royal Thai Air Force, and has been successfully evaluated by the Foreign Comparative Test Program (FCT) of USSOCOM. Currently the successor system, HELLAS-Awareness, is in development. It will have extended sensor performance, enhanced realtime data processing capabilities and advanced HMI features. We will give an outline of the new sensor unit concerning detection technology and helicopter integration aspects. The system provides a widespread field of view with additional dynamic line-of-sight steering and a large detection range in combination with a high frame rate of 3 Hz. The workflow of the data processing will be presented with a focus on novel filter techniques and obstacle classification methods. As is commonly known, the former are indispensable due to unavoidable statistical measuring errors and solarisation. The amount of information in the filtered raw data is further reduced by ground segmentation. The remaining raised objects are extracted and classified in several stages into different obstacle classes. We will show the prioritization function which orders the obstacles according to their threat potential to the helicopter, taking into account the actual flight dynamics. The priority of an object determines the display and provision of warnings to the pilot. Possible HMI representations include video or FLIR overlay on multifunction displays, audio warnings and visualization of information on helmet-mounted displays and digital maps. Different concepts will be presented.

  17. Robots that can adapt like animals.

    PubMed

    Cully, Antoine; Clune, Jeff; Tarapore, Danesh; Mouret, Jean-Baptiste

    2015-05-28

    Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot 'think outside the box' to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles

  18. Robots that can adapt like animals

    NASA Astrophysics Data System (ADS)

    Cully, Antoine; Clune, Jeff; Tarapore, Danesh; Mouret, Jean-Baptiste

    2015-05-01

    Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot `think outside the box' to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles

  19. Automatic planning of needle placement for robot-assisted percutaneous procedures.

    PubMed

    Belbachir, Esia; Golkar, Ehsan; Bayle, Bernard; Essert, Caroline

    2018-04-18

    Percutaneous procedures allow interventional radiologists to perform diagnoses or treatments guided by an imaging device, typically a computed tomography (CT) scanner with a high spatial resolution. To reduce exposure to radiation and improve accuracy, robotic assistance to needle insertion is considered in the case of X-ray guided procedures. We introduce a planning algorithm that computes a needle placement compatible with both the patient's anatomy and the accessibility of the robot within the scanner gantry. Our preoperative planning approach is based on inverse kinematics, fast collision detection, and bidirectional rapidly exploring random trees coupled with an efficient strategy of node addition. The algorithm computes the allowed needle entry zones over the patient's skin (accessibility map) from 3D models of the patient's anatomy, the environment (CT, bed), and the robot. The result includes the admissible robot joint path to target the prescribed internal point through the entry point. A retrospective study was performed on 16 patient datasets in different conditions: without robot (WR) and with the robot on the left or the right side of the bed (RL/RR). We provide an accessibility map ensuring a collision-free path of the robot and allowing for a needle placement compatible with the patient's anatomy. The result is obtained in an average time of about 1 min, even in difficult cases. The accessibility maps of RL and RR covered about half of the surface of the WR map on average, which offers a variety of options to insert the needle with the robot. We also measured the average distance between the needle and major obstacles such as the vessels and found that RL and RR produced needle placements almost as safe as WR. The introduced planning method helped us prove that it is possible to use such a "general purpose" redundant manipulator equipped with a dedicated tool to perform percutaneous interventions in cluttered spaces like a CT gantry.
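
    The sampling-based planning step can be caricatured in two dimensions. The sketch below is a plain single-tree RRT growing toward a goal around one circular obstacle; the paper's planner is a bidirectional RRT operating in the robot's joint space with inverse kinematics and full 3D collision models, none of which is reproduced here. All coordinates and parameters are invented.

      # Minimal planning sketch (plain single-tree RRT in a 2D toy space; the
      # paper uses bidirectional RRTs with IK and full 3D collision models).
      import math
      import random

      OBSTACLE = ((0.5, 0.5), 0.2)          # centre and radius of one circular obstacle

      def collision_free(p, q, steps=20):
          """Check a straight segment against the toy obstacle."""
          (cx, cy), r = OBSTACLE
          for k in range(steps + 1):
              t = k / steps
              x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
              if math.hypot(x - cx, y - cy) < r:
                  return False
          return True

      def rrt(start, goal, iters=2000, step=0.05, goal_tol=0.05):
          nodes = [start]
          parent = {start: None}
          for _ in range(iters):
              sample = goal if random.random() < 0.1 else (random.random(), random.random())
              near = min(nodes, key=lambda n: math.dist(n, sample))
              d = math.dist(near, sample) or 1e-9
              new = (near[0] + step * (sample[0] - near[0]) / d,
                     near[1] + step * (sample[1] - near[1]) / d)
              if collision_free(near, new):
                  nodes.append(new)
                  parent[new] = near
                  if math.dist(new, goal) < goal_tol:
                      path = [new]
                      while parent[path[-1]] is not None:
                          path.append(parent[path[-1]])
                      return path[::-1]
          return None

      if __name__ == "__main__":
          path = rrt((0.1, 0.1), (0.9, 0.9))
          print("path length:", len(path) if path else "no path found")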

  20. A bio-inspired electrocommunication system for small underwater robots.

    PubMed

    Wang, Wei; Liu, Jindong; Xie, Guangming; Wen, Li; Zhang, Jianwei

    2017-03-29

    Weakly electric fishes (Gymnotid and Mormyrid) use an electric field to communicate efficiently (termed electrocommunication) in the turbid waters of confined spaces where other communication modalities fail. Inspired by this biological phenomenon, we design an artificial electrocommunication system for small underwater robots and explore the capabilities of such an underwater robotic communication system. An analytical model for electrocommunication is derived to predict the effect of the key parameters such as electrode distance and emitter current of the system on the communication performance. According to this model, a low-dissipation, and small-sized electrocommunication system is proposed and integrated into a small robotic fish. We characterize the communication performance of the robot in still water, flowing water, water with obstacles and natural water conditions. The results show that underwater robots are able to communicate electrically at a speed of around 1 k baud within about 3 m with a low power consumption (less than 1 W). In addition, we demonstrate that two leader-follower robots successfully achieve motion synchronization through electrocommunication in the three-dimensional underwater space, indicating that this bio-inspired electrocommunication system is a promising setup for the interaction of small underwater robots.

  1. GOAT (goes over all terrain) vehicle: a scaleable robotic vehicle

    NASA Astrophysics Data System (ADS)

    Dodson, Michael G.; Owsley, Stanley L.; Moorehead, Stewart J.

    2003-09-01

    Many of the potential applications of mobile robots require a small- to medium-sized vehicle that is capable of traversing large obstacles and rugged terrain. Search and rescue operations require a robot small enough to drive through doorways, yet capable enough to surmount rubble piles and stairs. This paper presents the GOAT (Goes Over All Terrain) vehicle, a medium-scale robot that incorporates a novel configuration placing the drive wheels on the ends of actuated arms. This allows GOAT to adjust body height and posture and combines the benefits of legged locomotion with the ease of wheeled driving. The paper presents the design of the GOAT and the results of prototype construction and initial testing.

  2. Adaptive Gait Control for a Quadruped Robot on 3D Path Planning

    NASA Astrophysics Data System (ADS)

    Igarashi, Hiroshi; Kakikura, Masayoshi

    A legged walking robot is able not only to move on irregular terrain but also to change its posture. For example, the robot can pass under overhead obstacles by crouching. The purpose of our research is to realize efficient path planning with a quadruped robot; this mobility allows the path planning to be extended into three dimensions. However, several issues of the quadruped robot, namely instability, workspace limitation, deadlock and slippage, complicate such an application. In order to mitigate these issues and reinforce the mobility, a new static gait pattern for a quadruped robot, called TFG (Trajectory Following Gait), is proposed. The TFG aims to obtain high controllability similar to that of a wheeled robot. Additionally, the TFG allows the robot to change its posture during the walk. In this paper, experimental results show that the TFG mitigates these issues and enables efficient locomotion in a three-dimensional environment.

  3. Towards a model of temporal attention for on-line learning in a mobile robot

    NASA Astrophysics Data System (ADS)

    Marom, Yuval; Hayes, Gillian

    2001-06-01

    We present a simple attention system, capable of bottom-up signal detection adaptive to subjective internal needs. The system is used by a robotic agent, learning to perform phototaxis and obstacle avoidance by following a teacher agent around a simulated environment, and deciding when to form associations between perceived information and imitated actions. We refer to this kind of decision-making as on-line temporal attention. The main role of the attention system is perception of change; the system is regulated through feedback about cognitive effort. We show how different levels of effort affect both the ability to learn a task and the ability to execute it.

  4. Autonomous navigation system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
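
    A minimal Python sketch of the per-iteration velocity adjustment described in this record; the event-horizon formula and the proportionality constants are not specified, so the forms and values below are assumptions.

        # Hedged sketch of one pass through the event-timing loop (constants assumed).
        def adjust_velocities(trans_vel, rot_vel, ranges,
                              horizon_gain=0.5, speed_factor=0.8, max_speed=1.0):
            """Return updated (translational, rotational) velocities."""
            event_horizon = horizon_gain * trans_vel        # horizon scales with current speed (assumed form)
            nearest = min(ranges)                           # range to the nearest detected obstacle
            if nearest < event_horizon:                     # event-horizon intrusion
                scale = nearest / max(event_horizon, 1e-6)  # closer obstacle -> stronger reduction
                rot_vel *= scale
                trans_vel *= scale
            else:                                           # no intrusion: cruise speed
                trans_vel = speed_factor / max_speed        # ratio of speed factor to maximum speed
            return trans_vel, rot_vel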

  5. Convergent method of and apparatus for distributed control of robotic systems using fuzzy logic

    DOEpatents

    Feddema, John T.; Driessen, Brian J.; Kwok, Kwan S.

    2002-01-01

    A decentralized fuzzy logic control system for one vehicle or for multiple robotic vehicles provides a way to control each vehicle to converge on a goal without collisions between vehicles or collisions with other obstacles, in the presence of noisy input measurements and a limited amount of computing power and memory on board each robotic vehicle. The fuzzy controller demonstrates improved robustness to noise relative to an exact controller.

  6. A representation for error detection and recovery in robot task plans

    NASA Technical Reports Server (NTRS)

    Lyons, D. M.; Vijaykumar, R.; Venkataraman, S. T.

    1990-01-01

    A general definition is given of the problem of error detection and recovery in robot assembly systems, and a general representation is developed for dealing with the problem. This invariant representation involves a monitoring process which is concurrent, with one monitor per task plan. A plan hierarchy is discussed, showing how diagnosis and recovery can be handled using the representation.

  7. New spatial clustering-based models for optimal urban facility location considering geographical obstacles

    NASA Astrophysics Data System (ADS)

    Javadi, Maryam; Shahrabi, Jamal

    2014-03-01

    The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or government to optimally locate its resources and facilities and to efficiently manage resources so that all demand points are covered and all needs are met. Most of the recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models are presented for locating urban facilities while considering geographical obstacles. In these models, three new distance functions are proposed. The first function is based on the analysis of the shortest path in a linear network and is called the SPD function. The other two functions, namely PD and P2D, are based on algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the Visual Basic programming language and were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distances from demand points to their corresponding facilities. Because the distances between demand points and facilities become more realistic under the proposed functions, the results indicate the desired quality of the proposed models in terms of the quality of allocating points to centers and logistic cost. Obtained results show promising

  8. Hyperspectral Imaging and Obstacle Detection for Robotics Navigation

    DTIC Science & Technology

    2005-09-01

    [Record excerpt only: fragments of the report's technical specifications (a Brimrose AOTF video adapter with a TeO2 crystal) and of a table of hyperspectral samples (e.g., glass, pick-up truck body panels) with their pixel counts.]

  9. 3D change detection in staggered voxels model for robotic sensing and navigation

    NASA Astrophysics Data System (ADS)

    Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.

    2016-05-01

    3D scene change detection is a challenging problem in robotic sensing and navigation. There are several unpredictable aspects in performing scene change detection. A change detection method that can support various applications under varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information. Change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while preserving edge details in the depth image. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm. Surface normals are used in a preliminary stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model that labels each voxel as surface or free space. We then create a color model that assigns each occupied voxel the mean color of all points within it. Preliminary changes are detected by an XOR subtraction on the point voxel models. Next, the eight neighbors of each center voxel are examined: if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram of location and hue-channel color is estimated. The experimental evaluations performed to assess the capability of our algorithm show promising results for change detection, indicating all changing objects with a very limited false alarm rate.
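
    A minimal sketch of the voxel-level XOR comparison between a reference model and the current robot-view model; the voxel size is an assumed value, and the neighbourhood/colour-histogram refinement described above is omitted, so this is not the authors' exact pipeline.

        # Voxel-level change detection by XOR (symmetric difference) of occupancy sets.
        import numpy as np

        def voxelize(points, voxel_size=0.05):
            """Map an (N, 3) point cloud to a set of occupied voxel indices."""
            return set(map(tuple, np.floor(points / voxel_size).astype(int)))

        def changed_voxels(reference_points, current_points, voxel_size=0.05):
            ref = voxelize(reference_points, voxel_size)
            cur = voxelize(current_points, voxel_size)
            # Voxels occupied in exactly one of the two models are flagged as changed.
            return ref ^ cur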

  10. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center's Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include: rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of "dull, dirty or dangerous" tasks allowing them more time to perform their important science and exploration missions. Wearable robotics one of the Center's newest development areas can provide crew with low mass exercise capability and also augment an astronaut's strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in-orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles, and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space: Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space, and on Mars. The Center's Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force measuring shoes, and grasp assist gloves.

  11. Design of a Micro Cable Tunnel Inspection Robot

    NASA Astrophysics Data System (ADS)

    Song, Wei; Liu, Lei; Zhou, Xiaolong; Wang, Chengjiang

    2016-11-01

    As the ventilation system in a cable tunnel is imperfect and the environment is closed, toxic and harmful gases easily accumulate, posing a serious threat to the life and safety of inspection staff. Therefore, a micro cable tunnel inspection robot is designed. The overall design plan mainly includes two parts: mechanical structure design and control system design. According to the functional requirements of the tunnel inspection robot, a crawler-type wheel-arm structure is proposed. Sensors are used to collect temperature, gas and image data and to transmit the information to the host computer in real time. The results show that the robot with the crawler wheel-arm structure has the advantages of small volume, quick action and a high performance-price ratio. Besides, it has high obstacle crossing and avoidance ability and can adapt to a variety of complex cable tunnel environments.

  12. Fuzzy logic based robotic controller

    NASA Technical Reports Server (NTRS)

    Attia, F.; Upadhyaya, M.

    1994-01-01

    Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to the lack of proper linearization of these effects, modern control theory based on state space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially when mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from software implementation are also discussed.
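
    A small illustrative fuzzy inference step showing how end-point error and obstacle proximity can be fuzzified and combined into a joint-speed command; the membership ranges, rule base and output singletons are assumed values, not the ones used in the paper.

        # Illustrative fuzzy rule evaluation for one joint-speed command (values assumed).
        def ramp(x, a, b):
            """Membership that rises linearly from 0 at a to 1 at b."""
            return min(max((x - a) / (b - a), 0.0), 1.0)

        def joint_speed(endpoint_error, obstacle_distance):
            # Fuzzify the inputs (units: metres).
            error_large = ramp(endpoint_error, 0.02, 0.20)
            error_small = 1.0 - error_large
            obstacle_far = ramp(obstacle_distance, 0.05, 0.30)
            obstacle_near = 1.0 - obstacle_far
            # Rule base: move fast only if the error is large and no obstacle is near.
            fast = min(error_large, obstacle_far)
            slow = max(error_small, obstacle_near)
            # Weighted-average defuzzification over two singleton speeds (rad/s).
            return (0.5 * fast + 0.05 * slow) / max(fast + slow, 1e-6)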

  13. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. If the L1 or L-infinity norm is used to represent distance instead of the Euclidean norm, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
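
    As a sketch of why replacing the Euclidean norm with the L1 norm turns the minimum-distance computation into a linear program (the notation below is generic, not the paper's): with the robot and obstacle polyhedra described by linear inequalities, the distance problem becomes

        \min_{x,\,y,\,t}\ \sum_{k=1}^{3} t_k
        \quad \text{subject to} \quad
        -t_k \le x_k - y_k \le t_k \ (k = 1, 2, 3), \qquad
        A_R x \le b_R, \qquad A_O y \le b_O

    whose optimum equals the minimum of ||x - y||_1 over points x in the robot and y in the obstacle; the L-infinity case is analogous with a single bound variable shared across the three coordinates.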

  14. Obstacle detectors for automated transit vehicles: A technoeconomic and market analysis

    NASA Technical Reports Server (NTRS)

    Lockerby, C. E.

    1979-01-01

    A search was conducted to identify the technical and economic characteristics of both NASA and non-NASA obstacle detectors. The findings, along with market information, were compiled and analyzed for consideration by DOT and NASA in decisions about any future automated transit vehicle obstacle detector research, development, or applications project. Currently available obstacle detectors and systems under development are identified by type (sonic, capacitance, infrared/optical, guided radar, and probe contact) and compared with the three NASA devices selected as possible improvements or solutions to the problems in existing obstacle detection systems. Cost analyses and market forecasts for the AGT and AMTV markets individually are included.

  15. Mobility of lightweight robots over snow

    NASA Astrophysics Data System (ADS)

    Lever, James H.; Shoop, Sally A.

    2006-05-01

    Snowfields are challenging terrain for lightweight (<50 kg) unmanned ground vehicles. Deep sinkage, high snow-compaction resistance, traction loss while turning and ingestion of snow into the drive train can cause immobility within a few meters of travel. However, for suitably designed vehicles, deep snow offers a smooth, uniform surface that can obliterate obstacles. Key requirements for good over-snow mobility are low ground pressure, large clearance relative to vehicle size and a drive system that tolerates cohesive snow. A small robot will invariably encounter deep snow relative to its ground clearance. Because a single snowstorm can easily deposit 30 cm of fresh snow, robots with ground clearance less than about 10 cm must travel over the snow rather than gain support from the underlying ground. This can be accomplished using low-pressure tracks (< 1.5 kPa). Even so, snow-compaction resistance can exceed 20% of vehicle weight. Also, despite relatively high traction coefficients for low track pressures, differential or skid steering is difficult because the outboard track can easily break traction as the vehicle attempts to turn against the snow. Short track lengths (relative to track separation) or coupled articulated robots offer steering solutions for deep snow. This paper presents preliminary guidance for designing lightweight robots for good mobility over snow based on mobility theory and tests of PackBot, Talon and SnoBot, a custom-designed research robot. Because many other considerations constrain robot designs, this guidance can help with development of winterization kits to improve the over-snow performance of existing robots.

  16. Curb Mounting, Vertical Mobility, and Inverted Mobility on Rough Surfaces Using Microspine-Enabled Robots

    NASA Technical Reports Server (NTRS)

    Parness, Aaron

    2012-01-01

    Three robots that extend microspine technology to enable advanced mobility are presented. First, the Durable Reconnaissance and Observation Platform (DROP) and the ReconRobotics Scout platform use a new rotary configuration of microspines to provide improved soldier-portable reconnaissance by moving rapidly over curbs and obstacles, transitioning from horizontal to vertical surfaces, climbing rough walls and surviving impacts. Next, the four-legged LEMUR robot uses new configurations of opposed microspines to anchor to both manmade and natural rough surfaces. Using these anchors as feet enables mobility in unstructured environments, from urban disaster areas to deserts and caves.

  17. Virtual local target method for avoiding local minimum in potential field based robot navigation.

    PubMed

    Zou, Xi-Yong; Zhu, Jing

    2003-01-01

    A novel robot navigation algorithm with global path generation capability is presented. Local minima are among the most intractable yet most frequently encountered problems in potential field based robot navigation. By appropriately appointing virtual local targets along the journey, the problem can be solved effectively. The key concepts employed in this algorithm are the rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed to replace the global goal temporarily according to the rules. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results showed that it is very effective in complex obstacle environments.
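
    A minimal 2D sketch of the idea, assuming a standard attractive/repulsive potential field; the local-minimum test and the sideways placement of the virtual local target below are simple assumptions, not the paper's specific appointment rules.

        # Potential-field navigation with a virtual local target (illustrative only).
        import numpy as np

        def potential_grad(p, target, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
            grad = k_att * (p - target)                              # attractive term
            for obs in obstacles:
                d = np.linalg.norm(p - obs)
                if 1e-6 < d < rho0:                                  # repulsive term inside influence radius
                    grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**3 * (p - obs)
            return grad

        def navigate(start, goal, obstacles, step=0.05, max_iter=2000):
            p, goal = np.asarray(start, float), np.asarray(goal, float)
            target, path = goal.copy(), [p.copy()]
            for _ in range(max_iter):
                if np.linalg.norm(p - goal) < 0.1:
                    break                                            # global goal reached
                if not np.array_equal(target, goal) and np.linalg.norm(p - target) < 0.1:
                    target = goal.copy()                             # virtual target reached: head for the goal again
                g = potential_grad(p, target, obstacles)
                if np.linalg.norm(g) < 1e-3:
                    # Danger of a local minimum: appoint a virtual local target off to the side.
                    to_goal = (goal - p) / (np.linalg.norm(goal - p) + 1e-9)
                    target = p + 1.5 * np.array([-to_goal[1], to_goal[0]])
                    continue
                p = p - step * g
                path.append(p.copy())
            return path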

  18. 3D Printed Wearable Sensors with Liquid Metals for the Pose Detection of Snakelike Soft Robots.

    PubMed

    Zhou, Luyu; Gao, Qing; Zhan, Jun-Fu; Xie, Chao-Qi; Fu, Jianzhong; He, Yong

    2018-06-18

    Liquid metal-based flexible sensors, which utilize an advanced liquid conductive material as the sensing element, are emerging as a promising solution for measuring large deformations. Nowadays, one of the biggest challenges for precise control of soft robots is the detection of their real-time positions. Existing fabrication methods are unable to fabricate flexible sensors that match the shape of soft robots. In this report, we first describe a novel 3D printed multi-function inductance-based flexible and stretchable sensor with liquid metals (LMs), which is capable of measuring both axial tension and curvature. This sensor is fabricated with a developed coaxial liquid metal 3D printer by co-printing of silicone rubber and LMs. Due to its solenoid shape, this sensor can be easily installed on snakelike soft robots and can accurately distinguish different degrees of tensile and bending deformation. We determined the structural parameters of the sensor and proved its excellent stability and reliability. As a demonstration, we used this sensor to measure the curvature of a finger and to feed back the position of an endoscope, a typical snakelike structure. Because its bending deformation is consistent with the actual working state of the soft robot, and because of its unique shape, this sensor has good prospects for practical application in pose detection.

  19. Using sensor habituation in mobile robots to reduce oscillatory movements in narrow corridors.

    PubMed

    Chang, Carolina

    2005-11-01

    Habituation is a form of nonassociative learning observed in a variety of species of animals. Arguably, it is the simplest form of learning. Nonetheless, the ability to habituate to certain stimuli implies plastic neural systems and adaptive behaviors. This paper describes how computational models of habituation can be applied to real robots. In particular, we discuss the problem of the oscillatory movements observed when a Khepera robot navigates through narrow hallways using a biologically inspired neurocontroller. Results show that habituation to the proximity of the walls can lead to smoother navigation. Habituation to sensory stimulation to the sides of the robot does not interfere with the robot's ability to turn at dead ends and to avoid obstacles outside the hallway. This paper shows that simple biological mechanisms of learning can be adapted to achieve better performance in real mobile robots.

  20. Vision Sensor-Based Road Detection for Field Robot Navigation

    PubMed Central

    Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen

    2015-01-01

    Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514

  1. Generating human-like movements on an anthropomorphic robot using an interior point method

    NASA Astrophysics Data System (ADS)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

    In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model in order to address the generation of more complex movement sequences which are challenged by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.

  2. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.
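
    A simplified fusion sketch, assuming the laser scanner and the Kinect depth data have already been registered into a common vehicle frame; the grid size, ground height and obstacle threshold are illustrative, and this is not the authors' feature-extraction and classification pipeline.

        # Pool registered points from both sensors into a ground-plane grid and
        # label each cell as road or obstacle from its height statistics.
        import numpy as np

        def classify_cells(laser_pts, kinect_pts, cell=0.2, ground_z=0.0, h_max=0.15):
            """laser_pts, kinect_pts: (N, 3) arrays in a common vehicle frame."""
            pts = np.vstack([laser_pts, kinect_pts])      # fusion by pooling the two point sources
            heights = {}
            for x, y, z in pts:
                key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
                heights.setdefault(key, []).append(z)
            return {key: ('obstacle' if max(zs) - ground_z > h_max else 'road')
                    for key, zs in heights.items()}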

  3. Concurrent planning and execution for a walking robot

    NASA Astrophysics Data System (ADS)

    Simmons, Reid

    1990-07-01

    The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance was increased in excess of 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.

  4. Flight data acquisition methodology for validation of passive ranging algorithms for obstacle avoidance

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1990-01-01

    The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.

  5. A Real-Time Reaction Obstacle Avoidance Algorithm for Autonomous Underwater Vehicles in Unknown Environments.

    PubMed

    Yan, Zheping; Li, Jiyun; Zhang, Gengshi; Wu, Yi

    2018-02-02

    A novel real-time reaction obstacle avoidance algorithm (RRA) is proposed for autonomous underwater vehicles (AUVs) that must adapt to unknown complex terrains, based on forward looking sonar (FLS). To accomplish this algorithm, obstacle avoidance rules are planned and the RRA process is split into five steps so that AUVs can rapidly respond to various environmental obstacles. The largest polar angle algorithm (LPAA) is designed to convert a detected obstacle's irregular outline into a convex polygon, which simplifies the obstacle avoidance process. A solution based on an outline memory algorithm is designed to solve the trapping problem that arises when avoiding U-shaped obstacles. Finally, simulations in three unknown obstacle scenes are carried out to demonstrate the performance of this algorithm; the obtained obstacle avoidance trajectories are safe, smooth and near-optimal.
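
    The LPAA itself is not detailed in the record; as a stand-in with the same effect, a standard monotone-chain convex hull turns the irregular set of sonar outline points into a convex polygon.

        # Stand-in for the LPAA step: convex polygon enclosing the detected outline points.
        def convex_outline(points):
            """points: list of (x, y) tuples; returns hull vertices in counter-clockwise order."""
            pts = sorted(set(points))
            if len(pts) <= 2:
                return pts
            def cross(o, a, b):
                return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
            lower, upper = [], []
            for p in pts:                                 # build the lower hull
                while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                    lower.pop()
                lower.append(p)
            for p in reversed(pts):                       # build the upper hull
                while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                    upper.pop()
                upper.append(p)
            return lower[:-1] + upper[:-1]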

  6. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center’s Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include: rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of “dull, dirty or dangerous” tasks allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center’s newest development areas, can provide crew with low mass exercise capability and also augment an astronaut’s strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in-orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles, and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space: Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space, and on Mars. The Center’s Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force measuring shoes, and grasp assist gloves.

  7. IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.

    PubMed

    Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter

    2017-01-01

    IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table and the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a robotic platform, programmed by a smartphone, that can navigate around a typical home or care environment, avoiding obstacles and positioning itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance and object-based sonar navigation.

  8. A novel traveling wave piezoelectric actuated tracked mobile robot utilizing friction effect

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Shu, Chengyou; Jin, Jiamei; Zhang, Jianhui

    2017-03-01

    A novel traveling wave piezoelectric-actuated tracked mobile robot with potential application to robotic rovers was proposed and investigated in this study. The proposed tracked mobile robot is composed of a parallelogram-frame-structure piezoelectric transducer with four rings and a metal track. Utilizing the converse piezoelectric and friction effects, traveling waves are propagated in the rings and the metal track is then actuated by the piezoelectric transducer. Compared with traditional tracked mechanisms, the proposed tracked mobile robot has a simpler and more compact structure without lubricant, which eliminates the problem of lubricant volatilization and deflation; thus, it can be operated in a vacuum environment. Dynamic characteristics were simulated and measured to reveal the mechanism by which the piezoelectric transducer actuates the track. Experimental investigations of the traveling wave piezoelectric-actuated tracked mobile robot were then carried out, and the results indicated that the robot prototype, driven by a pair of exciting voltages of 460 Vpp, achieves a maximum velocity of 57 mm/s moving on a foam plate and can cross obstacles up to 27 mm in height. The proposed tracked mobile robot shows potential as the driving system of robotic rovers.

  9. Speed control for a mobile robot

    NASA Astrophysics Data System (ADS)

    Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. The speed control of the traction motor is essential for safe operation of a mobile robot. The challenges of autonomous operation of a vehicle require safe, runaway-free and collision-free operation. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller. The traction motor is controlled via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outside course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric drive mobile vehicle.

  10. Time-of-flight-assisted Kinect camera-based people detection for intuitive human robot cooperation in the surgical operating room.

    PubMed

    Beyl, Tim; Nicolai, Philip; Comparetti, Mirko D; Raczkowsky, Jörg; De Momi, Elena; Wörn, Heinz

    2016-07-01

    Scene supervision is a major tool for making medical robots safer and more intuitive. The paper shows an approach to efficiently use 3D cameras within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate. A camera system composed of multiple Kinect and time-of-flight cameras has been designed, implemented and calibrated. Calibration and object detection as well as people tracking methods have been designed and evaluated. The camera system shows a good registration accuracy of 0.05 m. The tracking of humans is reliable and accurate and has been evaluated in an experimental setup using operating clothing. The robot detection shows an error of around 0.04 m. The robustness and accuracy of the approach allow for integration into the modern operating room. The data output can be used directly for situation and workflow detection as well as collision avoidance.

  11. Cosine Kuramoto Based Distribution of a Convoy with Limit-Cycle Obstacle Avoidance Through the Use of Simulated Agents

    NASA Astrophysics Data System (ADS)

    Howerton, William

    This thesis presents a method for the integration of complex network control algorithms with localized agent specific algorithms for maneuvering and obstacle avoidance. This method allows for successful implementation of group and agent specific behaviors. It has proven to be robust and will work for a variety of vehicle platforms. Initially, a review and implementation of two specific algorithms will be detailed. The first, a modified Kuramoto model was developed by Xu [1] which utilizes tools from graph theory to efficiently perform the task of distributing agents. The second algorithm developed by Kim [2] is an effective method for wheeled robots to avoid local obstacles using a limit-cycle navigation method. The results of implementing these methods on a test-bed of wheeled robots will be presented. Control issues related to outside disturbances not anticipated in the original theory are then discussed. A novel method of using simulated agents to separate the task of distributing agents from agent specific velocity and heading commands has been developed and implemented to address these issues. This new method can be used to combine various behaviors and is not limited to a specific control algorithm.
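
    A rough phase-coupling sketch of the distribution idea; the cosine-coupled Kuramoto variant from [1] is not reproduced here, and the repulsive (negative) coupling gain used below is an assumption that merely spreads agent phases (positions along a closed route) apart.

        # Phase-coupled agent distribution via a Kuramoto-type update (illustrative only).
        import numpy as np

        def kuramoto_step(theta, omega=0.0, K=-0.5, dt=0.05):
            """theta: (N,) array of agent phases; negative K pushes the phases apart."""
            diff = theta[None, :] - theta[:, None]        # pairwise phase differences
            dtheta = omega + (K / len(theta)) * np.sin(diff).sum(axis=1)
            return (theta + dt * dtheta) % (2 * np.pi)

        # Example: five agents starting close together gradually spread out.
        phases = np.full(5, 0.1) + 0.01 * np.arange(5)
        for _ in range(2000):
            phases = kuramoto_step(phases)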

  12. Controlling Herds of Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2006-01-01

    A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.

  13. Control of humanoid robot via motion-onset visual evoked potentials

    PubMed Central

    Li, Wei; Li, Mengfan; Zhao, Jing

    2015-01-01

    This paper investigates controlling humanoid robot behavior via motion-onset specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar through robot images intuitively representing robot behaviors to be controlled with mind. We present the individual impact of each subject on N200 potentials and discuss how to deal with individuality to obtain a high accuracy. The study results document the off-line average accuracy of 93% for hitting targets across over five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code people's mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle and picking-up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task. PMID:25620918

  14. Concrete bridge deck early problem detection and mitigation using robotics

    NASA Astrophysics Data System (ADS)

    Gucunski, Nenad; Yi, Jingang; Basily, Basily; Duong, Trung; Kim, Jinyoung; Balaguru, Perumalsamy; Parvardeh, Hooman; Maher, Ali; Najm, Husam

    2015-04-01

    More economical management of bridges can be achieved through early problem detection and mitigation. The paper describes development and implementation of two fully automated (robotic) systems for nondestructive evaluation (NDE) and minimally invasive rehabilitation of concrete bridge decks. The NDE system named RABIT was developed with the support from Federal Highway Administration (FHWA). It implements multiple NDE technologies, namely: electrical resistivity (ER), impact echo (IE), ground-penetrating radar (GPR), and ultrasonic surface waves (USW). In addition, the system utilizes advanced vision to substitute traditional visual inspection. The RABIT system collects data at significantly higher speeds than it is done using traditional NDE equipment. The associated platform for the enhanced interpretation of condition assessment in concrete bridge decks utilizes data integration, fusion, and deterioration and defect visualization. The interpretation and visualization platform specifically addresses data integration and fusion from the four NDE technologies. The data visualization platform facilitates an intuitive presentation of the main deterioration due to: corrosion, delamination, and concrete degradation, by integrating NDE survey results and high resolution deck surface imaging. The rehabilitation robotic system was developed with the support from National Institute of Standards and Technology-Technology Innovation Program (NIST-TIP). The system utilizes advanced robotics and novel materials to repair problems in concrete decks, primarily early stage delamination and internal cracking, using a minimally invasive approach. Since both systems use global positioning systems for navigation, some of the current efforts concentrate on their coordination for the most effective joint evaluation and rehabilitation.

  15. Control of a robot dinosaur

    PubMed Central

    Papantoniou, V.

    1999-01-01

    The Palaiomation Consortium, supported by the European Commission, is building a robot Iguanodon atherfieldensis for museum display that is much more sophisticated than existing animatronic exhibits. The current half-size (2.5 m) prototype is fully autonomous, carrying its own computer and batteries. It walks around the room, choosing its own path and avoiding obstacles. A bigger version with a larger repertoire of behaviours is planned. Many design problems have had to be overcome. A real dinosaur would have had hundreds of muscles, and we have had to devise means of achieving life-like movement with a much smaller number of motors; we have limited ourselves to 20, to keep the control problems manageable. Realistic stance requires a narrower trackway and a higher centre of mass than in previous (often spider-like) legged robots, making it more difficult to maintain stability. Other important differences from previous walking robots are that the forelegs have to be shorter than the hind, and the machinery has had to be designed to fit inside a realistically shaped body shell. Battery life is about one hour, but to achieve this we have had to design the robot to have very low power consumption. Currently, this limits it to unrealistically slow movement. The control system includes a high-level instructions processor, a gait generator, a motion-coordination generator, and a kinematic model.

  16. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD-cameras mounted on a pan and tilt platform, the position of the craft can be determined as well as the relative position to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g. due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During the landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real-time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer generated imagery. Results from real-time simulation runs are given.

  17. Passive appendages improve the maneuverability of fish-like robots

    NASA Astrophysics Data System (ADS)

    Pollard, Beau; Tallapragada, Phanindra

    2017-11-01

    It is known that the passive mechanics of fish appendages play a role in the high efficiency of their swimming. A well-known example of this is the experimental demonstration that a dead fish could swim upstream. However, little is known about the role, if any, of passive deformations of a fish-like body in aiding its maneuverability. Part of the difficulty in investigating this lies in clearly separating the role of actuated body deformations from that of passive deformations in response to the fluid-structure interaction. In this paper we compare the maneuverability of several fish-shaped robotic models that possess varying numbers of passive appendages with a fish-shaped robot that has no appendages. All the robots are propelled by the oscillations of an internal momentum wheel, thereby eliminating any active deformations of the body. Our experiments clearly reveal the significant improvement in maneuverability of robots with passive appendages. In the broader context of swimming robots, our experiments show that passive mechanisms could provide mechanical feedback that helps maneuverability and obstacle avoidance along with propulsive efficiency. This work was partly supported by a Grant from the NSF CMMI 1563315.

  18. Detection of Water Hazards for Autonomous Robotic Vehicles

    NASA Technical Reports Server (NTRS)

    Matthes, Larry; Belluta, Paolo; McHenry, Michael

    2006-01-01

    Four methods of detection of bodies of water are under development as means to enable autonomous robotic ground vehicles to avoid water hazards when traversing off-road terrain. The methods involve processing of digitized outputs of optoelectronic sensors aboard the vehicles. It is planned to implement these methods in hardware and software that would operate in conjunction with the hardware and software for navigation and for avoidance of solid terrain obstacles and hazards. The first method, intended for use during the day, is based on the observation that, under most off-road conditions, reflections of sky from water are easily discriminated from the adjacent terrain by their color and brightness, regardless of the weather and of the state of surface waves on the water. Accordingly, this method involves collection of color imagery by a video camera and processing of the image data by an algorithm that classifies each pixel as soil, water, or vegetation according to its color and brightness values (see figure). Among the issues that arise is the fact that in the presence of reflections of objects on the opposite shore, it is difficult to distinguish water by color and brightness alone. Another issue is that once a body of water has been identified by means of color and brightness, its boundary must be mapped for use in navigation. Techniques for addressing these issues are under investigation. The second method, which is not limited by time of day, is based on the observation that ladar returns from bodies of water are usually too weak to be detected. In this method, ladar scans of the terrain are analyzed for returns and the absence thereof. In appropriate regions, the presence of water can be inferred from the absence of returns. Under some conditions in which reflections from the bottom are detectable, ladar returns could, in principle, be used to determine depth. The third method involves the recognition of bodies of water as dark areas in short
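
    An illustrative per-pixel classifier for the first (daytime colour/brightness) method; the HSV colour space and the threshold values below are assumptions for the sketch, not the values used by the authors.

        # Label each pixel as soil, vegetation or water from colour and brightness.
        import numpy as np
        import cv2

        def classify_pixels(bgr_image):
            """Return a label image: 0 = soil, 1 = vegetation, 2 = water (sky reflection)."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
            labels = np.zeros(h.shape, dtype=np.uint8)          # default class: soil
            labels[(h > 35) & (h < 85) & (s > 60)] = 1          # greenish and saturated -> vegetation
            labels[(h > 90) & (h < 130) & (v > 150)] = 2        # bluish and bright -> reflected sky / water
            return labels

    In practice the water class would still need the boundary-mapping and shoreline-reflection handling discussed above before it could be used for navigation.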

  19. Application of particle swarm optimization in path planning of mobile robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Cai, Feng; Wang, Ying

    2017-08-01

    In order to realize optimal path planning for a mobile robot in an unknown environment, a particle swarm optimization algorithm using path length as the fitness function is proposed. The location of the globally optimal particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particles to the target position. The process of moving to the target point is simulated in MATLAB R2014a. Compared with the standard particle swarm optimization algorithm, the simulation results show that this method can effectively avoid all obstacles and obtain the optimal path.
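
    A minimal particle swarm optimization sketch with path length as the fitness function; the waypoint parameterization, obstacle penalty and PSO coefficients are illustrative assumptions rather than the paper's settings.

        # PSO over sets of intermediate waypoints; fitness = path length + obstacle penalty.
        import numpy as np

        def path_length(waypoints, start, goal):
            pts = np.vstack([start, waypoints, goal])
            return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

        def fitness(waypoints, start, goal, obstacles, clearance=0.5):
            penalty = sum(1e3 for w in waypoints for o in obstacles
                          if np.linalg.norm(w - o) < clearance)   # keep waypoints clear of obstacles
            return path_length(waypoints, start, goal) + penalty

        def pso_plan(start, goal, obstacles, n_particles=30, n_waypoints=5, iters=200,
                     w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 10.0)):
            rng = np.random.default_rng(0)
            shape = (n_particles, n_waypoints, 2)
            x = rng.uniform(*bounds, size=shape)           # each particle is a set of candidate waypoints
            v = np.zeros(shape)
            pbest = x.copy()
            pbest_f = np.array([fitness(p, start, goal, obstacles) for p in x])
            gbest = pbest[np.argmin(pbest_f)].copy()
            for _ in range(iters):
                r1, r2 = rng.random(shape), rng.random(shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, *bounds)
                f = np.array([fitness(p, start, goal, obstacles) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return np.vstack([start, gbest, goal])         # best path found

        # Example: path = pso_plan(np.zeros(2), np.array([9.0, 9.0]), [np.array([5.0, 5.0])])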

  20. Scalability of Robotic Controllers: Effects of Progressive Autonomy on Intelligence, Surveillance, and Reconnaissance Robotic Tasks

    DTIC Science & Technology

    2012-09-01

    The semi-autonomous mode was preferred over the teleoperated mode for multitasking, maintaining situational awareness, and avoiding obstacles. [The remainder of the record excerpt consists of fragmented survey-table data and is omitted.]

  1. Optimal motion planning for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the minimum distance between the robot and the object is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. A perturbation control type of approach is used to update the optimal plan. Simulation results verify the value of the proposed strategy.

  2. Report on First International Workshop on Robotic Surgery in Thoracic Oncology.

    PubMed

    Veronesi, Giulia; Cerfolio, Robert; Cingolani, Roberto; Rueckert, Jens C; Soler, Luc; Toker, Alper; Cariboni, Umberto; Bottoni, Edoardo; Fumagalli, Uberto; Melfi, Franca; Milli, Carlo; Novellis, Pierluigi; Voulaz, Emanuele; Alloisio, Marco

    2016-01-01

    A workshop of experts from France, Germany, Italy, and the United States took place at Humanitas Research Hospital, Milan, Italy, on February 10 and 11, 2016, to examine techniques for and applications of robotic surgery to thoracic oncology. The main topics of presentation and discussion were robotic surgery for lung resection; robot-assisted thymectomy; minimally invasive surgery for esophageal cancer; new developments in computer-assisted surgery and medical applications of robots; the challenge of costs; and future clinical research in robotic thoracic surgery. The following article summarizes the main contributions to the workshop. The workshop consensus was that, since video-assisted thoracoscopic surgery (VATS) is becoming the mainstream approach to resectable lung cancer in North America and Europe, robotic surgery for thoracic oncology is likely to be embraced by an increasing number of thoracic surgeons, since it has technical advantages over VATS, including intuitive movements, tremor filtration, more degrees of manipulative freedom, motion scaling, and high-definition stereoscopic vision. These advantages may make robotic surgery more accessible than VATS to trainees and experienced surgeons and may also lead to expanded indications. However, the high costs of robotic surgery and the absence of tactile feedback remain obstacles to widespread dissemination. A prospective multicentric randomized trial (NCT02804893) comparing robotic and VATS approaches to stages I and II lung cancer will start shortly.

  3. Development of a mobile robot for the 1995 AUVS competition

    NASA Astrophysics Data System (ADS)

    Matthews, Bradley O.; Ruthemeyer, Michael A.; Perdue, David; Hall, Ernest L.

    1995-12-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable, independent system in which even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data are collected through a commercial tracking device, which communicates the X,Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results by showing that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.

  4. Ultrasonic detection technology based on joint robot on composite component with complex surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Juan; Xu, Chunguang; Zhang, Lan

    Some components have complex surfaces, such as airplane wings and the shells of pressure vessels. The quality of these components determines the reliability and safety of related equipment. Ultrasonic nondestructive detection is one of the main methods currently used for testing material defects. In order to improve the testing precision, the acoustic axis of the ultrasonic transducer should be consistent with the normal direction of the measured points. When joint robots are used, automatic ultrasonic scanning along the component surface normal direction can be realized by motion trajectory planning and coordinate transformation. In order to express the defects accurately and truly, the robot position and the signal of the ultrasonic transducer should be synchronized.

  5. Prevention of Unwanted Free-Declaration of Static Obstacles in Probability Occupancy Grids

    NASA Astrophysics Data System (ADS)

    Krause, Stefan; Scholz, M.; Hohmann, R.

    2017-10-01

    Obstacle detection and avoidance are major research fields in unmanned aviation. Map based obstacle detection approaches often use discrete world representations such as probabilistic grid maps to fuse incremental environment data from different views or sensors to build a comprehensive representation. The integration of continuous measurements into a discrete representation can result in rounding errors which, in turn, lead to differences between the artificial model and the real environment. The cause of these deviations is a low spatial resolution of the world representation in comparison to the sensor data used. Differences between artificial representations which are used for path planning or obstacle avoidance and the real world can lead to unexpected behavior, up to collisions with unmapped obstacles. This paper presents three approaches to the treatment of errors that can occur during the integration of continuous laser measurements into the discrete probabilistic grid. Further, the quality of the error prevention and the processing performance are compared with real sensor data.
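
    The paper's three treatment approaches are not reproduced in the abstract; the sketch below (Python/NumPy, with hypothetical grid resolution and log-odds increments) only illustrates the integration step being discussed: a laser ray is traced through a log-odds grid, and the cell containing the hit point is deliberately excluded from the free-space update so that discretization cannot "free-declare" a static obstacle.

      import numpy as np

      RES = 0.1                      # assumed grid resolution [m/cell]
      L_FREE, L_OCC = -0.4, 0.85     # assumed log-odds increments

      def integrate_ray(grid, origin_xy, hit_xy):
          """Update a log-odds grid with one laser ray ending at a hit point.

          Cells strictly before the hit cell are updated as free; the hit cell
          itself is only ever updated as occupied, never as free.
          """
          o = np.asarray(origin_xy) / RES
          h = np.asarray(hit_xy) / RES
          hit_cell = tuple(np.floor(h).astype(int))
          n = int(np.ceil(np.linalg.norm(h - o) * 2)) + 1   # ~2 samples per cell
          visited = set()
          for t in np.linspace(0.0, 1.0, n):
              cell = tuple(np.floor(o + t * (h - o)).astype(int))
              if cell == hit_cell:
                  break                        # stop before touching the hit cell
              if cell not in visited:
                  visited.add(cell)
                  grid[cell] += L_FREE
          grid[hit_cell] += L_OCC
          return grid

      grid = np.zeros((200, 200))
      integrate_ray(grid, origin_xy=(5.0, 5.0), hit_xy=(8.3, 6.1))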

  6. Robotic Astrobiology: Searching for Life with Rovers

    NASA Astrophysics Data System (ADS)

    Cabrol, N. A.; Wettergreen, D. S.; Team, L.

    2006-05-01

    The Life In The Atacama (LITA) project has developed and field tested a long-range, solar-powered, automated rover platform (Zoe) and a science payload assembled to search for microbial life in the Atacama desert. Life is hardly detectable over most of the extent of the driest desert on Earth. Its geological, climatic, and biological evolution provides a unique training ground for designing and testing exploration strategies and life detection methods for the robotic search for life on Mars. LITA opens the path to a new generation of rover missions that will transition from the current study of habitability (MER) to the upcoming search for, and study of, habitats and life on Mars. Zoe's science payload reflects this transition by combining complementary elements, some directed towards the remote sensing of the environment (geology, morphology, mineralogy, weather/climate) for the detection of conditions favorable to microbial habitats and oases along survey traverses, others directed toward the in situ detection of life signatures (biological and physical, such as biological constructs and patterns). New exploration strategies specifically adapted to the search for microbial life were designed and successfully tested in the Atacama between 2003 and 2005. They required the development and implementation in the field of new technological capabilities, including navigation beyond the horizon, obstacle avoidance, and "science-on-the-fly" (automated detection of targets of science value), and that of new rover planning tools in the remote science operation center.

  7. A Hybrid Robotic Control System Using Neuroblastoma Cultures

    NASA Astrophysics Data System (ADS)

    Ferrández, J. M.; Lorente, V.; Cuadra, J. M.; Delapaz, F.; Álvarez-Sánchez, José Ramón; Fernández, E.

    The main objective of this work is to analyze the computing capabilities of human neuroblastoma cultured cells and to define connection schemes for controlling a robot behavior. Multielectrode Array (MEA) setups have been designed for directly culturing neural cells over silicon or glass substrates, providing the capability to stimulate and record simultaneously populations of neural cells. This paper describes the process of growing human neuroblastoma cells over MEA substrates and tries to modulate the natural physiologic responses of these cells by tetanic stimulation of the culture. We show that the large neuroblastoma networks developed in cultured MEAs are capable of learning, establishing numerous and dynamic connections with modifiability induced by external stimuli, and we propose a hybrid system for controlling a robot to avoid obstacles.

  8. A Real-Time Reaction Obstacle Avoidance Algorithm for Autonomous Underwater Vehicles in Unknown Environments

    PubMed Central

    Yan, Zheping; Li, Jiyun; Zhang, Gengshi; Wu, Yi

    2018-01-01

    A novel real-time reaction obstacle avoidance algorithm (RRA) is proposed for autonomous underwater vehicles (AUVs) that must adapt to unknown complex terrains, based on forward looking sonar (FLS). To accomplish this algorithm, obstacle avoidance rules are planned, and the RRA process is split into five steps so that AUVs can rapidly respond to various environmental obstacles. The largest polar angle algorithm (LPAA) is designed to change a detected obstacle's irregular outline into a convex polygon, which simplifies the obstacle avoidance process. A solution is designed to solve the trapping problem existing in U-shape obstacle avoidance by an outline memory algorithm. Finally, simulations in three unknown obstacle scenes are carried out to demonstrate the performance of this algorithm, where the obtained obstacle avoidance trajectories are safe, smooth and near-optimal. PMID:29393915
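
    The largest polar angle algorithm itself is not detailed in the abstract; as a stand-in for that step, the sketch below (Python) uses a standard monotone-chain convex hull to turn an irregular set of sonar-detected outline points into the convex polygon that the avoidance step would then work with.

      def cross(o, a, b):
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

      def convex_outline(points):
          """Convex hull (monotone chain) of 2D obstacle outline points,
          returned in counter-clockwise order."""
          pts = sorted(set(points))
          if len(pts) <= 2:
              return pts
          lower, upper = [], []
          for p in pts:
              while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                  lower.pop()
              lower.append(p)
          for p in reversed(pts):
              while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                  upper.pop()
              upper.append(p)
          return lower[:-1] + upper[:-1]

      # Irregular sonar outline -> convex polygon used for avoidance planning.
      outline = [(0, 0), (2, 0.4), (1, 1), (2, 2), (0, 2), (0.7, 1.1)]
      print(convex_outline(outline))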

  9. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  10. Remote-controlled robotic platform ORPHEUS as a new tool for detection of bacteria in the environment.

    PubMed

    Nejdl, Lukas; Kudr, Jiri; Cihalova, Kristyna; Chudobova, Dagmar; Zurek, Michal; Zalud, Ludek; Kopecny, Lukas; Burian, Frantisek; Ruttkay-Nedecky, Branislav; Krizkova, Sona; Konecna, Marie; Hynek, David; Kopel, Pavel; Prasek, Jan; Adam, Vojtech; Kizek, Rene

    2014-08-01

    Remote-controlled robotic systems are being used for analysis of various types of analytes in hostile environments, including those called extraterrestrial. The aim of our study was to develop a remote-controlled robotic platform (ORPHEUS-HOPE) for bacterial detection. For the platform ORPHEUS-HOPE a 3D printed flow chip was designed and created with a culture chamber with a volume of 600 μL. The flow rate was optimized to 500 μL/min. The chip was tested primarily for detection of 1-naphthol by differential pulse voltammetry with a detection limit (S/N = 3) of 20 nM. Further, the method for capturing bacteria was optimized. To capture bacterial cells (Staphylococcus aureus), maghemite nanoparticles (1 mg/mL) were prepared and modified with collagen, glucose, graphene, gold, hyaluronic acid, and graphene with gold or graphene with glucose (20 mg/mL). The highest capture rate, up to 50% of the bacteria, was achieved by graphene nanoparticles modified with glucose. The detection limit of the whole assay, which included capturing of bacteria and their detection under remote control operation, was estimated as 30 bacteria per μL. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Awareness and Detection of Traffic and Obstacles Using Synthetic and Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.

    2012-01-01

    Research literature is reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, accuracy, and pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles and obstructions during approach, landing, and surface operations. This work considers the effect of head-down display and head-up display implementations of SVS and EVS as well as the influence of single and dual pilot operations. The influences and strategies of adding traffic information and cockpit alerting with SVS and EVS were also included. Based on this review, a knowledge gap assessment was made with recommendations for ground and flight testing to fill these gaps and hence, promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System.

  12. Constrained motion model of mobile robots and its applications.

    PubMed

    Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong

    2009-06-01

    Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with the fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
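
    The paper's reduction from 2^k fixed-motion-style regions to k + 1 regions is not reproduced here; the sketch below (Python, with hypothetical unit motion primitives on a grid) simply enumerates the k-step reachable region by breadth-first expansion, which is the set that the reduction concerns.

      def k_step_reachable(start, motions, k):
          """Return the set of grid states reachable in at most k steps.

          'motions' is the set of single-step displacements allowed by the
          constrained motion model (hypothetical 4-neighbour moves below).
          """
          reachable = {start}
          frontier = {start}
          for _ in range(k):
              nxt = set()
              for (x, y) in frontier:
                  for (dx, dy) in motions:
                      nxt.add((x + dx, y + dy))
              frontier = nxt - reachable
              reachable |= nxt
          return reachable

      MOTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
      print(len(k_step_reachable((0, 0), MOTIONS, k=3)))   # 25 states for k = 3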

  13. Obstacle Avoidance, Visual Detection Performance, and Eye-Scanning Behavior of Glaucoma Patients in a Driving Simulator: A Preliminary Study

    PubMed Central

    Prado Vega, Rocío; van Leeuwen, Peter M.; Rendón Vélez, Elizabeth; Lemij, Hans G.; de Winter, Joost C. F.

    2013-01-01

    The objective of this study was to evaluate differences in driving performance, visual detection performance, and eye-scanning behavior between glaucoma patients and control participants without glaucoma. Glaucoma patients (n = 23) and control participants (n = 12) completed four 5-min driving sessions in a simulator. The participants were instructed to maintain the car in the right lane of a two-lane highway while their speed was automatically maintained at 100 km/h. Additional tasks per session were: Session 1: none, Session 2: verbalization of projected letters, Session 3: avoidance of static obstacles, and Session 4: combined letter verbalization and avoidance of static obstacles. Eye-scanning behavior was recorded with an eye-tracker. Results showed no statistically significant differences between patients and control participants for lane keeping, obstacle avoidance, and eye-scanning behavior. Steering activity, number of missed letters, and letter reaction time were significantly higher for glaucoma patients than for control participants. In conclusion, glaucoma patients were able to avoid objects and maintain a nominal lane keeping performance, but applied more steering input than control participants, and were more likely than control participants to miss peripherally projected stimuli. The eye-tracking results suggest that glaucoma patients did not use extra visual search to compensate for their visual field loss. Limitations of the study, such as small sample size, are discussed. PMID:24146975

  14. Real-time 3D ultrasound guidance of autonomous surgical robot for shrapnel detection and breast biopsy

    NASA Astrophysics Data System (ADS)

    Rogers, Albert J.; Light, Edward D.; von Allmen, Daniel; Smith, Stephen W.

    2009-02-01

    Two studies have been conducted using real time 3D ultrasound and an automated robot system for carrying out surgical tasks. The first task is to perform a breast lesion biopsy automatically after detection by ultrasound. Combining 3D ultrasound with traditional mammography allows real time guidance of the biopsy needle. Image processing techniques analyze volumes to calculate the location of a target lesion. This position was converted into the coordinate system of a three axis robot which moved a needle probe to touch the lesion. The second task is to remove shrapnel from a tissue phantom autonomously. In some emergency situations, shrapnel detection in the body is necessary for quick treatment. Furthermore, small or uneven shrapnel geometry may hinder location by typical ultrasound imaging methods. Vibrations and small displacements can be induced in ferromagnetic shrapnel by a variable electromagnet. We used real time 3D color Doppler to locate this motion for 2 mm long needle fragments and determined the 3D position of the fragment in the scanner coordinates. The rms error of the image guided robot for 5 trials was 1.06 mm for this task which was accomplished in 76 seconds.

  15. Smart mobile robot system for rubbish collection

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Sien Siang, Tan

    2018-03-01

    This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective of this paper is to design a mobile robot that can detect and recognize medium-size rubbish such as drinking cans. Besides that, the objective is also to design a mobile robot with the ability to estimate the position of rubbish from the robot. In addition, the mobile robot is also able to approach the rubbish based on the position of the rubbish. This paper explains the types of image processing, detection and recognition methods, and image filters. This project implements the RGB subtraction method as the primary detection system. In addition, an algorithm for distance measurement based on the image plane is implemented in this project. This project is limited to using a computer webcam as the sensor. Secondly, the robot is only able to approach the nearest rubbish within the camera's field of view, and only rubbish whose body contains distinct RGB colour components.
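
    The exact RGB subtraction rule and image-plane distance formula used in the project are not given; the sketch below (Python/NumPy, with a hypothetical dominance threshold and camera constants) only shows the flavour of both steps: a red-dominance mask isolates a coloured can, and its lowest image row is mapped to an approximate ground distance with a pinhole model.

      import numpy as np

      RED_THRESH = 60            # assumed dominance threshold (0-255 scale)
      FOCAL_PX = 600.0           # assumed focal length in pixels
      CAM_HEIGHT_M = 0.30        # assumed camera height above the floor

      def detect_red_object(rgb):
          """RGB-subtraction mask: pixels whose red channel dominates green and blue."""
          r = rgb[..., 0].astype(int)
          g = rgb[..., 1].astype(int)
          b = rgb[..., 2].astype(int)
          mask = (r - np.maximum(g, b)) > RED_THRESH
          if not mask.any():
              return None
          ys, xs = np.nonzero(mask)
          return xs.mean(), ys.max()        # centroid column, lowest row of the blob

      def ground_distance(lowest_row, image_height):
          """Pinhole ground-plane estimate of distance from the lowest blob row."""
          dy = lowest_row - image_height / 2.0   # pixels below the optical axis
          return np.inf if dy <= 0 else FOCAL_PX * CAM_HEIGHT_M / dy

      frame = np.zeros((480, 640, 3), dtype=np.uint8)
      frame[300:330, 200:220, 0] = 255               # synthetic red can
      col, row = detect_red_object(frame)
      print(col, row, ground_distance(row, 480))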

  16. Robustly stable adaptive control of a tandem of master-slave robotic manipulators with force reflection by using a multiestimation scheme.

    PubMed

    Ibeas, Asier; de la Sen, Manuel

    2006-10-01

    The problem of controlling a tandem of robotic manipulators composing a teleoperation system with force reflection is addressed in this paper. The final objective of this paper is twofold: 1) to design a robust control law capable of ensuring closed-loop stability for robots with uncertainties and 2) to use the so-obtained control law to improve the tracking of each robot to its corresponding reference model in comparison with previously existing controllers when the slave is interacting with the obstacle. In this way, a multiestimation-based adaptive controller is proposed. Thus, the master robot is able to follow more accurately the constrained motion defined by the slave when interacting with an obstacle than when a single-estimation-based controller is used, improving the transparency property of the teleoperation scheme. The closed-loop stability is guaranteed if a minimum residence time, which might be updated online when unknown, between different controller parameterizations is respected. Furthermore, the analysis of the teleoperation and stability capabilities of the overall scheme is carried out. Finally, some simulation examples showing the working of the multiestimation scheme complete this paper.

  17. Applications of artificial intelligence to space station and automated software techniques: High level robot command language

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1989-01-01

    The objective is to develop a system that will allow a person not necessarily skilled in the art of programming robots to quickly and naturally create the necessary data and commands to enable a robot to perform a desired task. The system will use a menu driven graphical user interface. This interface will allow the user to input data to select objects to be moved. There will be an embedded expert system to process the knowledge about objects and the robot to determine how they are to be moved. There will be automatic path planning to avoid obstacles in the work space and to create a near optimum path. The system will contain the software to generate the required robot instructions.

  18. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and a robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common Absolute Euclidean Coordinate Frame via their individual mappings. This approach has two major difficulties. First, a vision system has to be calibrated over the total work space. And second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine motion manipulation in a poorly structured world, work which is currently in progress, is described along with preliminary results and the problems encountered.
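
    The calibration problem described above amounts to chaining two independently estimated mappings through a shared absolute frame; the sketch below (Python/NumPy, with placeholder transforms) shows how a point measured by the vision system is re-expressed in the robot base frame via that common frame, which is exactly where errors in either calibration accumulate.

      import numpy as np

      def make_T(R, t):
          """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
          T = np.eye(4)
          T[:3, :3], T[:3, 3] = R, t
          return T

      # Placeholder calibrations: camera->absolute frame and robot base->absolute frame.
      T_abs_cam   = make_T(np.eye(3), np.array([1.0, 0.2, 0.5]))
      T_abs_robot = make_T(np.eye(3), np.array([0.0, 0.0, 0.0]))

      def camera_point_in_robot_frame(p_cam):
          """Map a 3-D point from camera coordinates into robot base coordinates."""
          p = np.append(p_cam, 1.0)                       # homogeneous coordinates
          p_abs = T_abs_cam @ p                           # camera -> absolute frame
          p_robot = np.linalg.inv(T_abs_robot) @ p_abs    # absolute -> robot base
          return p_robot[:3]

      print(camera_point_in_robot_frame(np.array([0.1, 0.0, 1.5])))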

  19. Dynamic whole-body robotic manipulation

    NASA Astrophysics Data System (ADS)

    Abe, Yeuhi; Stephens, Benjamin; Murphy, Michael P.; Rizzi, Alfred A.

    2013-05-01

    The creation of dynamic manipulation behaviors for high degree of freedom, mobile robots will allow them to accomplish increasingly difficult tasks in the field. We are investigating how the coordinated use of the body, legs, and integrated manipulator, on a mobile robot, can improve the strength, velocity, and workspace when handling heavy objects. We envision that such a capability would aid in a search and rescue scenario when clearing obstacles from a path or searching a rubble pile quickly. Manipulating heavy objects is especially challenging because the dynamic forces are high and a legged system must coordinate all its degrees of freedom to accomplish tasks while maintaining balance. To accomplish these types of manipulation tasks, we use trajectory optimization techniques to generate feasible open-loop behaviors for our 28 dof quadruped robot (BigDog) by planning trajectories in a 13 dimensional space. We apply the Covariance Matrix Adaptation (CMA) algorithm to solve for trajectories that optimize task performance while also obeying important constraints such as torque and velocity limits, kinematic limits, and center of pressure location. These open-loop behaviors are then used to generate desired feed-forward body forces and foot step locations, which enable tracking on the robot. Some hardware results for cinderblock throwing are demonstrated on the BigDog quadruped platform augmented with a human-arm-like manipulator. The results are analogous to how a human athlete maximizes distance in the discus event by performing a precise sequence of choreographed steps.

  20. ODYSSEUS autonomous walking robot: The leg/arm design

    NASA Technical Reports Server (NTRS)

    Bourbakis, N. G.; Maas, M.; Tascillo, A.; Vandewinckel, C.

    1994-01-01

    ODYSSEUS is an autonomous walking robot, which makes use of three wheels and three legs for its movement in the free navigation space. More specifically, it makes use of its autonomous wheels to move around in an environment where the surface is smooth and not uneven. However, in the case that there are small height obstacles, stairs, or small height unevenness in the navigation environment, the robot makes use of both wheels and legs to travel efficiently. In this paper we present the detailed hardware design and the simulated behavior of the extended leg/arm part of the robot, since it plays a very significant role in the robot actions (movements, selection of objects, etc.). In particular, the leg/arm consists of three major parts: The first part is a pipe attached to the robot base with a flexible 3-D joint. This pipe has a rotated bar as an extended part, which terminates in a 3-D flexible joint. The second part of the leg/arm is also a pipe similar to the first. The extended bar of the second part ends at a 2-D joint. The last part of the leg/arm is a clip-hand. It is used for picking up objects of small weight and size, and when it is in a 'closed' mode, it serves as a supporting part of the robot leg. The entire leg/arm part is controlled and synchronized by a microcontroller (68HC11) attached to the robot base.

  1. Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction

    PubMed Central

    Cruz Zurian, Heber; Atefi, Seyed Reza; Seoane Martinez, Fernando; Lukowicz, Paul

    2017-01-01

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing as the robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm2 area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures which are inspired by the social and emotional interactions of typical people to people or pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination can recognize the gestures with a 93.3% accuracy from a known group of participants, and 89.1% from strangers. PMID:29120389

  2. Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction.

    PubMed

    Zhou, Bo; Altamirano, Carlos Andres Velez; Zurian, Heber Cruz; Atefi, Seyed Reza; Billing, Erik; Martinez, Fernando Seoane; Lukowicz, Paul

    2017-11-09

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing as the robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm2 area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures which are inspired by the social and emotional interactions of typical people to people or pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination can recognize the gestures with a 93.3% accuracy from a known group of participants, and 89.1% from strangers.

  3. Obstacle penetrating dynamic radar imaging system

    DOEpatents

    Romero, Carlos E [Livermore, CA; Zumstein, James E [Livermore, CA; Chang, John T [Danville, CA; Leach, Jr Richard R. [Castro Valley, CA

    2006-12-12

    An obstacle penetrating dynamic radar imaging system for the detection, tracking, and imaging of an individual, animal, or object comprising a multiplicity of low power ultra wideband radar units that produce a set of return radar signals from the individual, animal, or object, and a processing system for said set of return radar signals for detection, tracking, and imaging of the individual, animal, or object. The system provides a radar video system for detecting and tracking an individual, animal, or object by producing a set of return radar signals from the individual, animal, or object with a multiplicity of low power ultra wideband radar units, and processing said set of return radar signals for detecting and tracking of the individual, animal, or object.

  4. Virtual Reality Simulator Systems in Robotic Surgical Training.

    PubMed

    Mangano, Alberto; Gheza, Federico; Giulianotti, Pier Cristoforo

    2018-06-01

    The number of robotic surgical procedures has been increasing worldwide. It is important to maximize the cost-effectiveness of robotic surgical training and safely reduce the time needed for trainees to reach proficiency. The use of preliminary lab training in robotic skills is a good strategy for the rapid acquisition of further, standardized robotic skills. Such training can be done either by using a simulator or by exercises in a dry or wet lab. While the use of an actual robotic surgical system for training may be problematic (high cost, lack of availability), virtual reality (VR) simulators can overcome many of these obstacles. However, there is still a lack of standardization. Although VR training systems have improved, they cannot yet replace experience in a wet lab. In particular, simulated scenarios are not yet close enough to a real operative experience. Indeed, there is a difference between technical skills (i.e., mechanical ability to perform a simulated task) and surgical competence (i.e., ability to perform a real surgical operation). Thus, while a VR simulator can replace a dry lab, it cannot yet replace training in a wet lab or operative training in actual patients. However, in the near future, it is expected that VR surgical simulators will be able to provide total reality simulation and replace training in a wet lab. More research is needed to produce more wide-ranging, trans-specialty robotic curricula.

  5. Wide-Baseline Stereo-Based Obstacle Mapping for Unmanned Surface Vehicles

    PubMed Central

    Mou, Xiaozheng; Wang, Han

    2018-01-01

    This paper proposes a wide-baseline stereo-based static obstacle mapping approach for unmanned surface vehicles (USVs). The proposed approach eliminates the complicated calibration work and the bulky rig in our previous binocular stereo system, and raises the ranging ability from 500 to 1000 m with an even larger baseline obtained from the motion of USVs. Integrating a monocular camera with GPS and compass information in this proposed system, the world locations of the detected static obstacles are reconstructed while the USV is traveling, and an obstacle map is then built. To achieve more accurate and robust performance, multiple pairs of frames are leveraged to synthesize the final reconstruction results in a weighting model. Experimental results based on our own dataset demonstrate the high efficiency of our system. To the best of our knowledge, we are the first to address the task of wide-baseline stereo-based obstacle mapping in a maritime environment. PMID:29617293

  6. Physics-based approach to chemical source localization using mobile robotic swarms

    NASA Astrophysics Data System (ADS)

    Zarzhitsky, Dimitri

    2008-07-01

    Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically-inspired CPT methods-- chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware

  7. An optimal control strategy for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots in environments containing moving obstacles is presented. Collision avoidance is guaranteed if the minimum distance between the robot and the objects is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Furthermore, time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. Simulation results verify the value of the proposed strategy.

  8. Two-dimensional radial laser scanning for circular marker detection and external mobile robot tracking.

    PubMed

    Teixidó, Mercè; Pallejà, Tomàs; Font, Davinia; Tresanchez, Marcel; Moreno, Javier; Palacín, Jordi

    2012-11-28

    This paper presents the use of an external fixed two-dimensional laser scanner to detect cylindrical targets attached to moving devices, such as a mobile robot. This proposal is based on the detection of circular markers in the raw data provided by the laser scanner by applying an algorithm for outlier avoidance and a least-squares circular fitting. Some experiments have been developed to empirically validate the proposal with different cylindrical targets in order to estimate the location and tracking errors achieved, which are generally less than 20 mm in the area covered by the laser sensor. As a result of the validation experiments, several error maps have been obtained in order to give an estimate of the uncertainty of any location computed. This proposal has been validated with a medium-sized mobile robot with an attached cylindrical target (diameter 200 mm). The trajectory of the mobile robot was estimated with an average location error of less than 15 mm, and the real location error in each individual circular fitting was similar to the error estimated with the obtained error maps. The radial area covered in this validation experiment was up to 10 m, a value that depends on the radius of the cylindrical target and the radial density of the distance range points provided by the laser scanner; this area can be increased by combining the information of additional external laser scanners.
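
    The outlier-avoidance step is omitted here; the sketch below (Python/NumPy) shows only the core least-squares circular fitting, using the common algebraic (Kasa-style) formulation to recover the centre and radius of a cylindrical marker from laser scan points.

      import numpy as np

      def fit_circle(points):
          """Algebraic least-squares circle fit: returns (cx, cy, radius).

          Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense,
          where c = radius^2 - cx^2 - cy^2.
          """
          pts = np.asarray(points, dtype=float)
          x, y = pts[:, 0], pts[:, 1]
          A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
          b = x**2 + y**2
          (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
          radius = np.sqrt(c + cx**2 + cy**2)
          return cx, cy, radius

      # Noisy points on one side of a 100 mm-radius cylinder seen by the scanner.
      theta = np.linspace(0.3, 2.8, 30)
      pts = np.column_stack([2.0 + 0.1 * np.cos(theta), 1.0 + 0.1 * np.sin(theta)])
      pts += np.random.normal(scale=0.002, size=pts.shape)
      print(fit_circle(pts))   # approximately (2.0, 1.0, 0.1)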

  9. Sensorimotor Model of Obstacle Avoidance in Echolocating Bats

    PubMed Central

    Vanderelst, Dieter; Holderied, Marc W.; Peremans, Herbert

    2015-01-01

    Bat echolocation is an ability consisting of many subtasks such as navigation, prey detection and object recognition. Understanding the echolocation capabilities of bats comes down to isolating the minimal set of acoustic cues needed to complete each task. For some tasks, the minimal cues have already been identified. However, while a number of possible cues have been suggested, little is known about the minimal cues supporting obstacle avoidance in echolocating bats. In this paper, we propose that the Interaural Intensity Difference (IID) and travel time of the first millisecond of the echo train are sufficient cues for obstacle avoidance. We describe a simple control algorithm based on the use of these cues in combination with alternating ear positions modeled after the constant frequency bat Rhinolophus rouxii. Using spatial simulations (2D and 3D), we show that simple phonotaxis can steer a bat clear from obstacles without performing a reconstruction of the 3D layout of the scene. As such, this paper presents the first computationally explicit explanation for obstacle avoidance validated in complex simulated environments. Based on additional simulations modelling the FM bat Phyllostomus discolor, we conjecture that the proposed cues can be exploited by constant frequency (CF) bats and frequency modulated (FM) bats alike. We hypothesize that using a low level yet robust cue for obstacle avoidance allows bats to comply with the hard real-time constraints of this basic behaviour. PMID:26502063

  10. Millimeter-wave data acquisition for terrain mapping, obstacle detection, and dust penetrating capability testing

    NASA Astrophysics Data System (ADS)

    Schmerwitz, S.; Doehler, H.-U.; Ellis, K.; Jennings, S.

    2011-06-01

    The DLR project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is devoted to demonstrating and evaluating the characteristics of sensors for helicopter operations in degraded visual environments. Millimeter wave radar is one of the many sensors considered for use in brown-out. It delivers a lower angular resolution compared to other sensors; however, it may provide the best dust penetration capabilities. In cooperation with the NRC, flight tests on a Bell 205 were conducted to gather sensor data from a 35 GHz pencil beam radar for terrain mapping, obstacle detection and dust penetration. In this paper, preliminary results from the flight trials at NRC are presented and a description of the radar's general capability is given. Furthermore, insight is provided into the concept of multi-sensor fusion as attempted in the ALLFlight project.

  11. Path optimisation of a mobile robot using an artificial neural network controller

    NASA Astrophysics Data System (ADS)

    Singh, M. K.; Parhi, D. R.

    2011-01-01

    This article proposes a novel approach for the design of an intelligent controller for an autonomous mobile robot using a multilayer feed forward neural network, which enables the robot to navigate in a real world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances with respect to its position, and the target angle. The output of the neural network is the steering angle. A four layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results, which are satisfactory and show very good agreement. The training of the neural nets and the control performance analysis have been done in a real experimental setup.
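
    The trained weights are of course not available from the abstract; the sketch below (Python/NumPy, with randomly initialized placeholder weights and assumed layer sizes) only shows the forward pass of such a four-layer controller, mapping the left, right and front obstacle distances plus the target angle to a steering angle.

      import numpy as np

      rng = np.random.default_rng(0)
      LAYER_SIZES = [4, 8, 8, 1]          # inputs -> two hidden layers -> steering angle
      weights = [rng.normal(scale=0.5, size=(m, n))
                 for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
      biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

      def steering_angle(left_d, right_d, front_d, target_angle):
          """Forward pass of the feed-forward controller (placeholder weights).

          In the paper the weights would come from back-propagation training on
          navigation data; here they are random, so the output only illustrates
          the input/output interface.
          """
          a = np.array([left_d, right_d, front_d, target_angle], dtype=float)
          for W, b in zip(weights[:-1], biases[:-1]):
              a = np.tanh(a @ W + b)                     # hidden layers
          return (a @ weights[-1] + biases[-1]).item()   # linear output: steering angle

      print(steering_angle(1.2, 0.4, 2.5, 0.3))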

  12. A decision-tree model to detect post-calving diseases based on rumination, activity, milk yield, BW and voluntary visits to the milking robot.

    PubMed

    Steensels, M; Antler, A; Bahr, C; Berckmans, D; Maltz, E; Halachmi, I

    2016-09-01

    Early detection of post-calving health problems is critical for dairy operations. Separating sick cows from the herd is important, especially in robotic-milking dairy farms, where searching for a sick cow can disturb the other cows' routine. The objectives of this study were to develop and apply a behaviour- and performance-based health-detection model to post-calving cows in a robotic-milking dairy farm, with the aim of detecting sick cows based on available commercial sensors. The study was conducted in an Israeli robotic-milking dairy farm with 250 Israeli-Holstein cows. All cows were equipped with rumination- and neck-activity sensors. Milk yield, visits to the milking robot and BW were recorded in the milking robot. A decision-tree model was developed on a calibration data set (historical data of the 10 months before the study) and was validated on the new data set. The decision model generated a probability of being sick for each cow. The model was applied once a week just before the veterinarian performed the weekly routine post-calving health check. The veterinarian's diagnosis served as a binary reference for the model (healthy-sick). The overall accuracy of the model was 78%, with a specificity of 87% and a sensitivity of 69%, suggesting its practical value.

  13. Thermal Image Sensing Model for Robotic Planning and Search

    PubMed Central

    Castro Jiménez, Lídice E.; Martínez-García, Edgar A.

    2016-01-01

    This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost home-made IR passive visual sensor. The sensor capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate the distance as a function of the IR image's intensity, and a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce an exact nonlinear distance-temperature relation. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to find the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation control in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations against the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach. PMID:27509510
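
    The paper's fitted coefficients are not given, so the sketch below (Python, with hypothetical calibration constants) only illustrates how the two sensor models can be combined: the polynomial temperature model is inverted with a Newton iteration to recover an intensity, which the exponential model then converts into a distance estimate.

      import math

      # Hypothetical calibration coefficients (the paper fits its own from sensor data).
      A_DIST, B_DIST = 4.0, 0.012          # distance model d(I) = A * exp(-B * I)  [m]
      C0, C1, C2 = 20.0, 0.15, 2.0e-4      # temperature model T(I) = C0 + C1*I + C2*I^2  [deg C]

      def intensity_from_temperature(T_meas, I0=100.0, tol=1e-6, max_iter=50):
          """Invert the polynomial temperature model with Newton's method."""
          I = I0
          for _ in range(max_iter):
              f = C0 + C1 * I + C2 * I * I - T_meas
              df = C1 + 2.0 * C2 * I
              step = f / df
              I -= step
              if abs(step) < tol:
                  break
          return I

      def distance_from_temperature(T_meas):
          """Combined distance-temperature relation: invert T(I), then evaluate d(I)."""
          I = intensity_from_temperature(T_meas)
          return A_DIST * math.exp(-B_DIST * I)

      print(distance_from_temperature(45.0))   # distance to a ~45 deg C heat source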

  14. An assessment of auditory-guided locomotion in an obstacle circumvention task.

    PubMed

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2016-06-01

    This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.

  15. A Remote Lab for Experiments with a Team of Mobile Robots

    PubMed Central

    Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio

    2014-01-01

    In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board of the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab. PMID:25192316

  16. A remote lab for experiments with a team of mobile robots.

    PubMed

    Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio

    2014-09-04

    In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board of the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab.

  17. Adaptive walking of a quadrupedal robot based on layered biological reflexes

    NASA Astrophysics Data System (ADS)

    Zhang, Xiuli; Mingcheng, E.; Zeng, Xiangyu; Zheng, Haojun

    2012-07-01

    A multiple-legged robot is traditionally controlled by using its dynamic model. But the dynamic-model-based approach fails to achieve satisfactory performance when the robot faces rough terrain and unknown environments. Referring to animals' neural control mechanisms, a control model is built for adaptive quadruped walking. The basic rhythmic motion of the robot is controlled by a well-designed rhythmic motion controller (RMC) comprising a central pattern generator (CPG) for hip joints and a rhythmic coupler (RC) for knee joints. The CPG and RC are related through motion mapping and rhythmic coupling. Multiple sensory-motor models, abstracted from the neural reflexes of a cat, are employed. These reflex models are organized and thus interact with the CPG in three layers, to meet different requirements of complexity and response time to the tasks. On the basis of the RMC and layered biological reflexes, a quadruped robot is constructed, which can clear obstacles and walk uphill and downhill autonomously, and make a turn voluntarily in uncertain environments, interacting with the environment in a way similar to that of an animal. The paper provides a biologically inspired architecture, with which a robot can walk adaptively in uncertain environments in a simple and effective way, and achieve better performance.
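
    The paper's own CPG equations are not reproduced in the abstract; as a stand-in, the sketch below (Python/NumPy, with assumed parameters and a trot-like phase pattern) uses the common coupled-Hopf-oscillator formulation to show how such a CPG produces phase-locked rhythmic hip signals.

      import numpy as np

      ALPHA, MU, OMEGA = 10.0, 1.0, 2.0 * np.pi     # convergence rate, amplitude^2, rad/s
      PHASE = np.array([0.0, np.pi, np.pi, 0.0])    # desired hip phase offsets (trot-like)
      K = 0.5                                       # coupling strength
      DT = 0.001

      def cpg_step(x, y):
          """One Euler step of four mutually coupled Hopf oscillators."""
          r2 = x**2 + y**2
          dx = ALPHA * (MU - r2) * x - OMEGA * y
          dy = ALPHA * (MU - r2) * y + OMEGA * x
          # Diffusive coupling pulls each oscillator toward its phase-shifted neighbours.
          for i in range(4):
              for j in range(4):
                  if i != j:
                      d = PHASE[i] - PHASE[j]
                      dy[i] += K * (y[j] * np.cos(d) + x[j] * np.sin(d) - y[i])
          return x + DT * dx, y + DT * dy

      x, y = np.full(4, 0.1), np.zeros(4)
      for _ in range(5000):                         # 5 s of simulated hip trajectories
          x, y = cpg_step(x, y)
      print(np.round(x, 3))                         # phase-locked hip joint commands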

  18. Dynamic Obstacle Avoidance for Unmanned Underwater Vehicles Based on an Improved Velocity Obstacle Method

    PubMed Central

    Zhang, Wei; Wei, Shilin; Teng, Yanbin; Zhang, Jianku; Wang, Xiufang; Yan, Zheping

    2017-01-01

    In view of a dynamic obstacle environment with motion uncertainty, we present a dynamic collision avoidance method based on collision risk assessment and an improved velocity obstacle method. First, through the fusion optimization of forward-looking sonar data, the redundancy of the data is reduced and the position, size and velocity information of the obstacles are obtained, which can provide an accurate decision-making basis for next-step collision avoidance. Second, according to the minimum meeting time and the minimum distance between the obstacle and the unmanned underwater vehicle (UUV), this paper establishes the collision risk assessment model and screens key obstacles to avoid collision. Finally, the optimization objective function is established based on the improved velocity obstacle method, and the UUV motion characteristics are used to calculate the reachable velocity sets. The optimal collision-avoidance velocity of the UUV is searched for in velocity space. The corresponding heading and speed commands are calculated, and outputted to the motion control module. The above is the complete dynamic obstacle avoidance process. The simulation results show that the proposed method can obtain a better collision avoidance effect in the dynamic environment, and has good adaptability to the unknown dynamic environment. PMID:29186878
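
    The full improved velocity obstacle method (with reachable-velocity sets and the risk-assessment screening) is beyond a short sketch; the code below (Python/NumPy, assuming circular footprints for the obstacle and the UUV) only shows the basic velocity obstacle test at its core: whether a candidate UUV velocity leads into the collision cone of a moving obstacle.

      import numpy as np

      def in_velocity_obstacle(p_uuv, v_cand, p_obs, v_obs, r_uuv, r_obs):
          """True if candidate velocity v_cand collides with the moving obstacle.

          Classic velocity obstacle test: the relative velocity must lie inside
          the cone with apex at the UUV, axis toward the obstacle, and half-angle
          asin(R / distance), where R is the sum of both radii.
          """
          rel_p = np.asarray(p_obs, float) - np.asarray(p_uuv, float)
          rel_v = np.asarray(v_cand, float) - np.asarray(v_obs, float)
          dist = np.linalg.norm(rel_p)
          R = r_uuv + r_obs
          if dist <= R:
              return True                         # already overlapping
          speed = np.linalg.norm(rel_v)
          if speed == 0.0:
              return False                        # no relative motion, no collision
          cos_angle = np.dot(rel_p, rel_v) / (dist * speed)
          half_angle = np.arcsin(R / dist)
          return cos_angle >= np.cos(half_angle)  # heading into the collision cone

      # Candidate velocities for a UUV at the origin; obstacle 30 m ahead, drifting left.
      for v in [(2.0, 0.0), (2.0, 1.0), (0.0, 2.0)]:
          print(v, in_velocity_obstacle((0, 0), v, (30, 0), (0, 0.3), 2.0, 3.0))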

  19. A GPU-accelerated cortical neural network model for visually guided robot navigation.

    PubMed

    Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L

    2015-12-01

    Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry that is concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate based motion energy model for area V1, and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. A Decentralized Framework for Multi-Agent Robotic Systems

    PubMed Central

    2018-01-01

    Over the past few years, decentralization of multi-agent robotic systems has become an important research area. These systems do not depend on a central control unit, which enables the control and assignment of distributed, asynchronous and robust tasks. However, in some cases, the network communication process between robotic agents is overlooked, and this creates a dependency for each agent to maintain a permanent link with nearby units to be able to fulfill its goals. This article describes a communication framework, where each agent in the system can leave the network or accept new connections, sending its information based on the transfer history of all nodes in the network. To this end, each agent needs to comply with four processes to participate in the system, plus a fifth process for data transfer to the nearest nodes that is based on Received Signal Strength Indicator (RSSI) and data history. To validate this framework, we use differential robotic agents and a monitoring agent to generate a topological map of an environment with the presence of obstacles. PMID:29389849

  1. Small, Untethered, Mobile Robots for Inspecting Gas Pipes

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian

    2003-01-01

    Small, untethered mobile robots denoted gas-pipe explorers (GPEXs) have been proposed for inspecting the interiors of pipes used in the local distribution of natural gas. The United States has a network of gas-distribution pipes with a total length of approximately 10^9 m. These pipes are often made of iron and steel and some are more than 100 years old. As this network ages, there is a need to locate weaknesses that necessitate repair and/or preventive maintenance. The most common weaknesses are leaks and reductions in thickness, which are caused mostly by chemical reactions between the iron in the pipes and various substances in soil and groundwater. At present, mobile robots called pigs are used to inspect and clean the interiors of gas-transmission pipelines. Some carry magnetic-flux-leakage (MFL) sensors for measuring average wall thicknesses, some capture images, and some measure sizes and physical conditions. The operating ranges of pigs are limited to fairly straight sections of wide transmission-type (as distinguished from distribution-type) pipes: pigs are too large to negotiate such obstacles as bends with radii comparable to or smaller than pipe diameters, intrusions of other pipes at branch connections, and reductions in diameter at valves and meters. The GPEXs would be smaller and would be able to negotiate sharp bends and other obstacles that typically occur in gas-distribution pipes.

  2. Impact of Discrete Corrections in a Modular Approach for Trajectory Generation in Quadruped Robots

    NASA Astrophysics Data System (ADS)

    Pinto, Carla M. A.; Santos, Cristina P.; Rocha, Diana; Matos, Vítor

    2011-09-01

    Online generation of trajectories in robots is a very complex task that involves the combination of different types of movements, i.e., distinct motor primitives. The later are used to model complex behaviors in robots, such as locomotion in irregular terrain and obstacle avoidance. In this paper, we consider two motor primitives: rhythmic and discrete. We study the effect on the robots' gaits of superimposing the two motor primitives, considering two distinct types of coupling. Additionally, we simulate two scenarios, where the discrete primitive is inserted in all of the four limbs, or is inserted in ipsilateral pairs of limbs. Numerical results show that amplitude and frequency of the periodic solutions, corresponding to the gaits trot and pace, are almost constant for diffusive and synaptic couplings.

  3. A deformable spherical planet exploration robot

    NASA Astrophysics Data System (ADS)

    Liang, Yi-shan; Zhang, Xiu-li; Huang, Hao; Yang, Yan-feng; Jin, Wen-tao; Sang, Zhong-xun

    2013-03-01

    In this paper, a deformable spherical planet exploration robot has been introduced to achieve the task of environmental detection in outer space or extreme conditions. The robot imitates the morphological structure and motion mechanism of tumbleweeds. The robot is wind-driven. It consists of an axle, a spherical steel skeleton and twelve airbags. The axle is designed as two parts. The robot contracts by contracting the two-part axle. The spherical robot is fitted with solar panels to provide energy for its control system.

  4. Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications

    PubMed Central

    Moccia, Antonio

    2014-01-01

    Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for assigned current position and speed. Since assessed methodologies can cause some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in case of nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
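
    The flight-test radar model is not reproduced here; the sketch below (Python/NumPy, with assumed process and measurement noise levels) only shows the generic particle filter cycle used for such tracking: constant-velocity prediction, weighting by a range/bearing radar likelihood, and resampling, from which quantities such as the Distance at Closest Point of Approach can then be estimated.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 2000
      Q_POS, Q_VEL = 5.0, 2.0            # assumed process noise (m, m/s)
      R_RANGE, R_BEARING = 30.0, 0.02    # assumed radar noise (m, rad)

      # Particle state: [x, y, vx, vy] of the intruder relative to own aircraft.
      particles = np.column_stack([rng.normal(2000, 200, N), rng.normal(500, 200, N),
                                   rng.normal(-50, 10, N),  rng.normal(0, 10, N)])

      def step(particles, z_range, z_bearing, dt=1.0):
          """One predict/update/resample cycle of the tracking particle filter."""
          # Predict: constant-velocity motion with additive noise.
          particles[:, :2] += particles[:, 2:] * dt + rng.normal(0, Q_POS, (N, 2))
          particles[:, 2:] += rng.normal(0, Q_VEL, (N, 2))
          # Update: weight by the range/bearing measurement likelihood.
          pred_range = np.hypot(particles[:, 0], particles[:, 1])
          pred_bearing = np.arctan2(particles[:, 1], particles[:, 0])
          w = np.exp(-0.5 * ((z_range - pred_range) / R_RANGE) ** 2
                     - 0.5 * ((z_bearing - pred_bearing) / R_BEARING) ** 2)
          w /= w.sum()
          # Resample proportionally to the weights.
          idx = rng.choice(N, size=N, p=w)
          return particles[idx].copy()

      particles = step(particles, z_range=1950.0, z_bearing=0.24)
      print(particles[:, :2].mean(axis=0))    # posterior mean intruder position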

  5. Emergence of leadership in a robotic fish group under diverging individual personality traits.

    PubMed

    Wang, Chen; Chen, Xiaojie; Xie, Guangming; Cao, Ming

    2017-05-01

    Variations in individuals' personality traits have been identified before as one of the possible mechanisms for the emergence of leadership in an interactive collective, which may lead to benefits for the group as a whole. Complementing the large body of existing literature on using simulation models to study leadership, we use biomimetic robotic fish to gain insight into how the fish's behaviours evolve under the influence of the physical hydrodynamics. In particular, we focus in this paper on understanding how robotic fish's personality traits affect the emergence of an effective leading fish in repeated robotic foraging tasks when the robotic fish's strategies, to push or not to push the obstacle in its foraging path, are updated over time following an evolutionary game set-up. We further show that the robotic fish's personality traits diverge when the group carries out difficult foraging tasks in our experiments, and self-organization takes place to help the group to adapt to the level of difficulty of the tasks without inter-individual communication.

  6. The application of Markov decision process with penalty function in restaurant delivery robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    A restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going. The traditional Markov decision process path planning algorithm is not safe in this setting: the planned path brings the robot very close to tables and chairs. To solve this problem, this paper proposes the Markov Decision Process with a Penalty Term (MDPPT) path planning algorithm, derived from the traditional Markov decision process (MDP). Under the MDP, if the restaurant delivery robot bumps into an obstacle, the reward it receives is simply part of the current state reward. Under the MDPPT, the reward it receives includes not only part of the current state reward but also a negative constant term. Simulation results show that the MDPPT algorithm can plan a more secure path.
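
    As a concrete illustration of the penalty idea, the sketch below runs value iteration on a small grid world in which an action that would bump into an obstacle earns the current state's reward minus a constant penalty term. The grid representation, reward values and penalty magnitude are assumptions made for the example, not the paper's model.

        import numpy as np

        def value_iteration(grid, goal, penalty=5.0, gamma=0.9, iters=200):
            """Grid-world value iteration with an extra negative constant term
            (the MDPPT penalty) whenever an action would bump into an obstacle.
            grid: 2D array, 1 = obstacle, 0 = free; goal: (row, col)."""
            rows, cols = grid.shape
            reward = np.full((rows, cols), -1.0)
            reward[goal] = 100.0
            V = np.zeros((rows, cols))
            moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
            for _ in range(iters):
                V_new = V.copy()
                for r in range(rows):
                    for c in range(cols):
                        if grid[r, c] == 1 or (r, c) == goal:
                            continue
                        values = []
                        for dr, dc in moves:
                            nr, nc = r + dr, c + dc
                            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == 0:
                                values.append(reward[nr, nc] + gamma * V[nr, nc])
                            else:
                                # Bumping an obstacle or wall: current-state reward minus penalty
                                values.append(reward[r, c] - penalty + gamma * V[r, c])
                        V_new[r, c] = max(values)
                V = V_new
            return V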

  7. The second “time-out”: a surgical safety checklist for lengthy robotic surgeries

    PubMed Central

    2013-01-01

    Robotic surgeries of long duration are associated with both increased risks to patients as well as distinct challenges for care providers. We propose a surgical checklist, to be completed during a second “time-out”, aimed at reducing peri-operative complications and addressing obstacles presented by lengthy robotic surgeries. A review of the literature was performed to identify the most common complications of robotic surgeries with extended operative times. A surgical checklist was developed with the goal of addressing these issues and maximizing patient safety. Extended operative times during robotic surgery increase patient risk for position-related complications and other adverse events. These cases also raise concerns for surgical, anesthesia, and nursing staff which are less common in shorter, non-robotic operations. Key elements of the checklist were designed to coordinate operative staff in verifying patient safety while addressing the unique concerns within each specialty. As robotic surgery is increasingly utilized, operations with long surgical times may become more common due to increased case complexity and surgeons overcoming the learning curve. A standardized surgical checklist, conducted three to four hours after the start of surgery, may enhance perioperative patient safety and quality of care. PMID:23731776

  8. Short range laser obstacle detector. [for surface vehicles using laser diode array

    NASA Technical Reports Server (NTRS)

    Kuriger, W. L. (Inventor)

    1973-01-01

    A short-range obstacle detector for surface vehicles is described which utilizes an array of laser diodes. The diodes operate one at a time, with one diode assigned to each adjacent azimuth sector. A vibrating mirror a short distance above the surface provides continuous scanning in elevation for all azimuth sectors. The diode lasers are synchronized with the vibrating mirror so that each diode is fired, by pulses from a clock pulse source, a number of times during each elevation scan cycle. The time for a given light pulse to be reflected from an obstacle and received back is detected as a measure of the range to the obstacle.

  9. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper presents advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. Because the cost of the physical measurement system is extremely high, a simulation tool has been designed. The simulation provides an opportunity to exercise algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. An Axis-Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
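
    A common building block of such a simulator is a per-beam ray test against axis-aligned bounding boxes. The sketch below shows a CPU-side slab test returning one range per beam; the beam model and box representation are illustrative assumptions, not the paper's AABB or CUDA implementation.

        import numpy as np

        def ray_aabb_range(origin, direction, box_min, box_max):
            """Slab test: distance along the ray to an axis-aligned bounding box,
            or None if the beam misses it."""
            direction = direction / np.linalg.norm(direction)
            inv = 1.0 / np.where(direction != 0.0, direction, 1e-12)
            t1 = (box_min - origin) * inv
            t2 = (box_max - origin) * inv
            t_near = np.max(np.minimum(t1, t2))
            t_far = np.min(np.maximum(t1, t2))
            if t_near > t_far or t_far < 0.0:
                return None                 # beam misses the box
            return max(t_near, 0.0)         # hit distance (0 if origin is inside)

        def simulate_scan(origin, boxes, n_beams=360, max_range=30.0):
            """Return one simulated LRF scan over 360 degrees against a list of AABBs."""
            ranges = np.full(n_beams, max_range)
            for i in range(n_beams):
                a = 2.0 * np.pi * i / n_beams
                d = np.array([np.cos(a), np.sin(a), 0.0])
                for box_min, box_max in boxes:
                    t = ray_aabb_range(origin, d, box_min, box_max)
                    if t is not None and t < ranges[i]:
                        ranges[i] = t
            return ranges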

  10. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the simplex algorithm, known from linear optimization, is used: it computes a point which is common to two convex polyhedra, and the polyhedra intersect if and only if such a point exists. For the simplified geometrical model of Ropsus this step also runs in linear time, so in conjunction with the first step the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
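
    The intersection test can be posed as a linear feasibility problem: two convex polyhedra given as half-space systems A x <= b intersect exactly when the stacked system has a feasible point. The sketch below uses scipy's linprog for that feasibility check; the half-space input format and the example boxes are assumptions, not the Ropsus code.

        import numpy as np
        from scipy.optimize import linprog

        def polyhedra_intersect(A1, b1, A2, b2):
            """Return True if {x : A1 x <= b1} and {x : A2 x <= b2} share a point.
            A feasibility LP (zero objective) is solved over the stacked constraints;
            any feasible point certifies the intersection."""
            A = np.vstack([A1, A2])
            b = np.concatenate([b1, b2])
            n = A.shape[1]
            res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                          bounds=[(None, None)] * n, method="highs")
            return res.status == 0          # 0 = feasible/optimal, 2 = infeasible

        # Two unit cubes, one shifted by 0.5 along x: they overlap.
        A_box = np.vstack([np.eye(3), -np.eye(3)])
        b1 = np.array([1, 1, 1, 0, 0, 0], dtype=float)
        b2 = np.array([1.5, 1, 1, -0.5, 0, 0], dtype=float)
        print(polyhedra_intersect(A_box, b1, A_box, b2))   # True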

  11. Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration

    NASA Astrophysics Data System (ADS)

    Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.

    2017-09-01

    One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling, also known as the mobile robot path planning problem. In this paper, we examine the performance of a robust searching algorithm that relies on harmonic potentials of the environment to generate a smooth and safe path for mobile robot navigation in a static, known indoor environment. The harmonic potentials are discretized using the Laplace operator to form a system of algebraic approximation equations. This linear algebraic system is then solved via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is compared and analyzed against existing algorithms in terms of the number of iterations and execution time. The results show that the proposed algorithm performs better than the existing methods.
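
    The sketch below illustrates the harmonic-potential idea with a plain Jacobi relaxation of Laplace's equation on an occupancy grid, followed by steepest descent from the start cell; the 4-EGAOR scheme evaluated in the paper accelerates this relaxation step, and the boundary values used here are assumptions (the grid is assumed to carry an obstacle border).

        import numpy as np

        def harmonic_potential(occ, goal, iters=5000):
            """Relax Laplace's equation on free cells; obstacle cells are held at 1.0
            and the goal cell at 0.0. Plain Jacobi iteration is shown for clarity."""
            u = np.ones_like(occ, dtype=float)
            free = (occ == 0)
            u[free] = 0.5
            u[goal] = 0.0
            for _ in range(iters):
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(free, avg, 1.0)
                u[goal] = 0.0
            return u

        def descend(u, start, max_steps=10000):
            """Follow the steepest descent of the potential from start toward the goal."""
            path, cur = [start], start
            for _ in range(max_steps):
                r, c = cur
                nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < u.shape[0] and 0 <= c + dc < u.shape[1]]
                nxt = min(nbrs, key=lambda p: u[p])
                if u[nxt] >= u[cur]:
                    break
                path.append(nxt)
                cur = nxt
            return path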

  12. An architectural approach to create self organizing control systems for practical autonomous robots

    NASA Technical Reports Server (NTRS)

    Greiner, Helen

    1991-01-01

    For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles towards the construction of more general robot control systems are as follows: (1) Growth problem; (2) Software generation; (3) Interaction with environment; (4) Reliability; and (5) Resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.

  13. An Algorithm for Pedestrian Detection in Multispectral Image Sequences

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.

    2017-05-01

    The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearances of pedestrians. Poor visibility conditions, such as fog and low light, also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on a simplified Kalman filtering scheme suitable for implementation on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. An estimate of the real optical flow is computed from a multispectral image sequence. The difference between the synthetic and real optical flows yields the optical flow induced by pedestrians. The final detection of pedestrians is performed by segmenting this difference of optical flows. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
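
    The core flow-differencing step can be sketched as follows: subtract a synthetic, pedestrian-free flow from the measured flow and segment large residual regions as pedestrian candidates. OpenCV's Farnebäck flow stands in for the paper's multispectral estimator, and the thresholds are illustrative assumptions.

        import cv2
        import numpy as np

        def pedestrian_mask(prev_gray, cur_gray, synthetic_flow, thresh=2.0, min_area=200):
            """Segment regions whose measured optical flow departs from the synthetic
            (pedestrian-free) flow predicted by a planar scene model."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            residual = np.linalg.norm(flow - synthetic_flow, axis=2)
            mask = (residual > thresh).astype(np.uint8)
            # Keep only connected components large enough to be a pedestrian candidate
            n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
            out = np.zeros_like(mask)
            for i in range(1, n):
                if stats[i, cv2.CC_STAT_AREA] >= min_area:
                    out[labels == i] = 1
            return out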

  14. Kinematic path planning for space-based robotics

    NASA Astrophysics Data System (ADS)

    Seereeram, Sanjeev; Wen, John T.

    1998-01-01

    Future space robotics tasks require manipulators of significant dexterity, achievable through kinematic redundancy and modular reconfigurability, but with a corresponding complexity of motion planning. Existing research aims for full autonomy and completeness, at the expense of efficiency, generality or even user friendliness. Commercial simulators require user-taught joint paths, a significant burden for assembly tasks subject to collision avoidance, kinematic and dynamic constraints. Our research has developed a Kinematic Path Planning (KPP) algorithm which bridges the gap between research and industry to produce a powerful and useful product. KPP consists of three key components: path-space iterative search, probabilistic refinement, and an operator guidance interface. The KPP algorithm has been successfully applied to the SSRMS for PMA relocation and dual-arm truss assembly tasks. Other KPP capabilities include Cartesian path following, hybrid Cartesian endpoint/intermediate via-point planning, redundancy resolution and path optimization. KPP incorporates supervisory (operator) input at any level of detail to influence the solution, yielding desirable and predictable paths for multi-jointed arms, avoiding obstacles and obeying manipulator limits. This software will eventually form a marketable robotic planner suitable for commercialization in conjunction with existing robotic CAD/CAM packages.

  15. Full autonomous microline trace robot

    NASA Astrophysics Data System (ADS)

    Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan

    2000-10-01

    Optoelectronic inspection may find applications in robotic systems. In micro robotic systems, a smaller optoelectronic inspection system is preferred. However, as the robot is miniaturized, the number of optoelectronic detectors becomes limited, and this lack of information makes it difficult for the micro robot to determine its status. In our lab, a micro line-trace robot has been designed, which acts autonomously based on its optoelectronic detection. It has been programmed to follow a black line printed on white ground. Besides the optoelectronic inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence. The robot's intelligence is based on an AT89C2051 microcontroller which controls its movement. The technical details of the micro robot are as follows: dimensions: 30 mm x 25 mm x 35 mm; velocity: 60 mm/s.
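
    The kind of reflectance-threshold logic such a line follower uses can be sketched as below (in Python for readability; the actual firmware runs on the AT89C2051). The sensor count, thresholds and command names are assumptions, not the robot's code.

        def line_follow_step(left_dark, centre_dark, right_dark):
            """Decide a motion command from three reflectance sensors.
            Each argument is True when that sensor sees the black line."""
            if centre_dark and not (left_dark or right_dark):
                return "forward"            # line centred under the robot
            if left_dark and not right_dark:
                return "turn_left"          # line drifting to the left
            if right_dark and not left_dark:
                return "turn_right"         # line drifting to the right
            if not (left_dark or centre_dark or right_dark):
                return "search"             # line lost: rotate slowly to reacquire it
            return "forward"                # junction or wide mark: keep going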

  16. Development of Multi-Legged Walking Robot Using Reconfigurable Modular Design and Biomimetic Control Architecture

    NASA Astrophysics Data System (ADS)

    Chen, Xuedong; Sun, Yi; Huang, Qingjiu; Jia, Wenchuan; Pu, Huayan

    This paper focuses on the design of a modular multi-legged walking robot, MiniQuad-I, which can be reconfigured into a variety of configurations, including quadruped and hexapod configurations, for different tasks by changing the layout of its modules. Critical design considerations that take adaptability, maintainability and extensibility into account simultaneously are discussed, and detailed designs of each module are then presented. The biomimetic control architecture of MiniQuad-I is proposed, which improves the agility and independence of the robot. Simulations and experiments on crawling, object picking and obstacle avoidance are performed to verify the functions of MiniQuad-I.

  17. Neural network-based multiple robot simultaneous localization and mapping.

    PubMed

    Saeedi, Sajad; Paull, Liam; Trentini, Michael; Li, Howard

    2011-12-01

    In this paper, a decentralized platform for simultaneous localization and mapping (SLAM) with multiple robots is developed. Each robot performs single robot view-based SLAM using an extended Kalman filter to fuse data from two encoders and a laser ranger. To extend this approach to multiple robot SLAM, a novel occupancy grid map fusion algorithm is proposed. Map fusion is achieved through a multistep process that includes image preprocessing, map learning (clustering) using neural networks, relative orientation extraction using norm histogram cross correlation and a Radon transform, relative translation extraction using matching norm vectors, and then verification of the results. The proposed map learning method is a process based on the self-organizing map. In the learning phase, the obstacles of the map are learned by clustering the occupied cells of the map into clusters. The learning is an unsupervised process which can be done on the fly without any need to have output training patterns. The clusters represent the spatial form of the map and make further analyses of the map easier and faster. Also, clusters can be interpreted as features extracted from the occupancy grid map so the map fusion problem becomes a task of matching features. Results of the experiments from tests performed on a real environment with multiple robots prove the effectiveness of the proposed solution.
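
    As an illustration of extracting a relative orientation from two occupancy maps, the sketch below circularly cross-correlates edge-orientation histograms of the two maps; the gradient-based histogram is an assumption standing in for the paper's norm-histogram cross-correlation and Radon-transform step.

        import numpy as np

        def orientation_histogram(occ_map, bins=180):
            """Histogram of local gradient orientations over occupied structure."""
            gy, gx = np.gradient(occ_map.astype(float))
            mag = np.hypot(gx, gy)
            ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)
            hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
            return hist / (hist.sum() + 1e-12)

        def relative_rotation(map_a, map_b, bins=180):
            """Relative orientation (degrees) that best aligns map_b with map_a,
            found as the argmax of the circular cross-correlation of histograms."""
            ha = orientation_histogram(map_a, bins)
            hb = orientation_histogram(map_b, bins)
            scores = [np.dot(ha, np.roll(hb, s)) for s in range(bins)]
            return (180.0 / bins) * int(np.argmax(scores))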

  18. Micro air vehicle autonomous obstacle avoidance from stereo-vision

    NASA Astrophysics Data System (ADS)

    Brockers, Roland; Kuwata, Yoshiaki; Weiss, Stephan; Matthies, Lawrence

    2014-06-01

    We introduce a new approach for on-board autonomous obstacle avoidance for micro air vehicles flying outdoors in close proximity to structure. Our approach uses inverse-range, polar-perspective stereo-disparity maps for obstacle detection and representation, and deploys a closed-loop RRT planner that considers flight dynamics for trajectory generation. While motion planning is executed in 3D space, we reduce collision checking to a fast z-buffer-like operation in disparity space, which allows for significant speed-up compared to full 3D methods. Evaluations in simulation illustrate the robustness of our approach, whereas real-world flights under tree canopy demonstrate the potential of the approach.
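
    A per-waypoint version of the disparity-space check can be sketched as follows: project the candidate point into the disparity image and flag a collision whenever the stored disparity indicates structure at or closer than the waypoint. The pinhole camera model and safety margin are illustrative assumptions, not the authors' planner.

        import numpy as np

        def waypoint_collides(point_cam, disparity, fx, fy, cx, cy, baseline, margin=1.0):
            """Check a waypoint (camera frame, z forward, metres) against a stereo
            disparity map. Larger disparity means closer structure."""
            x, y, z = point_cam
            if z <= 0.0:
                return True                               # behind the camera: reject
            u = int(round(fx * x / z + cx))
            v = int(round(fy * y / z + cy))
            if not (0 <= v < disparity.shape[0] and 0 <= u < disparity.shape[1]):
                return False                              # outside the field of view
            d_expected = fx * baseline / z                # disparity the waypoint would have
            return disparity[v, u] + margin > d_expected  # scene at/closer than waypoint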

  19. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.

  20. Kinematics and the implementation of an elephant's trunk manipulator and other continuum style robots.

    PubMed

    Hannan, Michael W; Walker, Ian D

    2003-02-01

    Traditionally, robot manipulators have been a simple arrangement of a small number of serially connected links and actuated joints. Though these manipulators prove to be very effective for many tasks, they are not without their limitations, due mainly to their lack of maneuverability or total degrees of freedom. Continuum style (i.e., continuous "back-bone") robots, on the other hand, exhibit a wide range of maneuverability, and can have a large number of degrees of freedom. The motion of continuum style robots is generated through the bending of the robot over a given section; unlike traditional robots where the motion occurs in discrete locations, i.e., joints. The motion of continuum manipulators is often compared to that of biological manipulators such as trunks and tentacles. These continuum style robots can achieve motions that could only be obtainable by a conventionally designed robot with many more degrees of freedom. In this paper we present a detailed formulation and explanation of a novel kinematic model for continuum style robots. The design, construction, and implementation of our continuum style robot called the elephant trunk manipulator is presented. Experimental results are then provided to verify the legitimacy of our model when applied to our physical manipulator. We also provide a set of obstacle avoidance experiments that help to exhibit the practical implementation of both our manipulator and our kinematic model. c2003 Wiley Periodicals, Inc.

  1. Kinematics and the implementation of an elephant's trunk manipulator and other continuum style robots

    NASA Technical Reports Server (NTRS)

    Hannan, Michael W.; Walker, Ian D.

    2003-01-01

    Traditionally, robot manipulators have been a simple arrangement of a small number of serially connected links and actuated joints. Though these manipulators prove to be very effective for many tasks, they are not without their limitations, due mainly to their lack of maneuverability or total degrees of freedom. Continuum style (i.e., continuous "back-bone") robots, on the other hand, exhibit a wide range of maneuverability, and can have a large number of degrees of freedom. The motion of continuum style robots is generated through the bending of the robot over a given section; unlike traditional robots where the motion occurs in discrete locations, i.e., joints. The motion of continuum manipulators is often compared to that of biological manipulators such as trunks and tentacles. These continuum style robots can achieve motions that could only be obtainable by a conventionally designed robot with many more degrees of freedom. In this paper we present a detailed formulation and explanation of a novel kinematic model for continuum style robots. The design, construction, and implementation of our continuum style robot called the elephant trunk manipulator is presented. Experimental results are then provided to verify the legitimacy of our model when applied to our physical manipulator. We also provide a set of obstacle avoidance experiments that help to exhibit the practical implementation of both our manipulator and our kinematic model. c2003 Wiley Periodicals, Inc.

  2. Robot-Assisted Needle Steering

    PubMed Central

    Reed, Kyle B.; Majewicz, Ann; Kallem, Vinutha; Alterovitz, Ron; Goldberg, Ken; Cowan, Noah J.; Okamura, Allison M.

    2012-01-01

    Needle insertion is a critical aspect of many medical treatments, diagnostic methods, and scientific studies, and is considered to be one of the simplest and most minimally invasive medical procedures. Robot-assisted needle steering has the potential to improve the effectiveness of existing medical procedures and enable new ones by allowing increased accuracy through more dexterous control of the needle tip path and acquisition of targets not accessible by straight-line trajectories. In this article, we describe a robot-assisted needle steering system that uses three integrated controllers: a motion planner concerned with guiding the needle around obstacles to a target in a desired plane, a planar controller that maintains the needle in the desired plane, and a torsion compensator that controls the needle tip orientation about the axis of the needle shaft. Experimental results from steering an asymmetric-tip needle in artificial tissue demonstrate the effectiveness of the system and its sensitivity to various environmental and control parameters. In addition, we show an example of needle steering in ex vivo biological tissue to accomplish a clinically relevant task, and highlight challenges of practical needle steering implementation. PMID:23028210

  3. Intelligent Articulated Robot

    NASA Astrophysics Data System (ADS)

    Nyein, Aung Kyaw; Thu, Theint Theint

    2008-10-01

    In this paper, an articulated industrial robot is discussed. The robot is mainly intended for use in pick-and-place operations. It senses an object at a specified place and moves it to a desired location. A peripheral interface controller (PIC16F84A) is used as the main controller of the robot. An infrared LED and IR receiver unit are used for object detection, while 4-bit bidirectional universal shift registers (74LS194) and high-current, high-voltage Darlington transistor arrays (ULN2003) are used for driving the arms' motors. The amount of rotation of each arm is regulated by limit switches. The operation of the robot is very simple, but it is able to recover its position after a power failure: it can continue its work from the last position before the power failed without needing to return to the home position.

  4. Bridging the Gap in Military Robotics (Combler le Fosse Existant dans le Domaine de la Robotique Militaire)

    DTIC Science & Technology

    2008-11-01

    systems must be evaluated at the platform level as well (regenerative braking and similar systems). 4.4.4 The Important Gaps: Several gaps on robot ... in three main categories: • Mobility function: • Obstacle avoidance and negotiation; • Terrain modelling and classification; and • Transport in

  5. A Kalman-Filter-Based Common Algorithm Approach for Object Detection in Surgery Scene to Assist Surgeon's Situation Awareness in Robot-Assisted Laparoscopic Surgery

    PubMed Central

    2018-01-01

    Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist the surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises localization of the object of interest using texture features and morphological information, and tracking of the object based on a Kalman filter for robustness with reduced error. The average recall and precision of the instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of the hemorrhage detection in two prostate surgery videos was 98%. Results demonstrate the robustness of the automatic intraoperative object detection and tracking, which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery. PMID:29854366
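
    A minimal constant-velocity Kalman filter of the kind used to stabilise per-frame detections is sketched below; the state layout, noise values and image-coordinate measurement are illustrative assumptions, not the paper's exact filter.

        import numpy as np

        class CVKalmanTracker:
            """Constant-velocity Kalman filter over image coordinates [x, y, vx, vy]."""
            def __init__(self, x0, y0, dt=1.0, q=1.0, r=5.0):
                self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
                self.P = np.eye(4) * 100.0
                self.F = np.array([[1, 0, dt, 0],
                                   [0, 1, 0, dt],
                                   [0, 0, 1,  0],
                                   [0, 0, 0,  1]], dtype=float)
                self.H = np.array([[1, 0, 0, 0],
                                   [0, 1, 0, 0]], dtype=float)
                self.Q = np.eye(4) * q
                self.R = np.eye(2) * r

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2]

            def update(self, z):
                """z: detected (x, y) centre of the instrument or hemorrhage region."""
                y = np.asarray(z, dtype=float) - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2]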

  6. Robotic inspection for vehicle-borne contraband

    NASA Astrophysics Data System (ADS)

    Witus, Gary; Gerhart, Grant; Smuda, W.; Andrusz, H.

    2006-05-01

    Vehicle-borne smuggling is widespread because of the availability, flexibility and capacity of cars and trucks. Inspecting vehicles at border crossings and checkpoints is a key security element. At the present time, most vehicle security inspections at home and abroad are conducted manually. Remotely operated vehicle inspection robots could be integrated into the operating procedures to improve throughput while reducing the workload burden on security personnel. The robotic inspection must be effective at detecting contraband and efficient at clearing the "clean" vehicles that make up the bulk of the traffic stream, while limiting the workload burden on the operators. In this paper, we present a systems engineering approach to robotic vehicle inspection. We review the tactics, techniques and procedures to interdict contraband. We present an operational concept for robotic vehicle inspection within this framework, and identify needed capabilities. We review the technologies currently available to meet these needs. Finally, we summarize the immediate potential and R&D challenges for effective contraband detection robots.

  7. Video rate color region segmentation for mobile robotic applications

    NASA Astrophysics Data System (ADS)

    de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline

    2005-08-01

    Color regions may be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. However, whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing time. In this paper, we propose a new real-time (i.e., video-rate) color region segmentation followed by a robust color classification and merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and a comparison with other methods, in terms of result quality and temporal performance, are provided. For better quality results, the obtained speed-up is between 2 and 4; for the same quality results, it is up to 10. We also present the outline of the Dynamic Vision System of the CLEOPATRE project, for which this segmentation has been developed, and the Clear Box methodology which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentations.

  8. Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning

    PubMed Central

    Alonso-Martín, Fernando; Gamboa-Montero, Juan José; Castillo, José Carlos; Castro-González, Álvaro; Salichs, Miguel Ángel

    2017-01-01

    An important aspect in Human–Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach, where the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection), and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective: a single microphone is enough to cover each solid part of the robot, so just a few microphones can cover the robot's whole shell. Besides, it is easy to install and configure, as it only requires a contact surface to attach the microphone to the robot's shell and a connection to the robot's computer. Results show high accuracy in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures. PMID:28509865

  9. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both lug and plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: the top view alignment and the side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007

  10. Evolutionary Fuzzy Control and Navigation for Two Wheeled Robots Cooperatively Carrying an Object in Unknown Environments.

    PubMed

    Juang, Chia-Feng; Lai, Min-Ge; Zeng, Wan-Ting

    2015-09-01

    This paper presents a method that allows two wheeled, mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FC) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot to perform OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms for the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.

  11. 3D-Sonification for Obstacle Avoidance in Brownout Conditions

    NASA Technical Reports Server (NTRS)

    Godfroy-Cooper, M.; Miller, J. D.; Szoboszlay, Z.; Wenzel, E. M.

    2017-01-01

    Helicopter brownout is a phenomenon that occurs when making landing approaches in dusty environments, whereby sand or dust particles become swept up in the rotor outwash. Brownout is characterized by partial or total obscuration of the terrain, which degrades visual cues necessary for hovering and safe landing. Furthermore, the motion of the dust cloud produced during brownout can lead to the pilot experiencing motion cue anomalies such as vection illusions. In this context, the stability and guidance control functions can be intermittently or continuously degraded, potentially leading to undetected surface hazards and obstacles as well as unnoticed drift. Safe and controlled landing in brownout can be achieved using an integrated presentation of LADAR and RADAR imagery and aircraft state symbology. However, though detected by the LADAR and displayed on the sensor image, small obstacles can be difficult to discern from the background, so that changes in obstacle elevation may go unnoticed. Moreover, pilot workload associated with tracking the displayed symbology is often so high that the pilot cannot give sufficient attention to the LADAR/RADAR image. This paper documents a simulation evaluating the use of 3D auditory cueing for obstacle avoidance in brownout as a replacement for or complement to LADAR/RADAR imagery.

  12. Learning Semantics of Gestural Instructions for Human-Robot Collaboration

    PubMed Central

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888

  13. Learning Semantics of Gestural Instructions for Human-Robot Collaboration.

    PubMed

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions.

  14. Robot gripper

    NASA Technical Reports Server (NTRS)

    Webb, Winston S. (Inventor)

    1987-01-01

    An electronic force-detecting robot gripper for gripping objects and attaching to an external robot arm is disclosed. The gripper comprises motor apparatus, gripper jaws, and electrical circuits for driving the gripper motor and sensing the amount of force applied by the jaws. The force applied by the jaws is proportional to the motor current; when the motor current exceeds a threshold value, the electrical circuits supply a feedback signal to the electrical control circuit which, in turn, stops the gripper motor.

  15. Trust-based learning and behaviors for convoy obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Mikulski, Dariusz G.; Karlsen, Robert E.

    2015-05-01

    In many multi-agent systems, robots within the same team are regarded as being fully trustworthy for cooperative tasks. However, the assumption of trustworthiness is not always justified, which may not only increase the risk of mission failure, but also endanger the lives of friendly forces. In prior work, we addressed this issue by using RoboTrust to dynamically adjust to observed behaviors or recommendations in order to mitigate the risks of illegitimate behaviors. However, in the simulations in prior work, all members of the convoy had knowledge of the convoy goal. In this paper, only the lead vehicle has knowledge of the convoy goals and the follow vehicles must infer trustworthiness strictly from lead vehicle performance. In addition, RoboTrust could only respond to observed performance and did not dynamically learn agent behavior. In this paper, we incorporate an adaptive agent-specific bias into the RoboTrust algorithm that modifies its trust dynamics. This bias is learned incrementally from agent interactions, allowing good agents to benefit from faster trust growth and slower trust decay and bad agents to be penalized with slower trust growth and faster trust decay. We then integrate this new trust model into a trust-based controller for decentralized autonomous convoy operations. We evaluate its performance in an obstacle avoidance mission, where the convoy attempts to learn the best speed and following distances combinations for an acceptable obstacle avoidance probability.

  16. Needs for Robotic Assessments of Nuclear Disasters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor Walker; Derek Wadsworth

    Following the nuclear disaster at the Fukushima nuclear reactor plant in Japan, the need for systems which can assist in dynamic high-radiation environments such as nuclear incidents has become more apparent. The INL participated in delivering robotic technologies to Japan and has identified key components which are needed for success, as well as obstacles to their deployment. In addition, we are proposing new work and methods to improve assessments of, and reactions to, such events in the future. Robotics needs in disaster situations span several phases: assessment, remediation, and recovery. Our particular interest is in the initial assessment activities. In assessment we need collection of environmental parameters, determination of conditions, and physical sample collection. Each phase requires key tools and development efforts, including study of the necessary sensors and their deployment methods, the effects of radiation on sensors and deployment, and the development of training and execution systems.

  17. Robots show us how to teach them: feedback from robots shapes tutoring behavior during action learning.

    PubMed

    Vollmer, Anna-Lisa; Mühlig, Manuel; Steil, Jochen J; Pitsch, Karola; Fritsch, Jannik; Rohlfing, Katharina J; Wrede, Britta

    2014-01-01

    Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction.

  18. Robots Show Us How to Teach Them: Feedback from Robots Shapes Tutoring Behavior during Action Learning

    PubMed Central

    Vollmer, Anna-Lisa; Mühlig, Manuel; Steil, Jochen J.; Pitsch, Karola; Fritsch, Jannik; Rohlfing, Katharina J.; Wrede, Britta

    2014-01-01

    Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction. PMID:24646510

  19. Nurse's Aid And Housekeeping Mobile Robot For Use In The Nursing Home Workplace

    NASA Astrophysics Data System (ADS)

    Sines, John A.

    1987-01-01

    The large nursing home market has several natural characteristics which make it a good application area for robotics. The environment is already robot-accessible, and the work functions require large quantities of low-skilled services on a daily basis. In the near future, a commercial opportunity for the practical application of robots is emerging in the delivery of housekeeping services in the nursing home environment. The robot systems will assist in food tray delivery, material handling, and security, and will perform activities such as changing a resident's table-side drinking water twice a day and taking out the trash. The housekeeping work functions will generate cost savings of approximately 22,000 per year, at a cost of 6,000 per year. Technical system challenges center around the artificial intelligence required for the robot to map its own location within the facility, to find objects, and to avoid obstacles, and the development of an energy-efficient mechanical lifting system. The long engineering and licensing cycles (7 to 12 years) required to bring this type of product to market make it difficult to raise capital for such a venture.

  20. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, Francois G.

    2002-06-01

    Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added modules). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real time changes in number

  1. Adapting sensory data for multiple robots performing spill cleanup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storjohann, K.; Saltzen, E.

    1990-09-01

    This paper describes a possible method for converting a single-robot algorithm into a multiple-robot algorithm without the need to modify previously written code. The algorithm to be converted involves spill detection and clean-up by the HERMIES-III mobile robot. In order to achieve the goal of multiple performing robots with this algorithm, two steps are taken. First, the task is formally divided into two sub-tasks, spill detection and spill clean-up, the former of which is allocated to the added performing robot, HERMIES-IIB. Second, an inverse perspective mapping is applied to the data acquired by the new performing robot (HERMIES-IIB), allowing the data to be processed by the previously written algorithm without re-writing the code. 6 refs., 4 figs.
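
    An inverse perspective mapping of this kind can be sketched as a ground-plane homography estimated from four known correspondences and applied to the second robot's camera image; the OpenCV calls and correspondence points are assumptions, not the HERMIES code.

        import cv2
        import numpy as np

        def inverse_perspective_map(image, image_pts, ground_pts, out_size=(500, 500)):
            """Warp a perspective camera view onto a top-down ground-plane grid.
            image_pts:  4 pixel coordinates of known ground-plane points in the camera view.
            ground_pts: the same 4 points in the output (bird's-eye) pixel frame."""
            H, _ = cv2.findHomography(np.float32(image_pts), np.float32(ground_pts))
            return cv2.warpPerspective(image, H, out_size)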

  2. Designing a social and assistive robot for seniors.

    PubMed

    Eftring, H; Frennert, S

    2016-06-01

    The development of social assistive robots is an approach with the intention of preventing and detecting falls among seniors. There is a need for a relatively low-cost mobile robot with an arm and a gripper which is small enough to navigate through private homes. User requirements of a social assistive robot were collected using workshops, a questionnaire and interviews. Two prototype versions of a robot were designed, developed and tested by senior citizens (n = 49) in laboratory trials for 2 h each and in the private homes of elderly persons (n = 18) for 3 weeks each. The user requirement analysis resulted in a specification of tasks the robot should be able to do to prevent and detect falls. It was a challenge but possible to design and develop a robot where both the senior and the robot arm could reach the necessary interaction points of the robot. The seniors experienced the robot as happy and friendly. They wanted the robot to be narrower so it could pass through narrow passages in the home and they also wanted it to be able to pass over thresholds without using ramps and to drive over carpets. User trials in seniors' homes are very important to acquire relevant knowledge for developing robots that can handle real life situations in the domestic environment. Very high reliability of a robot is needed to get feedback about how seniors experience the overall behavior of the robot and to find out if the robot could reduce falls and improve the feeling of security for seniors living alone.

  3. Anticipation as a Strategy: A Design Paradigm for Robotics

    NASA Astrophysics Data System (ADS)

    Williams, Mary-Anne; Gärdenfors, Peter; Johnston, Benjamin; Wightwick, Glenn

    Anticipation plays a crucial role during any action, particularly in agents operating in open, complex and dynamic environments. In this paper we consider the role of anticipation as a strategy from a design perspective. Anticipation is a crucial skill in sporting games like soccer, tennis and cricket. We explore the role of anticipation in robot soccer matches in the context of reaching the RoboCup vision to develop a robot soccer team capable of defeating the FIFA World Champions in 2050. Anticipation in soccer can be planned or emergent but whether planned or emergent, anticipation can be designed. Two key obstacles stand in the way of developing more anticipatory robot systems; an impoverished understanding of the "anticipation" process/capability and a lack of know-how in the design of anticipatory systems. Several teams at RoboCup have developed remarkable preemptive behaviors. The CMU Dive and UTS Dodge are two compelling examples. In this paper we take steps towards designing robots that can adopt anticipatory behaviors by proposing an innovative model of anticipation as a strategy that specifies the key characteristics of anticipation behaviors to be developed. The model can drive the design of autonomous systems by providing a means to explore and to represent anticipation requirements. Our approach is to analyze anticipation as a strategy and then to use the insights obtained to design a reference model that can be used to specify a set of anticipatory requirements for guiding an autonomous robot soccer system.

  4. Path Planning for Non-Circular, Non-Holonomic Robots in Highly Cluttered Environments.

    PubMed

    Samaniego, Ricardo; Lopez, Joaquin; Vazquez, Fernando

    2017-08-15

    This paper presents an algorithm for finding a solution to the problem of planning a feasible path for a slender autonomous mobile robot in a large and cluttered environment. The presented approach is based on performing a graph search on a kinodynamic-feasible lattice state space of high resolution; however, the technique is applicable to many search algorithms. With the purpose of allowing the algorithm to consider paths that take the robot through narrow passes and close to obstacles, high resolutions are used for the lattice space and the control set. This introduces new challenges because one of the most computationally expensive parts of path search based planning algorithms is calculating the cost of each one of the actions or steps that could potentially be part of the trajectory. The reason for this is that the evaluation of each one of these actions involves convolving the robot's footprint with a portion of a local map to evaluate the possibility of a collision, an operation that grows exponentially as the resolution is increased. The novel approach presented here reduces the need for these convolutions by using a set of offline precomputed maps that are updated, by means of a partial convolution, as new information arrives from sensors or other sources. Not only does this improve run-time performance, but it also provides support for dynamic search in changing environments. A set of alternative fast convolution methods are also proposed, depending on whether the environment is cluttered with obstacles or not. Finally, we provide both theoretical and experimental results from different experiments and applications.
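
    The footprint-convolution idea can be sketched as follows: for each discretised heading, convolve the obstacle map with the rotated footprint offline, so that a pose's collision status at query time is a single array lookup. The FFT convolution and binary footprint below are assumptions; the paper additionally updates such maps with partial convolutions as new sensor data arrive.

        import numpy as np
        from scipy.ndimage import rotate
        from scipy.signal import fftconvolve

        def precompute_collision_maps(obstacle_map, footprint, n_headings=16):
            """For each discretised heading, convolve the obstacle map with the
            rotated robot footprint. A True value at (row, col) means the footprint
            placed there with that heading overlaps at least one obstacle cell."""
            maps = []
            for k in range(n_headings):
                angle = 360.0 * k / n_headings
                fp = rotate(footprint.astype(float), angle, reshape=True, order=0)
                conv = fftconvolve(obstacle_map.astype(float), fp, mode="same")
                maps.append(conv > 0.5)
            return maps

        def pose_in_collision(maps, row, col, heading_idx):
            """O(1) collision query for a lattice pose once the maps are precomputed."""
            return bool(maps[heading_idx][row, col])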

  5. Goal driven kinematic simulation of flexible arm robot for space station missions

    NASA Technical Reports Server (NTRS)

    Janssen, P.; Choudry, A.

    1987-01-01

    Flexible arms offer a great degree of flexibility in maneuvering in the space environment. The problem of transporting an astronaut for extra-vehicular activity using a space station based flexible arm robot was studied. Inverse kinematic solutions of the multilink structure were developed. The technique is goal driven and can support decision making for configuration selection as required for stability and obstacle avoidance. Details of this technique and results are given.

  6. Intelligent robots for planetary exploration and construction

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1992-01-01

    Robots capable of practical applications in planetary exploration and construction will require realtime sensory-interactive goal-directed control systems. A reference model architecture based on the NIST Real-time Control System (RCS) for real-time intelligent control systems is suggested. RCS partitions the control problem into four basic elements: behavior generation (or task decomposition), world modeling, sensory processing, and value judgment. It clusters these elements into computational nodes that have responsibility for specific subsystems, and arranges these nodes in hierarchical layers such that each layer has characteristic functionality and timing. Planetary exploration robots should have mobility systems that can safely maneuver over rough surfaces at high speeds. Walking machines and wheeled vehicles with dynamic suspensions are candidates. The technology of sensing and sensory processing has progressed to the point where real-time autonomous path planning and obstacle avoidance behavior is feasible. Map-based navigation systems will support long-range mobility goals and plans. Planetary construction robots must have high strength-to-weight ratios for lifting and positioning tools and materials in six degrees-of-freedom over large working volumes. A new generation of cable-suspended Stewart platform devices and inflatable structures are suggested for lifting and positioning materials and structures, as well as for excavation, grading, and manipulating a variety of tools and construction machinery.

  7. Do characteristics of a stationary obstacle lead to adjustments in obstacle stepping strategies?

    PubMed

    Worden, Timothy A; De Jong, Audrey F; Vallis, Lori Ann

    2016-01-01

    Navigating cluttered and complex environments increases the risk of falling. To decrease this risk, it is important to understand the influence of obstacle visual cues on stepping parameters; however, the specific obstacle characteristics that have the greatest influence on avoidance strategies are still under debate. The purpose of the current work is to provide further insight into the relationship between obstacle appearance in the environment and modulation of stepping parameters. Healthy young adults (N=8) first stepped over an obstacle with one visible top edge ("floating"; 8 trials), followed by trials where experimenters randomly altered the location of a ground reference object to one of 7 different positions (8 trials per location), ranging from 6 cm in front of, to directly under, to 6 cm behind the floating obstacle (at 2 cm intervals). Mean take-off and landing distance as well as minimum foot clearance values were unchanged across the different positions of the ground reference object; a consistent stepping trajectory was observed for all experimental conditions. Contrary to our hypotheses, the results of this study indicate that ground-based visual cues are not essential for the planning of stepping and clearance strategies. The simultaneous presentation of both floating and ground-based objects may have provided critical information that led to the adoption of a consistent strategy for clearing the top edge of the obstacle. The invariant foot placement observed here may be an appropriate stepping strategy for young adults; however, this may not be the case across the lifespan or in special populations. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. A study on a wheel-based stair-climbing robot with a hopping mechanism

    NASA Astrophysics Data System (ADS)

    Kikuchi, Koki; Sakaguchi, Keisuke; Sudo, Takayuki; Bushida, Naoki; Chiba, Yasuhiro; Asai, Yuji

    2008-08-01

    In this study, we propose a simple hopping mechanism that uses the vibration of a two-degree-of-freedom system for a wheel-based stair-climbing robot. The robot, consisting of two bodies connected by springs and a wire, hops by releasing energy stored in the springs and travels quickly using wheels mounted on its lower body. The trajectories of the bodies during hopping change in accordance with the design parameters, such as the reduced mass of the two bodies, the mass ratio between the upper and lower bodies, and the spring constant, and with the control parameters, such as the initial contraction of the spring and the wire tension. This property allows the robot to quickly and economically climb up and down stairs, leap over obstacles, and land softly without complex control. In this paper, the characteristics of the hopping motion with respect to the design and control parameters are clarified by both numerical simulations and experiments. Furthermore, using a robot designed on the basis of these results, the abilities to hop up and down a step, leap over a cable, and land softly are demonstrated.

  9. STARR: shortwave-targeted agile Raman robot for the detection and identification of emplaced explosives

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.

    2014-05-01

    In order to combat the threat of emplaced explosives (land mines, etc.), ChemImage Sensor Systems (CISS) has developed a multi-sensor, robot mounted sensor capable of identification and confirmation of potential threats. The system, known as STARR (Shortwave-infrared Targeted Agile Raman Robot), utilizes shortwave infrared spectroscopy for the identification of potential threats, combined with a visible short-range standoff Raman hyperspectral imaging (HSI) system for material confirmation. The entire system is mounted onto a Talon UGV (Unmanned Ground Vehicle), giving the sensor an increased area search rate and reducing the risk of injury to the operator. The Raman HSI system utilizes a fiber array spectral translator (FAST) for the acquisition of high quality Raman chemical images, allowing for increased sensitivity and improved specificity. An overview of the design and operation of the system will be presented, along with initial detection results of the fusion sensor.

  10. Non linear predictive control of a LEGO mobile robot

    NASA Astrophysics Data System (ADS)

    Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.

    2014-10-01

    Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply a metaheuristic, namely particle swarm optimization (PSO), to the solution of the optimization problem arising in NLMPC. This algorithm is easy to code and may be considered an alternative to more classical solution procedures. The PSO-NLMPC is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
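
    A minimal sketch of this optimization step is given below: a basic particle swarm searches over a short horizon of (v, w) commands for a unicycle-type robot and returns the first control to apply, in the receding-horizon spirit of NLMPC. The kinematic model, horizon length, cost weights and PSO constants are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: PSO minimizing a finite-horizon tracking cost, as one might do
# inside an NLMPC loop.  Model, horizon and constants are assumptions.
import numpy as np

def rollout_cost(u_seq, x0, x_ref, dt=0.1):
    # u_seq: flattened [v1, w1, v2, w2, ...]; simple unicycle kinematics
    x, y, th = x0
    cost = 0.0
    for v, w in u_seq.reshape(-1, 2):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        cost += (x - x_ref[0]) ** 2 + (y - x_ref[1]) ** 2 + 0.01 * (v ** 2 + w ** 2)
    return cost

def pso(cost, dim, n_particles=30, iters=60, lo=-1.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

horizon = 5                               # 5 steps of (v, w)
u_opt, j = pso(lambda u: rollout_cost(u, (0, 0, 0), (1.0, 0.5)), dim=2 * horizon)
print("first control to apply:", u_opt[:2], "cost:", round(j, 3))
```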

  11. Training industrial robots with gesture recognition techniques

    NASA Astrophysics Data System (ADS)

    Piane, Jennifer; Raicu, Daniela; Furst, Jacob

    2013-01-01

    In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, be able to accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods: a color-thresholding model, naïve Bayes analysis and a Support Vector Machine (SVM) to detect the human hand. Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy due to the way the human hand can alter its apparent shape while moving, the inconsistencies in human motion and detection failures in the cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move, by smoothing the data to reduce noise and looking for significant points used to determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a "real" environment to move the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was determined by observers, who indicated that the motion of the robot appeared to match the motion in the video.

  12. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot has been designed for the 1996 Automated Unmanned Vehicle Society competition, which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft. path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line-following algorithm images two windows and locates the line centroid in each; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
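
    The geometric step described above can be sketched as follows: given the ground-plane coordinates of the line centroids recovered from the two image windows, compute the line heading and its perpendicular distance from the robot centroid for use in steering. The calibration that maps image centroids to ground coordinates is assumed to have been done already; the sample coordinates are invented.

```python
# Sketch of the steering geometry: heading of the line and its perpendicular
# distance from the robot centroid, from two ground-plane points.
import math

def line_angle_and_distance(p1, p2, robot=(0.0, 0.0)):
    (x1, y1), (x2, y2) = p1, p2
    angle = math.atan2(y2 - y1, x2 - x1)            # line heading in robot frame
    # perpendicular distance from the robot centroid to the infinite line
    num = abs((y2 - y1) * robot[0] - (x2 - x1) * robot[1] + x2 * y1 - y2 * x1)
    dist = num / math.hypot(x2 - x1, y2 - y1)
    return angle, dist

# Example: line centroids 0.4 m and 0.9 m ahead, both about 0.3 m to the left.
ang, d = line_angle_and_distance((0.4, 0.30), (0.9, 0.33))
print(f"line heading {math.degrees(ang):.1f} deg, offset {d:.2f} m")
```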

  13. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair served as the robot base; it was purchased from a local thrift store for $28. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. In order to control the wheelchair while retaining its robust built-in motor controls, the joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
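
    A small, self-contained sketch of the kind of color segmentation shared by all three approaches is shown below: RGB thresholds flag candidate "orange obstacle" and "white line" pixels in a frame. The thresholds and the tiny synthetic frame are illustrative guesses, not the team's calibrated values.

```python
# Hedged color-segmentation sketch; thresholds and frame are illustrative only.
import numpy as np

def segment(rgb):
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    orange = (r > 180) & (g > 60) & (g < 160) & (b < 80)   # obstacle barrels
    white = (r > 200) & (g > 200) & (b > 200)               # boundary lines
    return orange, white

# Tiny synthetic frame: one orange pixel, one white pixel, the rest grass-green.
frame = np.full((2, 2, 3), (40, 120, 40), dtype=np.uint8)
frame[0, 0] = (230, 120, 30)
frame[1, 1] = (250, 250, 250)
obst, line = segment(frame)
print("obstacle pixels:", np.argwhere(obst).tolist(),
      "line pixels:", np.argwhere(line).tolist())
```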

  14. An Online Change of Activity in Energy Spectrum for Detection on an Early Intervention Robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boudergui, K.; Laine, F.; Montagu, T.

    With the growth of industrial risks and the multiplication of CBRNe (Chemical, Biological, Radiological and explosive) attacks through toxic chemicals, biological or radiological threats, public services and military authorities face increasingly critical situations, whose management is strongly conditioned by the fast and reliable establishment of an informative diagnostic. Right after an attack, the first five minutes are crucial to define the various scenarios and those most dangerous for a human intervention. Therefore the use of robots is considered essential by all stakeholders of security. In this context, the SISPEO project (Systeme d'Intervention Sapeurs Pompiers Robotise) aims to create/build/design a robust response through a robotic platform for early intervention services such as civil and military security in hostile environments. CEA LIST has proposed an adapted solution to detect and characterize nuclear and radiological risks online and in motion, using a miniature embedded CdZnTe (CZT) crystal Gamma-ray spectrometer. This paper presents experimental results for this miniature embedded CZT spectrometer and its associated mathematical method to detect and characterize radiological threats online and in motion. (authors)

  15. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.

    PubMed

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F

    2016-03-05

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.

  16. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    PubMed Central

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.

    2016-01-01

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work. PMID:26959030

  17. Women's orgasm obstacles: A qualitative study.

    PubMed

    Nekoolaltak, Maryam; Keshavarz, Zohreh; Simbar, Masoumeh; Nazari, Ali Mohammad; Baghestani, Ahmad Reza

    2017-08-01

    A woman's orgasm plays a vital role in sexual compatibility and marital satisfaction. Orgasm in women is a learnable phenomenon that is influenced by several factors. The aim of this study is to explore obstacles to orgasm in Iranian married women. This qualitative study, with a directed content analysis approach, was conducted in 2015-2016 on 20 Iranian married women who were individually interviewed at two medical clinics in Tehran, Iran. Orgasm obstacles were explored in one category, 4 subcategories, and 25 codes. The main category was "Multidimensionality of women's orgasm obstacles". Subcategories and some codes included: physical obstacles (wife's or husband's boredom, vaginal infection, insufficient vaginal lubrication), psychological obstacles (lack of sexual knowledge, shame, lack of concentration on sex due to household and children problems), relational obstacles (husband's hurry, having a dispute and annoyance with spouse) and contextual obstacles (irregular sleep hours, lack of privacy and inability to separate children's bedroom from their parents', lack of peace at home). For the prevention or treatment of female orgasm disorders, attention to physical factors is not enough. Obtaining a comprehensive history of the physical, psychological, relational and contextual dimensions of a woman's life is necessary.

  18. Collision-free motion of two robot arms in a common workspace

    NASA Technical Reports Server (NTRS)

    Basta, Robert A.; Mehrotra, Rajiv; Varanasi, Murali R.

    1987-01-01

    Collision-free motion of two robot arms in a common workspace is investigated. A collision-free motion is obtained by detecting collisions along the preplanned trajectories using a sphere model for the wrist of each robot and then modifying the paths and/or trajectories of one or both robots to avoid the collision. Detecting and avoiding collisions are based on the premise that: preplanned trajectories of the robots follow a straight line; collisions are restricted to between the wrists of the two robots (which corresponds to the upper three links of PUMA manipulators); and collisions never occur between the beginning points or end points on the straight line paths. The collision detection algorithm is described and some approaches to collision avoidance are discussed.
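
    The sphere-model collision check lends itself to a short sketch: with each wrist treated as a sphere whose centre moves along a straight line, the minimum centre-to-centre separation over the motion is compared against the sum of the radii. The radii and waypoints below are illustrative, not taken from the paper.

```python
# Hedged sketch of a sphere-model check on straight-line wrist trajectories.
import numpy as np

def min_separation(a0, a1, b0, b1):
    # centres move linearly and synchronously: a(t) = a0 + t*(a1-a0), t in [0, 1]
    a0, a1, b0, b1 = map(np.asarray, (a0, a1, b0, b1))
    d0 = a0 - b0
    dv = (a1 - a0) - (b1 - b0)
    denom = dv @ dv
    t = 0.0 if denom < 1e-12 else float(np.clip(-(d0 @ dv) / denom, 0.0, 1.0))
    closest = d0 + t * dv
    return float(np.linalg.norm(closest)), t

r_a, r_b = 0.12, 0.12                          # wrist sphere radii [m] (assumed)
dist, t_min = min_separation((0, 0, 0.5), (1, 1, 0.5), (1, 0, 0.5), (0, 1, 0.5))
print(f"closest approach {dist:.3f} m at t = {t_min:.2f};",
      "collision" if dist < r_a + r_b else "clear")
```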

  19. Object Detection Applied to Indoor Environments for Mobile Robot Navigation

    PubMed Central

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-01-01

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests. PMID:27483264

  20. Object Detection Applied to Indoor Environments for Mobile Robot Navigation.

    PubMed

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-07-28

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests.
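
    As a minimal illustration of the classification step (synthetic data, not the authors' descriptors or training set), the sketch below trains an SVM on per-object feature vectors standing in for geometric shape descriptors, then predicts the class of newly segmented objects.

```python
# Hedged sketch: SVM classification of object descriptors (synthetic data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# fake descriptors [compactness, aspect_ratio, mean_depth] for two object classes
class_a = rng.normal([0.4, 1.8, 1.5], 0.05, (40, 3))
class_b = rng.normal([0.8, 1.0, 1.2], 0.05, (40, 3))
X = np.vstack([class_a, class_b])
y = np.array([0] * 40 + [1] * 40)            # 0 = first class, 1 = second class

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict([[0.42, 1.75, 1.48], [0.79, 0.98, 1.25]]))   # expect [0, 1]
```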

  1. JPL Robotics Technology Applicable to Agriculture

    NASA Technical Reports Server (NTRS)

    Udomkesmalee, Suraphol Gabriel; Kyte, L.

    2008-01-01

    This slide presentation describes several technologies developed for robotics that are applicable to agriculture. The technologies discussed are the detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation and transfer to growth media.

  2. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
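
    Not the UCSD system itself, but the same family of technique can be sketched with OpenCV's boosted-cascade detector: a cascade of weak classifiers trained with an AdaBoost-style algorithm is slid across the image and returns bounding boxes. The bundled face cascade below merely stands in for a license-plate or soda-can model, which would require its own training data.

```python
# Hedged sketch of boosted-cascade detection using OpenCV's bundled cascades.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    return [(int(x), int(y), int(w), int(h)) for (x, y, w, h) in boxes]

# Usage: boxes = detect(cv2.imread("frame.png")); each box is (x, y, w, h).
```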

  3. Flexible Virtual Structure Consideration in Dynamic Modeling of Mobile Robots Formation

    NASA Astrophysics Data System (ADS)

    El Kamel, A. Essghaier; Beji, L.; Lerbet, J.; Abichou, A.

    2009-03-01

    In cooperative mobile robotics, we look for formation keeping and the maintenance of a geometric configuration during movement. As a solution to these problems, the concept of a virtual structure is considered. Based on this idea, we have developed an efficient flexible virtual structure describing the dynamic model of n vehicles in formation, in which the whole formation is kept dependent. Note that, for 2D and 3D space navigation, only a rigid virtual structure has been proposed in the literature; further, the problem was limited to the kinematic behavior of the structure. Hence, the flexible virtual structure in the dynamic modeling of mobile robot formations presented in this paper gives the formation more capability to avoid obstacles in hostile environments while keeping formation and avoiding inter-agent collisions.

  4. Real-Time Occupancy Change Analyzer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The Real-Time Occupancy Change Analyzer (ROCA) produces an occupancy grid map of an environment around the robot, scans the environment to generate a current obstacle map relative to a current robot position, and converts the current obstacle map to a current occupancy grid map. Changes in the occupancy grid can be reported in real time to support a number of tracking capabilities. The benefit of ROCA is that rather than only providing a vector to the detected change, it provides the actual x,y position of the change.
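
    A toy sketch of the ROCA idea follows: the current obstacle map is thresholded into an occupancy grid, differenced against the previous grid, and the actual x, y position of every changed cell is reported rather than only a vector toward the change. The grid resolution and origin are assumed values.

```python
# Hedged sketch of occupancy-change reporting; resolution/origin are assumed.
import numpy as np

RES = 0.25                       # grid resolution [m/cell] (assumed)
ORIGIN = np.array([-5.0, -5.0])  # world position of cell (0, 0) (assumed)

def changed_positions(prev_grid, curr_grid, threshold=0.5):
    prev_occ = prev_grid > threshold
    curr_occ = curr_grid > threshold
    changed = np.argwhere(prev_occ != curr_occ)          # (row, col) indices
    # convert each changed cell centre to a world (x, y) position
    return [tuple(ORIGIN + (rc[::-1] + 0.5) * RES) for rc in changed]

prev = np.zeros((40, 40))
curr = prev.copy()
curr[10, 22] = 0.9               # an obstacle appeared here since the last scan
print(changed_positions(prev, curr))   # world (x, y) of the new obstacle cell
```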

  5. Terrain classification in navigation of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1991-03-01

    In this paper we describe a method of path planning that integrates terrain classification (by means of fractals) the certainty grid method of spatial representation Kehtarnavaz Griswold collision-zones Dubois Prade fuzzy temporal and spatial knowledge and non-point sized qualitative navigational planning. An initially planned (" end-to-end" ) path is piece-wise modified to accommodate known and inferred moving obstacles and includes attention to time-varying multiple subgoals which may influence a section of path at a time after the robot has begun traversing that planned path.

  6. Automated Robot Movement in the Mapped Area Using Fuzzy Logic for Wheel Chair Application

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Efendi, S.; Ramadhana, H.; Andayani, U.; Fahmi, F.

    2018-03-01

    The difficulty that disabled people have in moving can make them unable to live independently. People with disabilities need a supporting device to move from place to place. For that, we propose a solution that can help people with disabilities move from one room to another automatically. This study aims to create a wheelchair prototype in the form of a wheeled robot as a means to study automatic mobilization. A fuzzy logic algorithm was used to determine the motion direction based on the initial position, with ultrasonic sensor readings for avoiding obstacles, infrared sensor readings acting as a black-line reader so that the wheeled robot moves smoothly, and a smartphone as the mobile controller. As a result, smartphones with the Android operating system can control the robot using Bluetooth. Here Bluetooth technology can be used to control the robot from a maximum distance of 15 meters. The proposed algorithm was able to work stably for automatic motion determination based on the initial position, and was also able to carry out the wheelchair movement from one room to another automatically.

  7. Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Iswanto; Wahyunggoro, Oyas; Cahyadi, Adha Imam

    2017-04-01

    The paper aims to present a path-planning algorithm for multiple quadrotors so that they can move towards the goal quickly and avoid obstacles in an area containing obstacles. There are several problems in path planning, including how to reach the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper presents a fuzzy logic algorithm and a fuzzy cell decomposition algorithm. The fuzzy logic algorithm is one of the artificial intelligence algorithms which can be applied to robot path planning and is able to detect static and dynamic obstacles. The cell decomposition algorithm is a graph-theoretic algorithm used to build a map of robot paths. By using the two algorithms the robot is able to reach the goal position and avoid obstacles, but it takes considerable time because they are unable to find the shortest path. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm used to provide weight values on the map applied for each quadrotor under decentralized control, so that the quadrotor is able to move to the goal position quickly by finding the shortest path. The simulations conducted have shown that the multi-quadrotor system can avoid various obstacles and find the shortest path by using the proposed algorithms.
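
    A compact sketch of the combination described above is given below: obstacle cells are impassable, neighbouring cells carry potential-field weights, and a shortest-path search over the weighted cell graph returns the route a quadrotor should follow. The grid, weights and start/goal cells are illustrative, not from the paper.

```python
# Hedged sketch: shortest path over a grid whose cells carry potential weights.
import heapq
import numpy as np

def plan(weights, start, goal):
    rows, cols = weights.shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, np.inf):
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and np.isfinite(weights[nb]):
                nd = d + 1.0 + weights[nb]          # step cost + potential weight
                if nd < dist.get(nb, np.inf):
                    dist[nb], prev[nb] = nd, cell
                    heapq.heappush(pq, (nd, nb))
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]

w = np.zeros((10, 10))
w[4, 2:8] = np.inf                   # a wall of impassable obstacle cells
w[3, 2:8] = w[5, 2:8] = 5.0          # high potential weight near the wall
print(plan(w, start=(0, 0), goal=(9, 9)))
```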

  8. Obstacle-avoiding navigation system

    DOEpatents

    Borenstein, Johann; Koren, Yoram; Levine, Simon P.

    1991-01-01

    A system for guiding an autonomous or semi-autonomous vehicle through a field of operation having obstacles thereon to be avoided employs a memory for containing data which defines an array of grid cells which correspond to respective subfields in the field of operation of the vehicle. Each grid cell in the memory contains a value which is indicative of the likelihood, or probability, that an obstacle is present in the respectively associated subfield. The values in the grid cells are incremented individually in response to each scan of the subfields, and precomputation and use of a look-up table avoids complex trigonometric functions. A further array of grid cells is fixed with respect to the vehicle to form a conceptual active window which overlies the incremented grid cells. Thus, when the cells in the active window overlie grid cells having values which are indicative of the presence of obstacles, the values therein are used as multipliers of the precomputed vectorial values. The resulting plurality of vectorial values are summed vectorially in one embodiment of the invention to produce a virtual composite repulsive vector which is then summed vectorially with a target-directed vector for producing a resultant vector for guiding the vehicle. In an alternative embodiment, a plurality of vectors surrounding the vehicle are computed, each having a value corresponding to obstacle density. In such an embodiment, target location information is used to select between alternative directions of travel having low associated obstacle densities.
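
    An illustrative sketch (not the patented implementation) of the vector summation described above: each occupied cell in the active window contributes a repulsive vector scaled by its certainty value, the contributions are summed, and a target-directed vector is added to yield the steering direction. The resolution, gains and grid contents are assumptions.

```python
# Hedged sketch of repulsive-plus-attractive vector summation over an active window.
import numpy as np

def steering_vector(grid, robot_rc, target_xy, res=0.1, window=5, k_rep=1.0, k_att=1.0):
    rows, cols = grid.shape
    r0, c0 = robot_rc
    repulse = np.zeros(2)
    for r in range(max(0, r0 - window), min(rows, r0 + window + 1)):
        for c in range(max(0, c0 - window), min(cols, c0 + window + 1)):
            cv = grid[r, c]                       # certainty value of the cell
            if cv <= 0 or (r, c) == (r0, c0):
                continue
            d = np.array([c - c0, r - r0]) * res  # vector from robot to cell [m]
            dist = np.linalg.norm(d)
            repulse -= k_rep * cv * d / dist ** 3  # push away, ~1/d^2 magnitude
    attract = k_att * (np.asarray(target_xy) - np.array([c0, r0]) * res)
    resultant = repulse + attract
    return resultant / (np.linalg.norm(resultant) + 1e-9)   # unit steering vector

grid = np.zeros((50, 50))
grid[25, 28] = 10                  # certainty built up by repeated sensor hits
print(steering_vector(grid, robot_rc=(25, 25), target_xy=(4.0, 2.5)))
```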

  9. A self-paced motor imagery based brain-computer interface for robotic wheelchair control.

    PubMed

    Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng

    2011-10-01

    This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user makes path planning and fully controls the wheelchair except for the automatic obstacle avoidance based on a laser range finder when necessary. In order for the users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed the users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that controlling the simulated robot. Our emphasis is on allowing more potential users to use the BCI controlled wheelchair with minimal training; a simple 2-class self paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario, and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.

  10. Lunar surface operations. Volume 3: Robotic arm for lunar surface vehicle

    NASA Technical Reports Server (NTRS)

    Shields, William; Feteih, Salah; Hollis, Patrick

    1993-01-01

    A robotic arm for a lunar surface vehicle that can help in handling cargo and equipment and remove obstacles from the path of the vehicle is defined in support of NASA's intention to establish a lunar-based colony by the year 2010. Its mission would include, but not be limited to, the following: exploration, lunar sampling, replacing and removing equipment, and setting up equipment (e.g. microwave repeater stations). Performance objectives for the robotic arm include a reach of 3 m, accuracy of 1 cm, arm mass of 100 kg, and lifting capability of 50 kg. The end effectors must grip various sizes and shapes of cargo; push, pull, turn, lift, or lower various types of equipment; and clear a path on the lunar surface by shoveling, sweeping aside, or gripping the obstacle present in the desired path. The arm can safely complete a task within a reasonable amount of time; the actual time is dependent upon the task to be performed. The positioning of the arm includes a manual backup system such that the arm can be safely stored in case of failure. Remote viewing and proximity and positioning sensors are incorporated in the design of the arm. The following specific topics are addressed in this report: mission and requirements, system design and integration, mechanical structure, modified wrist, structure-to-end-effector interface, end-effectors, and system controls.

  11. Apparent motion perception in lower limb amputees with phantom sensations: "obstacle shunning" and "obstacle tolerance".

    PubMed

    Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J

    2018-03-21

    Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) for the two images. Participants included 12 unilateral lower limb amputees matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases for short perceived trajectories for short SOA (moving through the object), and long trajectories (circumventing the object) for long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to predominantly move through the object, whereas amputees with obstacle tolerance perceived leg stimuli to predominantly move around the object. That is, in people who experience obstacle shunning, apparent motion perception of lower limbs was not constrained to the laws of impenetrability (as the phantom disappears when invaded by objects), and legs can therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints for lower limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. This may have important implications for limb experience post

  12. Improved obstacle avoidance and navigation for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.

    2015-01-01

    This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance to the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.

  13. With the Development of Teaching Sumo Robot are Discussed

    NASA Astrophysics Data System (ADS)

    quan, Miao Zhi; Ke, Ma; Xin, Wei Jing

    In recent years, with the progress of robot technology and the spread of robot science activities, robotics has developed rapidly. The system uses Atmel's ATmega128 single-chip microcontroller as its core controller. It was designed to use infrared emitter-receiver pairs to detect the ring boundary and to look for the opponent; the controller receives the infrared data and controls the motor states accordingly, so that the robot achieves its automatic-control purpose. The sumo robot is built around a minimal single-chip microcomputer system. The teaching purpose of building it is to promote students' interest in robot sumo and to encourage more students to participate in robot research activities.

  14. Exploring performance obstacles of intensive care nurses.

    PubMed

    Gurses, Ayse P; Carayon, Pascale

    2009-05-01

    High nursing workload, poor patient safety, and poor nursing quality of working life (QWL) are major issues in intensive care units (ICUs). Characteristics of the ICU and performance obstacles may contribute to these issues. The goal of this study was to comprehensively identify the performance obstacles perceived by ICU nurses. We used a qualitative research design and conducted semi-structured interviews with 15 ICU nurses of a medical-surgical ICU. Based on this qualitative study and a previously reported quantitative study, we identified seven main types of performance obstacles experienced by ICU nurses. Obstacles related to the physical environment (e.g., noise, amount of space), family relations (e.g., distractions caused by family, lack of time to spend with family), and equipment (e.g., unavailability, misplacement) were the most frequently experienced performance obstacles. The qualitative interview data provided rich information regarding the factors contributing to the performance obstacles. Overall, ICU nurses experience a variety of performance obstacles in their work on a daily basis. Future research is needed to understand the impact of performance obstacles on nursing workload, nursing QWL, and quality and safety of care.

  15. RoCoMAR: robots' controllable mobility aided routing and relay architecture for mobile sensor networks.

    PubMed

    Le, Duc Van; Oh, Hoon; Yoon, Seokhoon

    2013-07-05

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay.
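
    A schematic sketch of the link-reinforcement step is shown below: the lowest-quality link on the current route is identified and, if it falls below a threshold, a robotic relay is proposed at the midpoint of that link. Node positions, link-quality values and the threshold are illustrative assumptions, not RoCoMAR's actual metrics.

```python
# Hedged sketch of weakest-link identification and relay placement.
def weakest_link(path_nodes, link_quality):
    # path_nodes: ordered node ids; link_quality: dict {(a, b): quality in [0, 1]}
    links = list(zip(path_nodes, path_nodes[1:]))
    return min(links, key=lambda ab: link_quality[ab])

def propose_relay(positions, path_nodes, link_quality, q_min=0.4):
    a, b = weakest_link(path_nodes, link_quality)
    if link_quality[(a, b)] >= q_min:
        return None                        # no reinforcement needed
    (xa, ya), (xb, yb) = positions[a], positions[b]
    return (a, b), ((xa + xb) / 2.0, (ya + yb) / 2.0)   # relay goes midway

pos = {"src": (0, 0), "n1": (30, 5), "dst": (70, 10)}
qual = {("src", "n1"): 0.8, ("n1", "dst"): 0.25}
print(propose_relay(pos, ["src", "n1", "dst"], qual))
```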

  16. RoCoMAR: Robots' Controllable Mobility Aided Routing and Relay Architecture for Mobile Sensor Networks

    PubMed Central

    Van Le, Duc; Oh, Hoon; Yoon, Seokhoon

    2013-01-01

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay. PMID:23881134

  17. Stereo vision tracking of multiple objects in complex indoor environments.

    PubMed

    Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro

    2010-01-01

    This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered to be obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution to those applications where similar multimodal data structures are found.

  18. Fusion of ultrasonic and infrared signatures for personnel detection by a mobile robot

    NASA Astrophysics Data System (ADS)

    Carroll, Matthew S.; Meng, Min; Cadwallender, William K.

    1992-04-01

    Passive infrared sensors used for intrusion detection, especially those used on mobile robots, are vulnerable to false alarms caused by clutter objects such as radiators, steam pipes, windows, etc., as well as to deliberate false alarms caused by decoy objects. To overcome these sources of false alarms, we are now combining thermal and ultrasonic signals, the result being a more robust system for detecting personnel. Our paper will discuss the fusion strategies used for combining sensor information. Our first strategy uses a statistical classifier with features such as the sonar cross-section, the received thermal energy, and the ultrasonic range. Our second strategy uses a 3-layered neural classifier trained by backpropagation. The probability of correct classification and the false alarm rate for both strategies will be presented in the paper.

  19. Robotic CCD microscope for enhanced crystal recognition

    DOEpatents

    Segelke, Brent W.; Toppani, Dominique

    2007-11-06

    A robotic CCD microscope and procedures to automate crystal recognition. The robotic CCD microscope and procedures enable more accurate crystal recognition, leading to fewer false negatives and fewer false positives, and enable the detection of smaller crystals compared to other methods available today.

  20. Dynamics of spiral waves rotating around an obstacle and the existence of a minimal obstacle

    NASA Astrophysics Data System (ADS)

    Gao, Xiang; Feng, Xia; Li, Teng-Chao; Qu, Shixian; Wang, Xingang; Zhang, Hong

    2017-05-01

    Pinning of vortices by obstacles plays an important role in various systems. In the heart, anatomical reentry is created when a vortex, also known as the spiral wave, is pinned to an anatomical obstacle, leading to a class of physiologically very important arrhythmias. Previous analyses of its dynamics and instability provide fine estimates in some special circumstances, such as large obstacles or weak excitabilities. Here, to expand theoretical analyses to all circumstances, we propose a general theory whose results quantitatively agree with direct numerical simulations. In particular, when obstacles are small and pinned spiral waves are destabilized, an accurate explanation of the instability in two-dimensional media is provided by the usage of a mapping rule and dimension reduction. The implications of our results are to better understand the mechanism of arrhythmia and thus improve its early prevention.

  1. Measures for simulator evaluation of a helicopter obstacle avoidance system

    NASA Technical Reports Server (NTRS)

    Demaio, Joe; Sharkey, Thomas J.; Kennedy, David; Hughes, Micheal; Meade, Perry

    1993-01-01

    The U.S. Army Aeroflightdynamics Directorate (AFDD) has developed a high-fidelity, full-mission simulation facility for the demonstration and evaluation of advanced helicopter mission equipment. The Crew Station Research and Development Facility (CSRDF) provides the capability to conduct one- or two-crew full-mission simulations in a state-of-the-art helicopter simulator. The CSRDF provides a realistic, full field-of-regard visual environment with simulation of state-of-the-art weapons, sensors, and flight control systems. We are using the CSRDF to evaluate the ability of an obstacle avoidance system (OASYS) to support low altitude flight in cluttered terrain using night vision goggles (NVG). The OASYS uses a laser radar to locate obstacles to safe flight in the aircraft's flight path. A major concern is the detection of wires, which can be difficult to see with NVG, but other obstacles--such as trees, poles or the ground--are also a concern. The OASYS symbology is presented to the pilot on a head-up display mounted on the NVG (NVG-HUD). The NVG-HUD presents head-stabilized symbology to the pilot while allowing him to view the image intensified, out-the-window scene through the HUD. Since interference with viewing through the display is a major concern, OASYS symbology must be designed to present usable obstacle clearance information with a minimum of clutter.

  2. Electronic system for floor surface type detection in robotics applications

    NASA Astrophysics Data System (ADS)

    Tarapata, Grzegorz; Paczesny, Daniel; Tarasiuk, Łukasz

    2016-11-01

    The paper reports a recognition method based on ultrasonic transducers utilized for surface type detection. An ultrasonic signal is transmitted toward the examined substrate; the reflected and scattered signal then travels back to a second ultrasonic receiver. The measuring signal is generated by a piezoelectric transducer located at a specified distance from the tested substrate. The detector is a second piezoelectric transducer located next to the transmitter. Depending on the type of substrate exposed to the ultrasonic wave, the signal is partially absorbed in the material, diffused and reflected towards the receiver. To measure the level of the received signal, a dedicated electronic circuit was designed and implemented in the presented system. The system was designed to recognize two types of floor surface: solid (such as concrete, ceramic tiles, wood) and soft (carpets, floor coverings). The method will be applied in an electronic detection system dedicated to autonomous cleaning robots for the selection of an appropriate cleaning method. This work presents the concept of ultrasonic signal utilization, the design of both the measurement system and the measuring stand, as well as a number of extensive test results which validate the correctness of the applied ultrasonic method.

  3. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
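
    The L(1)-norm formulation mentioned above can be sketched as a linear program: the minimum distance between two convex polyhedra {x : A1 x <= b1} and {y : A2 y <= b2} is obtained by introducing slack variables t >= |x - y| and minimizing their sum. The two unit boxes below are illustrative obstacles, not data from the paper.

```python
# Hedged sketch: minimum L1 distance between two convex polyhedra via an LP.
import numpy as np
from scipy.optimize import linprog

def min_l1_distance(A1, b1, A2, b2):
    n = A1.shape[1]
    I = np.eye(n)
    m1, m2 = A1.shape[0], A2.shape[0]
    # decision vector z = [x, y, t]
    A_ub = np.vstack([
        np.hstack([I, -I, -I]),                                 # x - y <= t
        np.hstack([-I, I, -I]),                                 # y - x <= t
        np.hstack([A1, np.zeros((m1, n)), np.zeros((m1, n))]),  # x inside set 1
        np.hstack([np.zeros((m2, n)), A2, np.zeros((m2, n))]),  # y inside set 2
    ])
    b_ub = np.concatenate([np.zeros(2 * n), b1, b2])
    c = np.concatenate([np.zeros(2 * n), np.ones(n)])           # minimize sum(t)
    bounds = [(None, None)] * (2 * n) + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun, res.x[:n], res.x[n:2 * n]

# Two axis-aligned unit boxes, one at the origin and one centred at (2.5, 2.5, 2.5).
A_box = np.vstack([np.eye(3), -np.eye(3)])
b1 = np.array([1, 1, 1, 0, 0, 0], dtype=float)
b2 = np.array([3, 3, 3, -2, -2, -2], dtype=float)
d, x_star, y_star = min_l1_distance(A_box, b1, A_box, b2)
print(f"min L1 distance {d:.2f} between {x_star.round(2)} and {y_star.round(2)}")
```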

  4. Robot companions and ethics a pragmatic approach of ethical design.

    PubMed

    Cornet, Gérard

    2013-12-01

    Drawing on his experience as an ethical expert for two robot companion prototype projects aimed at empowering older persons with MCI to remain at home and at supporting their family carers, Gerard Cornet, gerontologist, reviews the ethical rules, principles and pragmatic approaches in different cultures. The ethical process of these two funded projects, one European, Companionable (FP7 e-inclusion call1), the other French, Quo vadis (ANR tecsan), is described, from the inclusion of the targeted end users in the process to the assessment and ranking of their main needs and wishes in order to design the specifications and test the expected performance. Obstacles to overcome and limits concerning risk evaluation (direct or implicit), acceptability, utility, respect for intimacy and dignity, the balance between freedom and security, and the frontiers of artificial intelligence are discussed. As raised in the discussion with the French and Japanese experts attending the Toulouse Robotics and Medicine symposium (March 26th, 2011), a new ethical approach, going further than the present ethical rules, is needed for the design and social status of ethical robots, so that they can act as a factor of progress and of global quality of innovation design in an ageing society.

  5. Surface obstacles in pulsatile flow

    NASA Astrophysics Data System (ADS)

    Carr, Ian A.; Plesniak, Michael W.

    2017-11-01

    Flows past obstacles mounted on flat surfaces have been widely studied due to their ubiquity in nature and engineering. For nearly all of these studies, the freestream flow over the obstacle was steady, i.e., constant velocity, unidirectional flow. Unsteady, pulsatile flows occur frequently in biology, geophysics, biomedical engineering, etc. Our study is aimed at extending the comprehensive knowledge base that exists for steady flows to considerably more complex pulsatile flows. Characterizing the vortex and wake dynamics of flows around surface obstacles embedded in pulsatile flows can provide insights into the underlying physics in all wake and junction flows. In this study, we experimentally investigate the wake of two canonical obstacles: a cube and a circular cylinder with an aspect ratio of unity. Our previous studies of a surface-mounted hemisphere in pulsatile flow are used as a baseline for these two new, more complex geometries. Phase-averaged PIV and hot-wire anemometry are used to characterize the dynamics of coherent structures in the wake and at the windward junction of the obstacles. Complex physics occur during the deceleration phase of the pulsatile inflow. We propose a framework for understanding these physics based on self-induced vortex propagation, similar to the phenomena exhibited by vortex rings.

  6. Two-Armed, Mobile, Sensate Research Robot

    NASA Technical Reports Server (NTRS)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

    The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially- driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach either the floor or high shelves, or some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control. A receptacle for an end effector is mounted on the tip of the wrist and

  7. Robotics in Lower-Limb Rehabilitation after Stroke

    PubMed Central

    2017-01-01

    With the increase in the elderly population, stroke has become a common disease, often leading to motor dysfunction and even permanent disability. Lower-limb rehabilitation robots can help patients to carry out reasonable and effective training to improve the motor function of the paralyzed extremity. In this paper, the developments of lower-limb rehabilitation robots in the past decades are reviewed. Specifically, we provide a classification, a comparison, and a design overview of the driving modes, training paradigm, and control strategy of the lower-limb rehabilitation robots in the reviewed literature. A brief review on the gait detection technology of lower-limb rehabilitation robots is also presented. Finally, we discuss the future directions of the lower-limb rehabilitation robots. PMID:28659660

  8. Robotics in Lower-Limb Rehabilitation after Stroke.

    PubMed

    Zhang, Xue; Yue, Zan; Wang, Jing

    2017-01-01

    With the increase in the elderly population, stroke has become a common disease, often leading to motor dysfunction and even permanent disability. Lower-limb rehabilitation robots can help patients to carry out reasonable and effective training to improve the motor function of the paralyzed extremity. In this paper, the developments of lower-limb rehabilitation robots in the past decades are reviewed. Specifically, we provide a classification, a comparison, and a design overview of the driving modes, training paradigm, and control strategy of the lower-limb rehabilitation robots in the reviewed literature. A brief review on the gait detection technology of lower-limb rehabilitation robots is also presented. Finally, we discuss the future directions of the lower-limb rehabilitation robots.

  9. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation

    PubMed Central

    Scarfe, Amy C.; Moore, Brian C. J.; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound. PMID:28407000

  10. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    PubMed

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  11. Obstacles for Teachers to Integrate Technology with Instruction

    ERIC Educational Resources Information Center

    Alenezi, Abdullah

    2017-01-01

    This paper covers type one and type two educational obstacles to using technology in classrooms and, in light of those obstacles, tries to answer the following overarching research question, which can help to gauge some obstacles to educational technology integration in elementary and high school education: What obstacles do…

  12. The MITy micro-rover: Sensing, control, and operation

    NASA Technical Reports Server (NTRS)

    Malafeew, Eric; Kaliardos, William

    1994-01-01

    The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real-time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.
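
    The abstract does not spell out the avoidance routine itself; the sketch below is only a generic illustration of reactive avoidance over robot-centered range data of the kind described, with the bearing/range format, thresholds, and function name all assumed here.

        import math

        def reactive_avoidance(ranges, goal_bearing_deg, safe_range_m=1.5, cruise_speed=0.3):
            """Choose a heading/speed command from robot-centered laser ranges.

            ranges: dict mapping bearing in degrees (0 = straight ahead,
                    positive = left) to measured range in metres.
            goal_bearing_deg: bearing of the target relative to the robot.
            """
            # Keep only bearings whose measured range exceeds the safety threshold.
            clear = [b for b, r in ranges.items() if r > safe_range_m]
            if not clear:
                return 180.0, 0.0          # nothing is clear: stop and turn around
            # Steer toward the clear bearing closest to the goal direction.
            heading = min(clear, key=lambda b: abs(b - goal_bearing_deg))
            # Slow down when forced to turn sharply away from straight ahead.
            speed = cruise_speed * math.cos(math.radians(heading)) if abs(heading) < 90 else 0.0
            return heading, max(speed, 0.1)

        if __name__ == "__main__":
            scan = {-40: 3.0, -20: 0.8, 0: 0.9, 20: 2.5, 40: 3.2}
            print(reactive_avoidance(scan, goal_bearing_deg=0.0))   # steers 20 degrees left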

  13. Robot navigation research using the HERMIES mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, D.L.

    1989-01-01

    In recent years robot navigation has attracted much attention from researchers around the world. Not only are theoretical studies being simulated on sophisticated computers, but many mobile robots are now used as test vehicles for these theoretical studies. Various algorithms have been perfected for navigation in a known static environment, but navigation in an unknown and dynamic environment poses a much more challenging problem for researchers. Many different methodologies have been developed for autonomous robot navigation, but each methodology is usually restricted to a particular type of environment. One important research focus of the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory is autonomous navigation in unknown and dynamic environments using the series of HERMIES mobile robots. The research uses an expert system for high-level planning interfaced with C-coded routines for implementing the plans, and for quick processing of data requested by the expert system. In using this approach, the navigation is not restricted to one methodology since the expert system can activate a rule module for the methodology best suited for the current situation. Rule modules can be added to the rule base as they are developed and tested. Modules are being developed or enhanced for navigating from a map, searching for a target, exploring, artificial potential-field navigation, navigation using edge-detection, etc. This paper will report on the various rule modules and methods of navigation in use, or under development at CESAR, using the HERMIES-IIB robot as a testbed. 13 refs., 5 figs., 1 tab.

  14. Online optimal obstacle avoidance for rotary-wing autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kang, Keeryun

    This thesis presents an integrated framework for online obstacle avoidance of rotary-wing unmanned aerial vehicles (UAVs), which can provide UAVs with an obstacle-field navigation capability in a partially or completely unknown obstacle-rich environment. The framework is composed of a LIDAR interface, a local obstacle grid generation, a receding horizon (RH) trajectory optimizer, a global shortest path search algorithm, and a climb rate limit detection logic. The key feature of the framework is the use of optimization-based trajectory generation in which the obstacle avoidance problem is formulated as a nonlinear trajectory optimization problem with state and input constraints over the finite range of the sensor. This local trajectory optimization is combined with a global path search algorithm which provides a useful initial guess to the nonlinear optimization solver. Optimization is the natural process of finding the best trajectory that is dynamically feasible, safe within the vehicle's flight envelope, and collision-free at the same time. The optimal trajectory is continuously updated in real time by the numerical optimization solver, Nonlinear Trajectory Generation (NTG), which is a direct solver based on the spline approximation of the trajectory for differentially flat systems. In fact, the overall approach of this thesis to finding the optimal trajectory is similar to model predictive control (MPC) or receding horizon control (RHC), except that this thesis followed a two-layer design; thus, the optimal solution works as a guidance command to be followed by the controller of the vehicle. The framework is implemented in a real-time simulation environment, the Georgia Tech UAV Simulation Tool (GUST), and integrated in the onboard software of the rotary-wing UAV test-bed at Georgia Tech. Initially, the 2D vertical avoidance capability against real obstacles was tested in flight. The flight test evaluations were extended to the benchmark tests for 3D avoidance
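
    The thesis's NTG-based solver is not reproduced here; the following is only a schematic receding-horizon loop in the same spirit, with a generic SLSQP solver, synthetic 2D waypoints, and illustrative obstacle and clearance values standing in for the real vehicle dynamics and LIDAR grid.

        import numpy as np
        from scipy.optimize import minimize

        def rh_step(pos, goal, obstacles, horizon=5, step=1.0, clearance=1.0):
            """One receding-horizon cycle: optimize a short waypoint sequence and
            return only the first waypoint; the rest is re-planned next cycle."""
            x0 = np.tile(pos, horizon) + 0.01 * np.arange(2 * horizon)

            def cost(x):
                pts = x.reshape(horizon, 2)
                # Terminal cost (distance of last waypoint to goal) plus a smoothness term.
                c = np.linalg.norm(pts[-1] - goal)
                c += 0.1 * np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
                return c

            cons = []
            # Obstacle clearance: ||p_i - o|| >= clearance for every waypoint and obstacle.
            for o in obstacles:
                for i in range(horizon):
                    cons.append({"type": "ineq",
                                 "fun": lambda x, o=o, i=i: np.linalg.norm(
                                     x.reshape(horizon, 2)[i] - o) - clearance})
            # Limit the first hop so the command stays dynamically reasonable.
            cons.append({"type": "ineq",
                         "fun": lambda x: step - np.linalg.norm(x[:2] - pos)})

            res = minimize(cost, x0, constraints=cons, method="SLSQP")
            return res.x.reshape(horizon, 2)[0]

        if __name__ == "__main__":
            p, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
            obstacles = [np.array([5.0, 0.2])]
            for _ in range(3):
                p = rh_step(p, goal, obstacles)
                print(np.round(p, 2))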

  15. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System

    PubMed Central

    Beruvides, Gerardo

    2017-01-01

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported by the interaction between SCANeR software and Matlab/Simulink. To the best of the authors’ knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organizing map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organizing map in snowy conditions. PMID:28906450

  16. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System.

    PubMed

    Castaño, Fernando; Beruvides, Gerardo; Haber, Rodolfo E; Artuñedo, Antonio

    2017-09-14

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported by the interaction between SCANeR software and Matlab/Simulink. To the best of the authors' knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organizing map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organizing map in snowy conditions.
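
    As an illustration of the kind of classifier comparison described above, the sketch below trains a multi-layer perceptron and a support vector machine on a synthetic feature table (the self-organizing map is omitted because scikit-learn does not provide one); the features, labels, and data are invented stand-ins for the SCANeR sensory database.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        # Synthetic stand-in for LiDAR-derived features (e.g. range, width, echo count).
        X = rng.normal(size=(600, 3))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy obstacle / non-obstacle label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        models = {
            "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
            "SVM": SVC(kernel="rbf"),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))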

  17. Probabilistic Verification of Multi-Robot Missions in Uncertain Environments

    DTIC Science & Technology

    2015-11-01

    …has been used to measure the environment, including any dynamic obstacles. However, no matter how the model originates, this approach is based on… modeled as bivariate Gaussian distributions and estimated by calibration measurements. The Robot process model is described in prior work [13…

  18. Planning Process Obstacles and Opportunities.

    ERIC Educational Resources Information Center

    Doyle, Patricia C.

    1997-01-01

    Argues that obstacles exist in the Public Library Association (PLA) planning process that can be resolved by developing relationships between materials in the collection and PLA roles. Proposes changes to help reduce conflict in the PLA planning process and discusses process obstacles, relationships as the key to providing better service, and…

  19. Surface obstacles in pulsatile flow

    NASA Astrophysics Data System (ADS)

    Carr, Ian A.; Plesniak, Michael W.

    2016-11-01

    Flows past obstacles mounted on flat surfaces have been widely studied due to their ubiquity in nature and engineering. For nearly all of these studies, the freestream flow over the obstacle was steady, i.e. constant velocity unidirectional flow. Unsteady, pulsatile flows occur frequently in biology, geophysics, biomedical engineering, etc. Our study is aimed at extending the comprehensive knowledge base that exists for steady flows to considerably more complex pulsatile flows. Beyond the important practical applications, characterizing the vortex and wake dynamics of flows around surface obstacles embedded in pulsatile flows can provide insights into the underlying physics in all wake and junction flows. In this study, we experimentally investigated the wake of four canonical surface obstacles: a hemisphere, a cube, and circular cylinders with aspect ratios of 1:1 and 2:1. Phase-averaged PIV and hot-wire anemometry are used to characterize the dynamics of coherent structures in the wake and at the windward junction of the obstacles. Complex physics occur during the deceleration phase of the pulsatile inflow. We propose a framework for understanding these physics based on self-induced vortex propagation, similar to the phenomena exhibited by vortex rings. This material is based in part upon work supported by the National Science Foundation under Grant Number CBET-1236351, and the GW Center for Biomimetics and Bioinspired Engineering (COBRE).

  20. Robotic Enrichment Processing of Roche 454 Titanium Emulsion PCR at the DOE Joint Genome Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Matthew; Wilson, Steven; Bauer, Diane

    2010-05-28

    Enrichment of emulsion PCR product is the most laborious and pipette-intensive step in the 454 Titanium process, posing the biggest obstacle for production-oriented scale up. The Joint Genome Institute has developed a pair of custom-made robots based on the Microlab Star liquid handling deck manufactured by Hamilton to mediate the complexity and ergonomic demands of the 454 enrichment process. The robot includes a custom built centrifuge, magnetic deck positions, as well as heating and cooling elements. At present these robots process eight emulsion cup samples in a single 2.5 hour run and are capable of processing up to 24 emulsion cup samples. Sample emulsions are broken using the standard 454 breaking process and transferred from a pair of 50ml conical tubes to a single 2ml tube and loaded on the robot. The robot performs the enrichment protocol and produces beads in 2ml tubes ready for counting. The robot follows the Roche 454 enrichment protocol with slight exceptions: beads are resuspended by pipette mixing rather than vortexing, and a set number of null bead removal washes is used. The robotic process is broken down into discrete steps: First Melt and Neutralization, Enrichment Primer Annealing, Enrichment Bead Incubation, Null Bead Removal, Second Melt and Neutralization and Sequencing Primer Annealing. Data indicating our improvements in enrichment efficiency and total number of bases per run will also be shown.

  1. Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.

    PubMed

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-10-29

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore, it provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures.

  2. Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots

    PubMed Central

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-01-01

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore, it provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
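
    The covariance intersection rule mentioned above has a standard closed form; the sketch below fuses two track estimates with it, choosing the mixing weight by a simple grid search over the fused covariance trace (the example numbers are illustrative, not from the paper).

        import numpy as np

        def covariance_intersection(x1, P1, x2, P2, n_grid=101):
            """Fuse two estimates with unknown cross-correlation via CI:
               P^-1 = w*P1^-1 + (1-w)*P2^-1,  x = P*(w*P1^-1*x1 + (1-w)*P2^-1*x2),
               with w chosen here to minimize trace(P)."""
            best = None
            for w in np.linspace(0.0, 1.0, n_grid):
                info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
                P = np.linalg.inv(info)
                x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
                if best is None or np.trace(P) < np.trace(best[1]):
                    best = (x, P)
            return best

        if __name__ == "__main__":
            x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
            x2, P2 = np.array([1.2, 1.8]), np.diag([2.0, 0.5])
            x, P = covariance_intersection(x1, P1, x2, P2)
            print(x, np.diag(P))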

  3. Forming Human-Robot Teams Across Time and Space

    NASA Technical Reports Server (NTRS)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

    NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot-only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system is also running when humans are in direct contact with the robot

  4. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper is the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrent to this investigation, a radar image based navigation is performed without using either precision navigation or detailed database information to determine the aircraft's position relative to the runway. The performance of our

  5. Thermal tracking in mobile robots for leak inspection activities.

    PubMed

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-10-09

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to avoid the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.

  6. Thermal Tracking in Mobile Robots for Leak Inspection Activities

    PubMed Central

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-01-01

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to avoid the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system. PMID:24113684
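
    The particle filter itself is not detailed in the abstract; the following is only a minimal bootstrap-filter sketch for tracking a 2D image position from noisy detections, with the random-walk motion model, image size, and noise levels all assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        def particle_filter_track(detections, n_particles=500, motion_std=5.0, meas_std=10.0):
            """Track a 2D image position from noisy detections (toy bootstrap filter)."""
            # Spread particles over an assumed 640x480 image.
            particles = np.column_stack([rng.uniform(0, 640, n_particles),
                                         rng.uniform(0, 480, n_particles)])
            estimates = []
            for z in detections:
                # Predict: random-walk motion model.
                particles += rng.normal(0, motion_std, size=particles.shape)
                # Update: Gaussian likelihood of the detection given each particle.
                d2 = np.sum((particles - z) ** 2, axis=1)
                weights = np.exp(-0.5 * d2 / meas_std ** 2) + 1e-300   # avoid all-zero weights
                weights /= weights.sum()
                estimates.append(weights @ particles)
                # Resample (systematic resampling would be the usual refinement).
                idx = rng.choice(n_particles, size=n_particles, p=weights)
                particles = particles[idx]
            return estimates

        if __name__ == "__main__":
            truth = np.column_stack([np.linspace(100, 300, 20), np.linspace(200, 250, 20)])
            measurements = truth + rng.normal(0, 8.0, size=truth.shape)
            for est in particle_filter_track(measurements)[-3:]:
                print(np.round(est, 1))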

  7. Shock interaction behind a pair of cylindrical obstacles

    NASA Astrophysics Data System (ADS)

    Liu, Heng; Mazumdar, Raoul; Eliasson, Veronica

    2014-11-01

    This body of work focuses on two-dimensional numerical simulations of shock interaction with a pair of cylindrical obstacles, varying the obstacle separation and incident shock strength. With the shock waves propagating parallel to the centerline between the two cylindrical obstacles, the simulated shock strengths vary from Mach 1.4 to Mach 2.4, over a wide range of ratios of obstacle separation distance to obstacle diameter. These cases are simulated via a software package called Overture, which is used to solve the inviscid Euler equations of gas dynamics on overlapping grids with adaptive mesh refinement. The goal of these cases is to find a so-called "safe" region for obstacle spacing and varying shock Mach numbers, such that the pressure in the "safe" region is reduced downstream of the obstacles. The benefits apply to both building and armor design for the purpose of shock wave mitigation to keep humans and equipment safe. The results obtained from the simulations confirm that the length of the "safe" region and the degree of shock wave attenuation depend on the ratio of obstacle separation distance to obstacle diameter. The influence of varying Mach number is also discussed.

  8. Learning classifier systems for single and multiple mobile robots in unstructured environments

    NASA Astrophysics Data System (ADS)

    Bay, John S.

    1995-12-01

    The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture; i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world-model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.

  9. System of launchable mesoscale robots for distributed sensing

    NASA Astrophysics Data System (ADS)

    Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.

    1999-08-01

    A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on total volume and power consumption of the payloads due to the small size of the robot. Emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW; about 5 times less than the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.

  10. HiMoP: A three-component architecture to create more human-acceptable social-assistive robots: Motivational architecture for assistive robots.

    PubMed

    Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco

    2018-05-01

    Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, particularly a motivation-oriented cognitive architecture to generate more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.
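
    How HiMoP actually arbitrates among behaviors is not specified in the abstract; the snippet below is only a toy illustration of selecting a finite-state-machine behavior from motivational variables plus external stimuli, with all need names, numbers, and the scoring rule invented here.

        from dataclasses import dataclass

        @dataclass
        class Behavior:
            name: str
            need: str    # which need in the hierarchy this behavior serves
            level: int   # lower = more basic need (existence < relatedness < growth)

        def select_behavior(behaviors, motivation, stimuli):
            """Pick the behavior serving the most urgent need.

            motivation: dict need -> internal drive in [0, 1] (rises while unmet).
            stimuli:    dict need -> external evidence in [0, 1] (e.g., a heard command).
            Urgency adds both; ties are broken in favor of more basic needs.
            """
            def urgency(b):
                return motivation.get(b.need, 0.0) + stimuli.get(b.need, 0.0)
            return max(behaviors, key=lambda b: (urgency(b), -b.level))

        if __name__ == "__main__":
            behaviors = [Behavior("recharge", "energy", 0),
                         Behavior("answer_user", "interaction", 1),
                         Behavior("explore", "curiosity", 2)]
            motivation = {"energy": 0.2, "interaction": 0.4, "curiosity": 0.7}
            stimuli = {"interaction": 0.5}   # e.g., speech was just detected
            print(select_behavior(behaviors, motivation, stimuli).name)   # -> answer_user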

  11. Robotics

    NASA Astrophysics Data System (ADS)

    Popov, E. P.; Iurevich, E. I.

    The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.

  12. Strategies for obstacle avoidance during walking in the cat.

    PubMed

    Chu, Kevin M I; Seto, Sandy H; Beloozerova, Irina N; Marlinski, Vladimir

    2017-08-01

    Avoiding obstacles is essential for successful navigation through complex environments. This study aimed to clarify what strategies are used by a typical quadruped, the cat, to avoid obstacles during walking. Four cats walked along a corridor 2.5 m long and 25 or 15 cm wide. Obstacles, small round objects 2.5 cm in diameter and 1 cm in height, were placed on the floor in various locations. Movements of the paw were recorded with a motion capture and analysis system (Visualeyez, PTI). During walking in the wide corridor, cats' preferred strategy for avoiding a single obstacle was circumvention, during which the stride direction changed while stride duration and swing-to-stride duration ratio were preserved. Another strategy, stepping over the obstacle, was used during walking in the narrow corridor, when lateral deviations of walking trajectory were restricted. Stepping over the obstacle involved changes in two consecutive strides. The stride preceding the obstacle was shortened, and swing-to-stride ratio was reduced. The obstacle was negotiated in the next stride of increased height and normal duration and swing-to-stride ratio. During walking on a surface with multiple obstacles, both strategies were used. To avoid contact with the obstacle, cats placed the paw away from the object at a distance roughly equal to the diameter of the paw. During obstacle avoidance cats prefer to alter muscle activities without altering the locomotor rhythm. We hypothesize that a choice of the strategy for obstacle avoidance is determined by minimizing the complexity of neuro-motor processes required to achieve the behavioral goal. NEW & NOTEWORTHY In a study of feline locomotor behavior we found that the preferred strategy to avoid a small obstacle is circumvention. During circumvention, stride direction changes but length and temporal structure are preserved. Another strategy, stepping over the obstacle, is used in narrow walkways. During overstepping, two strides adjust. A stride

  13. Speeded Reaching Movements around Invisible Obstacles

    PubMed Central

    Hudson, Todd E.; Wolfe, Uta; Maloney, Laurence T.

    2012-01-01

    We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions. PMID:23028276
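
    As a worked illustration of the expected-gain computation that such an ideal planner maximizes, the 1-D sketch below scores candidate aim points against a rewarded target interval and a penalized obstacle interval under Gaussian endpoint noise; the intervals, noise level, and payoffs are illustrative, not the experiment's values.

        import numpy as np
        from scipy.stats import norm

        def expected_gain(aim_x, target, obstacle, sigma, reward=2.5, penalty=-5.0):
            """Expected gain of aiming at aim_x when endpoints scatter as N(aim_x, sigma)."""
            p_target = norm.cdf(target[1], aim_x, sigma) - norm.cdf(target[0], aim_x, sigma)
            p_obstacle = norm.cdf(obstacle[1], aim_x, sigma) - norm.cdf(obstacle[0], aim_x, sigma)
            return reward * p_target + penalty * p_obstacle

        if __name__ == "__main__":
            target = (0.0, 2.0)      # cm, interval that earns the reward
            obstacle = (-2.0, 0.0)   # adjacent interval that incurs the penalty
            sigma = 0.8              # motor (endpoint) variability
            aims = np.linspace(-1.0, 3.0, 401)
            gains = [expected_gain(a, target, obstacle, sigma) for a in aims]
            best = aims[int(np.argmax(gains))]
            print(f"optimal aim point: {best:.2f} cm (shifted away from the obstacle)")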

  14. Paralyzed subject controls telepresence mobile robot using novel sEMG brain-computer interface: case study.

    PubMed

    Lyons, Kenneth R; Joshi, Sanjay S

    2013-06-01

    Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A Wifi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI as well as completion of the telerobotic command task are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
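
    The specific frequency bands are not given in the abstract; the sketch below simply shows how power in two bands of a single sEMG channel can be estimated with Welch's method, with the sampling rate, band edges, and toy signal all assumed.

        import numpy as np
        from scipy.signal import welch

        def band_powers(emg, fs, bands=((80, 100), (130, 150))):
            """Approximate power of one sEMG channel in two frequency bands."""
            freqs, psd = welch(emg, fs=fs, nperseg=min(len(emg), 256))
            df = freqs[1] - freqs[0]
            return [psd[(freqs >= lo) & (freqs <= hi)].sum() * df for lo, hi in bands]

        if __name__ == "__main__":
            fs = 1000                                    # Hz, assumed sampling rate
            t = np.arange(0, 0.5, 1 / fs)
            # Toy signal: strong 90 Hz component, weak 140 Hz component, plus noise.
            emg = np.sin(2 * np.pi * 90 * t) + 0.3 * np.sin(2 * np.pi * 140 * t)
            emg += 0.1 * np.random.default_rng(0).normal(size=t.size)
            p_low, p_high = band_powers(emg, fs)
            # In the BCI paradigm each band power would drive one cursor axis.
            print(f"band 1 power (x-axis): {p_low:.4f}, band 2 power (y-axis): {p_high:.4f}")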

  15. Vision based object pose estimation for mobile robots

    NASA Technical Reports Server (NTRS)

    Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry

    1994-01-01

    Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The selection of geometric constraints comes from the typical pose of most man-made signs, such as the sign standing vertically and having dimensions of known size. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments; under certain marker orientations, orientation estimation has proven accurate to within 2 degrees and distance estimation to within 0.3 meters.
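
    One common way to exploit the known-size constraint is the pinhole relation, range ≈ focal length × real height / image height; the sketch below applies it together with a bearing from the horizontal pixel offset, using made-up camera parameters rather than those of the paper's system.

        import math

        def marker_pose_from_size(pixel_height, pixel_center_x, image_width,
                                  real_height_m, focal_px):
            """Estimate range and bearing of an upright marker of known height
            from a single image, using the pinhole camera model."""
            distance = focal_px * real_height_m / pixel_height
            # Bearing from horizontal pixel offset relative to the image center.
            offset = pixel_center_x - image_width / 2.0
            bearing = math.degrees(math.atan2(offset, focal_px))
            return distance, bearing

        if __name__ == "__main__":
            # A 0.5 m tall marker spanning 80 pixels, centered 60 px right of the image center.
            d, b = marker_pose_from_size(pixel_height=80, pixel_center_x=380,
                                         image_width=640, real_height_m=0.5, focal_px=600)
            print(f"range ~ {d:.2f} m, bearing ~ {b:.1f} deg")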

  16. Controlling multiple security robots in a warehouse environment

    NASA Technical Reports Server (NTRS)

    Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.

    1994-01-01

    The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.

  17. Bioinspired legged-robot based on large deformation of flexible skeleton.

    PubMed

    Mayyas, Mohammad

    2014-11-11

    In this article we present STARbot, a bioinspired legged robot capable of multiple locomotion modalities by using large deformation of its skeleton. We construct STARbot by using origami-style folding of flexible laminates. The long-term goal is to provide a robotic platform with maximum mobility on multiple surfaces. This paper particularly studies the quasistatic model of STARbot's leg under different conditions. We describe the large elastic deformation of a leg under external force, payload, and friction by using a set of non-dimensional, nonlinear approximate equations. We developed a test mechanism that models the motion of a leg in STARbot. We augmented several foot shapes and then tested them on soft to rough ground. Both simulation and experimental findings were in good agreement. We utilized the model to develop several scales of tri- and quad-legged STARbot. We demonstrated the capability of these robots to locomote by combining their leg deformations with their foot motions. The combination provided a design platform for an active suspension STARbot with controlled foot locomotion. This included the ability of STARbot to change size, run over obstacles, walk and slide. Furthermore, we discuss a cost-effective method for manufacturing and producing STARbot.

  18. Autonomous Robotic Inspection in Tunnels

    NASA Astrophysics Data System (ADS)

    Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.

    2016-06-01

    In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to navigate autonomously within civil infrastructure, capture stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created, utilizing photogrammetric methods. Finally, laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing potential deformations to be deduced. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer vision-based crack detector, that carries the ultrasound sensors, stereo cameras and laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Then, real-time 3D information is accurately calculated and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e., the Egnatia Highway and London Underground infrastructure.

  19. Slip detection with accelerometer and tactile sensors in a robotic hand model

    NASA Astrophysics Data System (ADS)

    Al-Shanoon, Abdulrahman Abdulkareem S.; Anom Ahmad, Siti; Hassan, Mohd. Khair b.

    2015-11-01

    Grasp planning is an interesting issue in studies devoted to investigating tactile sensors. This study investigated the physical force interaction between a tactile pressure sensor and a particular object. It also characterized object slipping during gripping operations and presented secure regripping of an object. Acceleration force was analyzed using an accelerometer sensor to establish a completely autonomous robotic hand model. An automatic feedback control system was applied to regrip the particular object when it begins to slip. Empirical findings were presented in consideration of the detection and subsequent control of the slippage situation. These findings revealed a correlation between the slip distance of the object and the force required to regrip the object safely. This approach is similar to Hooke's law formula.
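
    Read literally, the Hooke's-law analogy suggests an extra grip force proportional to the measured slip distance; the snippet below is a toy version of that rule, with the stiffness-like constant and force cap chosen arbitrarily.

        def regrip_force(current_force_n, slip_distance_m, k_n_per_m=250.0, max_force_n=20.0):
            """Spring-law style correction: extra grip force proportional to slip (F = k * x).
            The constant k and the force cap are illustrative values, not from the paper."""
            extra = k_n_per_m * slip_distance_m
            return min(current_force_n + extra, max_force_n)

        if __name__ == "__main__":
            # Object slipped 8 mm while held at 5 N: increase the grip accordingly.
            print(f"{regrip_force(5.0, 0.008):.2f} N")   # -> 7.00 N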

  20. Cooperative Three-Robot System for Traversing Steep Slopes

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terrance; Aghazarian, Hrand; Younse, Paulo; Garrett, Michael

    2009-01-01

    from all three robots for decision-making at each step, and to control the physical connections among the robots. In addition, TRESSA (as in prior systems that have utilized this architecture) incorporates a capability for deterministic response to unanticipated situations from yet another architecture reported in Control Architecture for Robotic Agent Command and Sensing (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40. Tether tension control is a major consideration in the design and operation of TRESSA. Tension is measured by force sensors connected to each tether at the Cliffbot. The direction of the tension (both azimuth and elevation) is also measured. The tension controller combines a controller to counter gravitational force and an optional velocity controller that anticipates the motion of the Cliffbot. The gravity controller estimates the slope angle from the inclination of the tethers. This angle and the weight of the Cliffbot determine the total tension needed to counteract the weight of the Cliffbot. The total needed tension is broken into components for each Anchorbot. The difference between this needed tension and the tension measured at the Cliffbot constitutes an error signal that is provided to the gravity controller. The velocity controller computes the tether speed needed to produce the desired motion of the Cliffbot. Another major consideration in the design and operation of TRESSA is detection of faults. Each robot in the TRESSA system monitors its own performance and the performance of its teammates in order to detect any system faults and prevent unsafe conditions. At startup, communication links are tested, and if any robot is not communicating, the system refuses to execute any motion commands. Prior to motion, the Anchorbots attempt to set tensions in the tethers at optimal levels for counteracting the weight of the Cliffbot; if either Anchorbot fails to reach its optimal tension level within a specified time, it sends
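
    As a numerical illustration of the gravity controller's reasoning (total uphill tension from the Cliffbot's weight and the estimated slope, split between the two Anchorbots so the cross-slope components cancel), the sketch below solves that small force balance; the masses, angles, and function interface are invented for the example and are not the flight code.

        import math

        def tether_tensions(mass_kg, slope_deg, azimuth_left_deg, azimuth_right_deg, g=9.81):
            """Split the gravity-countering tension between two Anchorbots.

            Azimuths are measured from the uphill direction and must lie on opposite
            (nonzero) sides of it. The total uphill force needed is m*g*sin(slope);
            it is divided so that the lateral components of the two tensions cancel.
            """
            needed = mass_kg * g * math.sin(math.radians(slope_deg))
            al = math.radians(azimuth_left_deg)
            ar = math.radians(azimuth_right_deg)
            # Solve: T_l*sin(al) + T_r*sin(ar) = 0        (lateral balance)
            #        T_l*cos(al) + T_r*cos(ar) = needed   (uphill balance)
            t_r = needed / (math.cos(ar) - math.sin(ar) * math.cos(al) / math.sin(al))
            t_l = -t_r * math.sin(ar) / math.sin(al)
            return t_l, t_r

        if __name__ == "__main__":
            # 60 kg Cliffbot on a 70 degree slope, anchors 20 degrees either side of uphill.
            tl, tr = tether_tensions(60.0, 70.0, azimuth_left_deg=-20.0, azimuth_right_deg=20.0)
            print(f"left: {tl:.1f} N, right: {tr:.1f} N")   # roughly 294 N each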