Sample records for robot navigation system

  1. Robot navigation research at CESAR (Center for Engineering Systems Advanced Research)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, D.L.; de Saussure, G.; Pin, F.G.

    1989-01-01

    A considerable amount of work has been reported on the problem of robot navigation in known static terrains. Algorithms have been proposed and implemented to search for an optimum path to the goal, taking into account the finite size and shape of the robot. Less work has been reported on robot navigation in unknown, unstructured, or dynamic environments. A robot navigating in an unknown environment must explore with its sensors, construct an abstract representation of its global environment to plan a path to the goal, and update or revise its plan based on accumulated data obtained and processed in real time. The core of the navigation program for the CESAR robots is a production system developed on the expert-system shell CLIPS, which runs on an NCUBE hypercube on board the robot. The production system can call C-compiled navigation procedures, and the production rules can read the sensor data and address the robot's effectors. This architecture was found to be efficient and flexible for the development and testing of the navigation algorithms; however, in order to handle unexpected emergencies intelligently, it was found necessary to be able to control the production system through externally generated asynchronous data. This led to the design of a new asynchronous production system, APS, which is now being developed on the robot. This paper reviews some of the navigation algorithms developed and tested at CESAR, discusses the need for the new APS, and describes how it is being integrated into the robot architecture. 18 refs., 3 figs., 1 tab.
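
The rule-based control architecture described above can be illustrated with a minimal production-system loop. The rules, fact names, and commands below are hypothetical stand-ins for the CESAR rule base, not the actual CLIPS code:

```python
# Minimal sketch of a recognize-act production cycle in the spirit of the
# CLIPS-based architecture described above. Rules inspect a working memory
# of sensor facts and return effector commands; names and thresholds are
# illustrative assumptions only.

def rule_obstacle_ahead(facts):
    # Fires when the sensors report an obstacle closer than 0.5 m.
    if facts.get("front_range", float("inf")) < 0.5:
        return ("turn", 30)          # effector command: turn 30 degrees
    return None

def rule_goal_visible(facts):
    # Fires when the goal bearing is known and the path ahead is clear.
    if "goal_bearing" in facts and facts.get("front_range", float("inf")) >= 0.5:
        return ("drive", facts["goal_bearing"])
    return None

RULES = [rule_obstacle_ahead, rule_goal_visible]   # priority order

def step(facts):
    """One recognize-act cycle: the first matching rule addresses the effectors."""
    for rule in RULES:
        action = rule(facts)
        if action is not None:
            return action
    return ("stop", 0)
```

In a real production system such as CLIPS, rule modules could be swapped in or out without touching the C-coded procedures the rules invoke, which is the flexibility the abstract highlights.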

  2. Tandem mobile robot system

    DOEpatents

    Buttz, James H.; Shirey, David L.; Hayward, David R.

    2003-01-01

    A robotic vehicle system for terrain navigation mobility provides a way to climb stairs, cross crevices, and navigate across difficult terrain by coupling two or more mobile robots with a coupling device and controlling the robots cooperatively in tandem.

  3. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  4. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through the SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through the P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  5. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through the SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through the P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system.

  6. Robot navigation research using the HERMIES mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, D.L.

    1989-01-01

    In recent years robot navigation has attracted much attention from researchers around the world. Not only are theoretical studies being simulated on sophisticated computers, but many mobile robots are now used as test vehicles for these theoretical studies. Various algorithms have been perfected for navigation in a known static environment; but navigation in an unknown and dynamic environment poses a much more challenging problem for researchers. Many different methodologies have been developed for autonomous robot navigation, but each methodology is usually restricted to a particular type of environment. One important research focus of the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory is autonomous navigation in unknown and dynamic environments using the series of HERMIES mobile robots. The research uses an expert system for high-level planning interfaced with C-coded routines for implementing the plans and for quick processing of data requested by the expert system. With this approach, the navigation is not restricted to one methodology, since the expert system can activate a rule module for the methodology best suited to the current situation. Rule modules can be added to the rule base as they are developed and tested. Modules are being developed or enhanced for navigating from a map, searching for a target, exploring, artificial potential-field navigation, navigation using edge detection, etc. This paper reports on the various rule modules and methods of navigation in use, or under development, at CESAR, using the HERMIES-IIB robot as a testbed. 13 refs., 5 figs., 1 tab.
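
The artificial potential-field navigation mentioned among the rule modules above follows a standard textbook scheme: the goal exerts an attractive force, obstacles a repulsive one, and the robot descends the combined field. The gains, step size, and influence radius below are arbitrary illustration values, not CESAR's parameters:

```python
import math

# Generic artificial potential-field step: attraction to the goal plus
# repulsion from nearby obstacles, followed by a fixed-length move along
# the resulting force direction. All constants are illustrative.

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Return a small motion step (dx, dy) down the potential gradient."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:                       # repulsion only inside radius d0
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (0.1 * fx / norm, 0.1 * fy / norm)    # unit direction, fixed step
```

The method's well-known weakness, local minima where attraction and repulsion cancel, is one reason an expert system that can switch to a different rule module for the current situation is attractive.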

  7. Design of a laser navigation system for the inspection robot used in substation

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Sun, Yanhe; Sun, Deli

    2017-01-01

    To address the shortcomings of the magnetic-guide and RFID positioning systems currently used by substation inspection robots, a laser navigation system is designed, and the system structure and the methods of map building and positioning are introduced. The system's performance was tested in a 500 kV substation, and the results show that the repeatability of the navigation system is precise enough for the robot to fulfill its inspection tasks.

  8. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments

    PubMed Central

    Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori

    2017-01-01

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system, the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map update, by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust, recovering from the 'driver-lost' scenario, which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments, considering factors like service time and task priority, with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should and should not be allowed. PMID:28809803
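
The hitchhiker's visual servoing can be sketched as a simple proportional controller that keeps a visual marker on the driver robot centred and at a constant apparent size. The image width, target area, and gains below are made-up illustration values, not the authors' implementation:

```python
# Rough sketch of the kind of visual-servoing controller a hitchhiker robot
# could use to follow the driver robot. Constants are hypothetical.

IMAGE_WIDTH = 640          # image width in pixels (assumed camera)
TARGET_AREA = 5000.0       # marker area (px^2) at the desired following distance

def servo_command(marker_cx, marker_area, k_turn=0.005, k_drive=0.0001):
    """Map the driver's marker position in the image to (turn, forward) rates."""
    turn = -k_turn * (marker_cx - IMAGE_WIDTH / 2)     # steer marker to centre
    forward = k_drive * (TARGET_AREA - marker_area)    # keep apparent size constant
    return turn, forward
```

Because only this loop runs on the hitchhiker, path planning, localization, and map update can all be skipped while the two robots share a path, which is the computational saving the abstract reports.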

  9. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments.

    PubMed

    Ravankar, Abhijeet; Ravankar, Ankit A; Kobayashi, Yukinori; Emaru, Takanori

    2017-08-15

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system, the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map update, by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust, recovering from the 'driver-lost' scenario, which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments, considering factors like service time and task priority, with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should and should not be allowed.

  10. Robotic platform for traveling on vertical piping network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel

    This invention relates generally to robotic systems, and specifically to a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system enables sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.

  11. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    NASA Astrophysics Data System (ADS)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
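
The integration of the visual sensor with the gyroscope described above can be illustrated with a complementary-filter heading estimate: the gyroscope gives a smooth but drifting heading, while the ceiling target seen through the fish-eye camera gives an absolute but noisier fix. The blending weight below is an illustrative assumption, not the paper's algorithm:

```python
# Sketch of blending a dead-reckoned gyro heading with an absolute visual
# bearing via a complementary filter. The weight alpha is illustrative.

def fuse_heading(prev_heading, gyro_rate, dt, visual_heading, alpha=0.98):
    """Blend the integrated gyro heading with the absolute visual fix (radians)."""
    predicted = prev_heading + gyro_rate * dt      # dead reckoning from the gyro
    return alpha * predicted + (1.0 - alpha) * visual_heading
```

Run at each control cycle, the small visual correction bounds the gyro drift while preserving the gyro's short-term smoothness.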

  12. Computer-assisted hip and knee arthroplasty. Navigation and active robotic systems: an evidence-based analysis.

    PubMed

    2004-01-01

    The Medical Advisory Secretariat undertook a review of the evidence on the effectiveness and cost-effectiveness of computer assisted hip and knee arthroplasty. The two computer assisted arthroplasty systems that are the topics of this review are (1) navigation and (2) robotic-assisted hip and knee arthroplasty. Computer-assisted arthroplasty consists of navigation and robotic systems. Surgical navigation is a visualization system that provides positional information about surgical tools or implants relative to a target bone on a computer display. Most of the navigation-assisted arthroplasty devices that are the subject of this review are licensed by Health Canada. Robotic systems are active robots that mill bone according to information from a computer-assisted navigation system. The robotic-assisted arthroplasty devices that are the subject of this review are not currently licensed by Health Canada. The Cochrane and International Network of Agencies for Health Technology Assessment databases did not identify any health technology assessments on navigation or robotic-assisted hip or knee arthroplasty. The MEDLINE and EMBASE databases were searched for articles published between January 1, 1996 and November 30, 2003. This search produced 367 studies, of which 9 met the inclusion criteria. NAVIGATION-ASSISTED ARTHROPLASTY: Five studies were identified that examined navigation-assisted arthroplasty. A Level 1 evidence study from Germany found a statistically significant difference in alignment and angular deviation between navigation-assisted and free-hand total knee arthroplasty in favour of navigation-assisted surgery. However, the endpoints in this study were short-term. 
To date, the long-term effects (need for revision, implant longevity, pain, functional performance) are unknown.(1) A Level 2 evidence short-term study found that navigation-assisted total knee arthroplasty was significantly better than a non-navigated procedure for one of five postoperative measured angles.(2) A Level 2 evidence short-term study found no statistically significant difference in the variation of the abduction angle between navigation-assisted and conventional total hip arthroplasty.(3) Level 3 evidence observational studies of navigation-assisted total knee arthroplasty and unicompartmental knee arthroplasty have been conducted. Two studies reported that "the follow-up of the navigated prostheses is currently too short to know if clinical outcome or survival rates are improved. Longer follow-up is required to determine the respective advantages and disadvantages of both techniques."(4;5) ROBOTIC-ASSISTED ARTHROPLASTY: Four studies were identified that examined robotic-assisted arthroplasty. A Level 1 evidence study revealed that there was no statistically significant difference between functional hip scores at 24 months post implantation between patients who underwent robotic-assisted primary hip arthroplasty and those that were treated with manual implantation.(6) Robotic-assisted arthroplasty had advantages in terms of preoperative planning and the accuracy of the intraoperative procedure.(6) Patients who underwent robotic-assisted hip arthroplasty had a higher dislocation rate and more revisions.(6) Robotic-assisted arthroplasty may prove effective with certain prostheses (e.g., anatomic) because their use may result in less muscle detachment.(6) An observational study (Level 3 evidence) found that the incidence of severe embolic events during hip relocation was lower with robotic arthroplasty than with manual surgery.(7) An observational study (Level 3 evidence) found that there was no significant difference in gait analyses of patients who underwent 
robotic-assisted total hip arthroplasty using robotic surgery compared to patients who were treated with conventional cementless total hip arthroplasty.(8)An observational study (Level 3 evidence) compared outcomes of total knee arthroplasty between patients undergoing robotic surgery and patients who were historical controls. Brief, qualitative results suggested that there was much broader variation of angles after manual total knee arthroplasty compared to the robotic technique and that there was no difference in knee functional scores or implant position at the 3 and 6 month follow-up.(9).

  13. Computer-Assisted Hip and Knee Arthroplasty. Navigation and Active Robotic Systems

    PubMed Central

    2004-01-01

    Executive Summary Objective The Medical Advisory Secretariat undertook a review of the evidence on the effectiveness and cost-effectiveness of computer assisted hip and knee arthroplasty. The two computer assisted arthroplasty systems that are the topics of this review are (1) navigation and (2) robotic-assisted hip and knee arthroplasty. The Technology Computer-assisted arthroplasty consists of navigation and robotic systems. Surgical navigation is a visualization system that provides positional information about surgical tools or implants relative to a target bone on a computer display. Most of the navigation-assisted arthroplasty devices that are the subject of this review are licensed by Health Canada. Robotic systems are active robots that mill bone according to information from a computer-assisted navigation system. The robotic-assisted arthroplasty devices that are the subject of this review are not currently licensed by Health Canada. Review Strategy The Cochrane and International Network of Agencies for Health Technology Assessment databases did not identify any health technology assessments on navigation or robotic-assisted hip or knee arthroplasty. The MEDLINE and EMBASE databases were searched for articles published between January 1, 1996 and November 30, 2003. This search produced 367 studies, of which 9 met the inclusion criteria. Summary of Findings Navigation-Assisted Arthroplasty Five studies were identified that examined navigation-assisted arthroplasty. A Level 1 evidence study from Germany found a statistically significant difference in alignment and angular deviation between navigation-assisted and free-hand total knee arthroplasty in favour of navigation-assisted surgery. However, the endpoints in this study were short-term. 
To date, the long-term effects (need for revision, implant longevity, pain, functional performance) are unknown.(1) A Level 2 evidence short-term study found that navigation-assisted total knee arthroplasty was significantly better than a non-navigated procedure for one of five postoperative measured angles.(2) A Level 2 evidence short-term study found no statistically significant difference in the variation of the abduction angle between navigation-assisted and conventional total hip arthroplasty.(3) Level 3 evidence observational studies of navigation-assisted total knee arthroplasty and unicompartmental knee arthroplasty have been conducted. Two studies reported that “the follow-up of the navigated prostheses is currently too short to know if clinical outcome or survival rates are improved. Longer follow-up is required to determine the respective advantages and disadvantages of both techniques.”(4;5) Robotic-Assisted Arthroplasty Four studies were identified that examined robotic-assisted arthroplasty. 
A Level 1 evidence study revealed that there was no statistically significant difference between functional hip scores at 24 months post implantation between patients who underwent robotic-assisted primary hip arthroplasty and those that were treated with manual implantation.(6) Robotic-assisted arthroplasty had advantages in terms of preoperative planning and the accuracy of the intraoperative procedure.(6) Patients who underwent robotic-assisted hip arthroplasty had a higher dislocation rate and more revisions.(6) Robotic-assisted arthroplasty may prove effective with certain prostheses (e.g., anatomic) because their use may result in less muscle detachment.(6) An observational study (Level 3 evidence) found that the incidence of severe embolic events during hip relocation was lower with robotic arthroplasty than with manual surgery.(7) An observational study (Level 3 evidence) found that there was no significant difference in gait analyses of patients who underwent robotic-assisted total hip arthroplasty using robotic surgery compared to patients who were treated with conventional cementless total hip arthroplasty.(8) An observational study (Level 3 evidence) compared outcomes of total knee arthroplasty between patients undergoing robotic surgery and patients who were historical controls. Brief, qualitative results suggested that there was much broader variation of angles after manual total knee arthroplasty compared to the robotic technique and that there was no difference in knee functional scores or implant position at the 3 and 6 month follow-up.(9) PMID:23074452

  14. Neurosurgical robotic arm drilling navigation system.

    PubMed

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was about to be drilled through. Three kinds of experiments were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Current state of computer navigation and robotics in unicompartmental and total knee arthroplasty: a systematic review with meta-analysis.

    PubMed

    van der List, Jelle P; Chawla, Harshvardhan; Joskowicz, Leo; Pearle, Andrew D

    2016-11-01

    Recently, there has been growing interest in surgical variables that are intraoperatively controlled by orthopaedic surgeons, including lower leg alignment, component positioning and soft tissue balancing. Since tighter control over these factors is associated with improved outcomes of unicompartmental knee arthroplasty and total knee arthroplasty (TKA), several computer navigation and robotic-assisted systems have been developed. Although mechanical axis accuracy and component positioning have been shown to improve with computer navigation, no superiority in functional outcomes has yet been shown. This could be explained by the fact that many differences exist between the number and type of surgical variables these systems control. Most systems control lower leg alignment and component positioning, while some in addition control soft tissue balancing. Finally, robotic-assisted systems have the additional advantage of improving surgical precision. A systematic search in PubMed, Embase and Cochrane Library resulted in 40 comparative studies and three registries on computer navigation reporting outcomes of 474,197 patients, and 21 basic science and clinical studies on robotic-assisted knee arthroplasty. Twenty-eight of these comparative computer navigation studies reported Knee Society Total scores in 3504 patients. Stratifying by type of surgical variables, no significant differences were noted in outcomes between surgery with computer-navigated TKA controlling for alignment and component positioning versus conventional TKA (p = 0.63). However, significantly better outcomes were noted following computer-navigated TKA that also controlled for soft tissue balancing versus conventional TKA (mean difference 4.84, 95 % Confidence Interval 1.61, 8.07, p = 0.003). A literature review of robotic systems showed that these systems can, similarly to computer navigation, reliably improve lower leg alignment, component positioning and soft tissue balancing. 
Furthermore, two studies comparing robotic-assisted with computer-navigated surgery reported superiority of robotic-assisted surgery in controlling these factors. Manually controlling all these surgical variables can be difficult for the orthopaedic surgeon. Findings in this study suggest that computer navigation or robotic assistance may help manage these multiple variables and could improve outcomes. Future studies assessing the role of soft tissue balancing in knee arthroplasty and long-term follow-up studies assessing the role of computer-navigated and robotic-assisted knee arthroplasty are needed.

  16. Improvement of the insertion axis for cochlear implantation with a robot-based system.

    PubMed

    Torres, Renato; Kazmitcheff, Guillaume; De Seta, Daniele; Ferrary, Evelyne; Sterkers, Olivier; Nguyen, Yann

    2017-02-01

    It has previously been reported that alignment of the insertion axis along the basal turn of the cochlea depends on the surgeon's experience. In this experimental study, we assessed technological assistance, such as navigation or a robot-based system, to improve the insertion axis during cochlear implantation. A preoperative cone beam CT and a mastoidectomy with a posterior tympanotomy were performed on four temporal bones. The optimal insertion axis was defined as the axis closest to the scala tympani centerline that avoids the facial nerve. A neuronavigation system, a robot assistance prototype, and software allowing a semi-automated alignment of the robot were used to align an insertion tool with the optimal insertion axis. Four procedures were performed and repeated three times in each temporal bone: manual, manual navigation-assisted, robot-based navigation-assisted, and robot-based semi-automated. The angle between the optimal axis and the insertion tool axis was measured in the four procedures. The error was 8.3° ± 2.82° for the manual procedure (n = 24), 8.6° ± 2.83° for the manual navigation-assisted procedure (n = 24), 5.4° ± 3.91° for the robot-based navigation-assisted procedure (n = 24), and 3.4° ± 1.56° for the robot-based semi-automated procedure (n = 12). Higher accuracy was observed with the semi-automated robot-based technique than with the manual and manual navigation-assisted procedures (p < 0.01). Combining a navigation system with manual insertion did not improve alignment accuracy, owing to the lack of a user-friendly interface. By contrast, a semi-automated robot-based system reduces both the error and the variability of the alignment with a defined optimal axis.
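
The alignment error reported above is the angle between the optimal insertion axis and the tool axis. For two direction vectors this is the arccosine of their normalised dot product; the function below is a standard geometric computation, not the authors' measurement code:

```python
import math

# Angle between two 3-D direction vectors, as used to express insertion-axis
# alignment error in degrees. Generic computation, clamped for numerical safety.

def axis_angle_deg(u, v):
    """Angle in degrees between 3-D direction vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = max(-1.0, min(1.0, dot / (nu * nv)))   # clamp against rounding error
    return math.degrees(math.acos(c))
```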

  17. Mobile Robot Designed with Autonomous Navigation System

    NASA Astrophysics Data System (ADS)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place ever greater demands on them; one such demand is that robots be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the floor, and automatically find its charging station; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a robot navigation scheme, SLAM, with which a robot can build a map of a totally unfamiliar environment and, at the same time, determine its own position within it, so as to achieve autonomous navigation.

  18. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    PubMed Central

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, a FD-CT robot-driven angiographic C-arm system, and a navigation camera. Also, new control strategies for robot manipulation in the clinical environment are introduced. A method for fast calibration of the involved components and the preliminary accuracy tests of the whole possible errors chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is of 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to close loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy which is of 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551
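
The calibration of the robot against the navigation camera described above can, in its simplest point-based form, be reduced to finding the rigid transform that best maps points measured in the robot frame onto the same points seen by the camera (the Kabsch/Procrustes solution). This is a generic sketch of that technique, not the authors' calibration routine:

```python
import numpy as np

# Least-squares rigid registration of corresponding 3-D point sets, plus the
# rms residual that quantifies calibration quality (analogous in spirit to
# the rms errors reported above).

def rigid_calibrate(robot_pts, camera_pts):
    """Return (R, t) minimising ||R @ p + t - q|| over corresponding points."""
    P = np.asarray(robot_pts, float)
    Q = np.asarray(camera_pts, float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                  # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = qc - R @ pc
    return R, t

def rms_residual(robot_pts, camera_pts, R, t):
    """Root-mean-square distance between transformed robot points and camera points."""
    P = np.asarray(robot_pts, float)
    Q = np.asarray(camera_pts, float)
    err = (R @ P.T).T + t - Q
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```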

  19. Environment exploration and SLAM experiment research based on ROS

    NASA Astrophysics Data System (ADS)

    Li, Zhize; Zheng, Wei

    2017-11-01

    Robots need to acquire information about their surroundings by means of map learning. SLAM and navigation for mobile robots are developing rapidly. ROS (Robot Operating System) is widely used in robotics because of its convenient code reuse and open-source nature, and numerous excellent SLAM and navigation algorithms have been ported to ROS packages. hector_slam is one of them: it can build occupancy grid maps online and quickly, with low computational resource requirements. These characteristics make an embedded handheld mapping system possible. Similarly, hector_navigation performs well in the navigation field; it can carry out path planning and environment exploration by itself using only an environmental sensor. Combining hector_navigation with hector_slam realizes low-cost environment exploration, path planning and SLAM at the same time.
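    The occupancy grid maps that grid-based SLAM packages such as hector_slam build can be illustrated with a minimal log-odds update: cells a laser beam passes through are pushed toward "free", the cell at the beam endpoint toward "occupied". The sketch below is a generic illustration under invented parameters (cell weights, grid size, and the `OccupancyGrid` class are assumptions, not hector_slam's API):

```python
import math

class OccupancyGrid:
    """Minimal log-odds occupancy grid (illustrative, not hector_slam's)."""
    def __init__(self, size=20, l_occ=0.85, l_free=-0.4):
        self.logodds = [[0.0] * size for _ in range(size)]
        self.l_occ, self.l_free = l_occ, l_free

    def integrate_ray(self, x0, y0, x1, y1):
        """Mark cells along the beam as free and the endpoint as occupied."""
        n = max(abs(x1 - x0), abs(y1 - y0))
        for i in range(n):  # cells the beam passes through
            cx = x0 + round(i * (x1 - x0) / n)
            cy = y0 + round(i * (y1 - y0) / n)
            self.logodds[cy][cx] += self.l_free
        self.logodds[y1][x1] += self.l_occ  # beam endpoint: obstacle

    def prob(self, x, y):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.logodds[y][x]))

grid = OccupancyGrid()
for _ in range(5):  # repeated scans of the same beam reinforce the estimate
    grid.integrate_ray(2, 2, 10, 2)
print(grid.prob(10, 2), grid.prob(6, 2))
```

Repeated observations drive the endpoint cell's probability toward 1 and the traversed cells toward 0, which is why such maps sharpen as more scans arrive.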

  20. INL Autonomous Navigation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.

  1. Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1989-09-01

    The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide understanding of characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.

  2. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.

    PubMed

    Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning

    2018-03-16

    A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the problem of a snake robot localizing itself in an application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion-characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation and positioning based on an Extended Kalman Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method, with a position error of less than 5% of Total Traveled Distance (TDD). In a short-distance environment, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots.
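    The zero-state constraint idea above, in which phases of the gait where a link is known to be stationary let the filter observe that its velocity must be zero, can be sketched with a 1-D EKF. Everything below (the two-element state, the noise values, the `zupt_update` helper) is an illustrative assumption, not the paper's filter:

```python
def predict(x, P, dt=0.01, q=0.1):
    """Constant-velocity prediction: p' = p + v*dt, with process noise q on v."""
    p, v = x
    x = [p + v * dt, v]
    # P' = F P F^T + Q, written out for the 2x2 case with F = [[1, dt], [0, 1]]
    P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1],
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    return x, P

def zupt_update(x, P, r=1e-4):
    """Zero-velocity pseudo-measurement z = 0 with H = [0, 1]."""
    s = P[1][1] + r                       # innovation covariance
    k = [P[0][1] / s, P[1][1] / s]        # Kalman gain
    y = 0.0 - x[1]                        # innovation: measured 0 minus estimate
    x = [x[0] + k[0] * y, x[1] + k[1] * y]
    P = [[P[0][0] - k[0] * P[1][0], P[0][1] - k[0] * P[1][1]],
         [P[1][0] - k[1] * P[1][0], P[1][1] - k[1] * P[1][1]]]
    return x, P

x, P = [0.0, 0.5], [[1.0, 0.0], [0.0, 1.0]]   # start with a drifting velocity
for _ in range(50):
    x, P = predict(x, P)
    x, P = zupt_update(x, P)
print(x, P[1][1])
```

The update repeatedly pulls the velocity estimate back to zero and shrinks its covariance, which is the mechanism that bounds IMU drift during constrained motion phases.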

  3. The real-world navigator

    NASA Technical Reports Server (NTRS)

    Balabanovic, Marko; Becker, Craig; Morse, Sarah K.; Nourbakhsh, Illah R.

    1994-01-01

    The success of every mobile robot application hinges on the ability to navigate robustly in the real world. The problem of robust navigation is separable from the challenges faced by any particular robot application. We offer the Real-World Navigator as a solution architecture that includes a path planner, a map-based localizer, and a motion control loop that combines reactive avoidance modules with deliberate goal-based motion. Our architecture achieves a high degree of reliability by maintaining and reasoning about an explicit description of positional uncertainty. We provide two implementations of real-world robot systems that incorporate the Real-World Navigator. The Vagabond Project culminated in a robot that successfully navigated a portion of the Stanford University campus. The Scimmer project developed successful entries for the AIAA 1993 Robotics Competition, placing first in one of the two contests entered.

  4. Two modular neuro-fuzzy system for mobile robot navigation

    NASA Astrophysics Data System (ADS)

    Bobyr, M. V.; Titov, V. S.; Kulabukhov, S. A.; Syryamkin, V. I.

    2018-05-01

    The article considers a fuzzy model for navigation of a mobile robot operating in two modes. In the first mode the robot follows a line; in the second, it searches for a target in unknown space. The structural and schematic circuits of the four-wheeled mobile robot are presented, and the robot's movement is described on the basis of a two-module neuro-fuzzy system. The algorithm of neuro-fuzzy inference used in this two-module control system is given, and an experimental model of the robot together with a simulation of the neuro-fuzzy control algorithm is presented.

  5. On Navigation Sensor Error Correction

    NASA Astrophysics Data System (ADS)

    Larin, V. B.

    2016-01-01

    The navigation problem for the simplest wheeled robotic vehicle is solved by measuring only kinematic parameters, doing without accelerometers and angular-rate sensors. It is supposed that the steerable-wheel angle sensor has a bias that must be corrected. The navigation parameters are corrected using GPS. The proposed approach regards the wheeled robot as a system with nonholonomic constraints. The performance of such a navigation system is demonstrated by way of an example.

  6. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.

  7. Robotics in invasive cardiac electrophysiology.

    PubMed

    Shurrab, Mohammed; Schilling, Richard; Gang, Eli; Khan, Ejaz M; Crystal, Eugene

    2014-07-01

    Robotic systems allow for mapping and ablation of different arrhythmia substrates, replacing hand maneuvering of intracardiac catheters with machine steering. Currently there are four commercially available robotic systems. The Niobe magnetic navigation system (Stereotaxis Inc., St Louis, MO) and the Sensei robotic navigation system (Hansen Medical Inc., Mountain View, CA) are established platforms with at least 10 years of clinical studies examining their efficacy and safety. The AMIGO Remote Catheter System (Catheter Robotics, Inc., Mount Olive, NJ) and Catheter Guidance Control and Imaging (Magnetecs, Inglewood, CA) are in earlier phases of implementation, with ongoing feasibility and some limited clinical studies. This review discusses the advantages and limitations of each existing system and outlines an ideal future robotic system that may combine the most promising features of the current ones.

  8. Navigation of robotic system using cricket motes

    NASA Astrophysics Data System (ADS)

    Patil, Yogendra J.; Baine, Nicholas A.; Rattan, Kuldip S.

    2011-06-01

    This paper presents a novel algorithm for self-mapping of cricket motes that can be used for indoor navigation of autonomous robotic systems. The cricket system is a wireless sensor network that provides an indoor localization service to its user via acoustic ranging techniques. The behavior of the ultrasonic transducer on the cricket mote is studied, and the regions where satisfactory distance measurements can be obtained are recorded. Placing the motes in these regions results in fine-grained mapping of the cricket motes. Trilateration is used to obtain a rigid coordinate system, but it is insufficient if the network is to be used for navigation. A modified SLAM algorithm is applied to overcome the shortcomings of trilateration. Finally, the self-mapped cricket motes can be used for navigation of autonomous robotic systems in indoor locations.
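    The trilateration step mentioned above can be sketched by linearizing the range equations against one reference beacon, which turns the problem into a small linear solve. The anchor layout and the `trilaterate` helper below are hypothetical, not the cricket system's actual geometry:

```python
def trilaterate(anchors, ranges):
    """2-D position from three beacons: subtract the first range equation
    from the others to obtain two linear equations in (x, y)."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = ranges
    # 2(xi-x0)x + 2(yi-y0)y = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0*d0 - d1*d1 + x1*x1 - x0*x0 + y1*y1 - y0*y0
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0*d0 - d2*d2 + x2*x2 - x0*x0 + y2*y2 - y0*y0
    det = a1 * b2 - a2 * b1      # non-zero iff the anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
truth = (1.0, 2.0)
ranges = [((truth[0] - x) ** 2 + (truth[1] - y) ** 2) ** 0.5 for x, y in anchors]
print(trilaterate(anchors, ranges))   # recovers the true position
```

With noisy acoustic ranges the same linear system is typically solved in a least-squares sense over more than three beacons, which is one reason the paper augments trilateration with SLAM.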

  9. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on visual odometry. Results of an experiment quantifying errors in the calculated distance traveled due to wheel slip are presented. It is shown that the use of computer vision allows erroneous robot coordinates to be corrected with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. Results of an experiment on mobile robot navigation with this control system are presented.

  10. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class triangulates using fixed, artificial landmarks. A third class builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.

  11. Learning Long-Range Vision for an Offroad Robot

    DTIC Science & Technology

    2008-09-01

    Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems remain short-range; unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.

  12. Insect-Inspired Optical-Flow Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven

    2005-01-01

    Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: the concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
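    A classic insect-inspired use of optical flow is corridor centering: translational flow magnitude scales roughly as speed divided by distance, so the nearer wall produces larger flow, and the robot steers away from it. The sketch below is a minimal illustration with invented flow values and gain, not the flight-control law of the robots described above:

```python
def flow_balance_yaw(left_flow, right_flow, gain=0.5):
    """Command yaw rate proportional to the left/right optical-flow imbalance.
    Positive output = turn left (away from a nearer right wall)."""
    l = sum(left_flow) / len(left_flow)
    r = sum(right_flow) / len(right_flow)
    return gain * (r - l)

v = 1.0                                   # forward speed, arbitrary units
left_wall, right_wall = 0.5, 2.0          # distances to the corridor walls
left_flow  = [v / left_wall] * 8          # idealized per-pixel flow magnitudes
right_flow = [v / right_wall] * 8
print(flow_balance_yaw(left_flow, right_flow))
```

Here the left wall is nearer, its flow is larger, and the command comes out negative: turn right, away from the obstruction. The same imbalance signal is what an optical mouse chip pair can provide cheaply, without panoramic optics or a full image-processing computer.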

  13. Situationally driven local navigation for mobile robots. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Slack, Marc Glenn

    1990-01-01

    For mobile robots to autonomously accommodate dynamically changing navigation tasks in a goal-directed fashion, they must employ navigation plans. Any such plan must provide for the robot's immediate and continuous need for guidance while remaining highly flexible in order to avoid costly computation each time the robot's perception of the world changes. Due to the world's uncertainties, creation and maintenance of navigation plans cannot involve arbitrarily complex processes, as the robot's perception of the world will be in constant flux, requiring modifications to be made quickly if they are to be of any use. This work introduces navigation templates (NaT's) which are building blocks for the construction and maintenance of rough navigation plans which capture the relationship that objects in the world have to the current navigation task. By encoding only the critical relationship between the objects in the world and the navigation task, a NaT-based navigation plan is highly flexible; allowing new constraints to be quickly incorporated into the plan and existing constraints to be updated or deleted from the plan. To satisfy the robot's need for immediate local guidance, the NaT's forming the current navigation plan are passed to a transformation function. The transformation function analyzes the plan with respect to the robot's current location to quickly determine (a few times a second) the locally preferred direction of travel. This dissertation presents NaT's and the transformation function as well as the needed support systems to demonstrate the usefulness of the technique for controlling the actions of a mobile robot operating in an uncertain world.

  14. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction and navigation-algorithm development of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of the two main controller algorithms, both PID-based, on the robot model; simulation is performed in the SIMULINK environment. The controllers serve primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position; the navigation system must be calibrated before the navigation process starts. Almost all earlier template matching algorithms in the open literature can only trace the robot, but the proposed algorithm can also locate other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, additional features such as speech recognition and object detection are added. For object detection, the single-board computer Raspberry Pi is used; the system analyzes images captured via the camera, which are processed through background subtraction followed by active noise reduction.
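    The PID loops mentioned above follow the standard discrete form. The sketch below shows that general form driving a simple first-order plant; the gains, time constant and plant are stand-ins for illustration, not the robot's balancing dynamics:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt              # accumulated error
        deriv = (err - self.prev_err) / self.dt     # error rate
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# drive a first-order plant tau * y' = u - y toward setpoint 1.0
pid, y, dt = PID(2.0, 1.0, 0.0, 0.01), 0.0, 0.01
for _ in range(3000):
    u = pid.step(1.0, y)
    y += dt * (u - y) / 0.5       # tau = 0.5 s
print(y)
```

The integral term is what removes the steady-state offset here; for a self-balancing robot the same structure runs on the tilt error, with gains tuned against the (unstable) pendulum dynamics instead.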

  15. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    PubMed Central

    Dou, Lihua; Su, Zhong; Liu, Ning

    2018-01-01

    A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the problem of a snake robot localizing itself in an application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion-characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation and positioning based on an Extended Kalman Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method, with a position error of less than 5% of Total Traveled Distance (TDD). In a short-distance environment, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots. PMID:29547515

  16. Mamdani Fuzzy System for Indoor Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.

    2011-06-01

    Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical computing methods such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
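    A Mamdani controller of the kind described follows a fixed pipeline: fuzzify the input, clip each rule's consequent (min-implication), aggregate with max, and defuzzify by centroid. The two-rule sketch below uses invented membership functions and rule base, purely to show the pipeline, not the paper's controller:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_speed(dist):
    """IF distance NEAR THEN speed SLOW; IF distance FAR THEN speed FAST.
    Min-implication, max-aggregation, centroid defuzzification."""
    w_near = tri(dist, -0.1, 0.0, 1.0)    # fuzzified input memberships
    w_far  = tri(dist,  0.0, 1.0, 1.1)
    num = den = 0.0
    for s in [i / 100.0 for i in range(101)]:    # discretized output universe
        slow = min(w_near, tri(s, -0.1, 0.0, 1.0))   # clipped consequents
        fast = min(w_far,  tri(s,  0.0, 1.0, 1.1))
        mu = max(slow, fast)                          # aggregated output set
        num += s * mu
        den += mu
    return num / den if den else 0.0

print(mamdani_speed(0.2), mamdani_speed(0.9))   # closer obstacle, lower speed
```

Because the rules overlap, the output varies smoothly with the input, which is the practical appeal of fuzzy control over hard thresholds for navigation.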

  17. Conference on Space and Military Applications of Automation and Robotics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.

  18. Tandem robot control system and method for controlling mobile robots in tandem

    DOEpatents

    Hayward, David R.; Buttz, James H.; Shirey, David L.

    2002-01-01

    A control system for controlling mobile robots provides a way to control mobile robots, connected in tandem with coupling devices, to navigate across difficult terrain or in closed spaces. The mobile robots can be controlled cooperatively as a coupled system in linked mode or controlled individually as separate robots.

  19. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    NASA Technical Reports Server (NTRS)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package "Argon" is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  20. Embedded mobile farm robot for identification of diseased plants

    NASA Astrophysics Data System (ADS)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used in farms for identification of diseased plants. It puts forth two major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, the robot mechanical assembly, a camera and infrared sensors has been used; a Mini2440 board running an embedded Linux operating system serves as the controller.

  1. SLAM algorithm applied to robotics assistance for navigation in unknown environments.

    PubMed

    Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo

    2010-02-17

    The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas navigation of the mobile robot inside an environment is commanded by a Muscle-Computer Interface (MCI). A sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented; the features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees and two young normally-limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn to use the MCI. The SLAM results showed a consistent reconstruction of the environment, and the obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI, and real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the kinematic model of a motorized wheelchair; this advantage can be exploited for autonomous wheelchair navigation.
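    The way a small discrete command set like the five MCI commands can drive a kinematic controller is easy to sketch with the unicycle model a differential-drive robot (or motorized wheelchair) obeys. The command-to-velocity mapping below is hypothetical; the paper does not give its actual set-points:

```python
import math

# Hypothetical mapping of the five MCI commands to unicycle set-points (v, w)
COMMANDS = {"start": (0.3, 0.0), "stop": (0.0, 0.0),
            "left": (0.3, 0.5), "right": (0.3, -0.5), "exit": (0.0, 0.0)}

def integrate(pose, cmd, dt=0.1, steps=10):
    """Integrate the unicycle model: x' = v cos(th), y' = v sin(th), th' = w."""
    x, y, th = pose
    v, w = COMMANDS[cmd]
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return (x, y, th)

pose = (0.0, 0.0, 0.0)
pose = integrate(pose, "start")   # drive straight for 1 s
pose = integrate(pose, "left")    # then arc to the left for 1 s
print(pose)
```

Because the wheelchair obeys the same kinematics, a controller validated on the robot transfers directly, which is the point made at the end of the abstract.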

  2. Sandia National Laboratories proof-of-concept robotic security vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, J.J.; Jones, D.P.; Klarer, P.R.

    1989-01-01

    Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid in testing interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, where applicable. An existing teleoperational test-bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.

  3. Enhancing fuzzy robot navigation systems by mimicking human visual perception of natural terrain traversibility

    NASA Technical Reports Server (NTRS)

    Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.

    2001-01-01

    This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.; Kareti, S.; Shi, Weimin

    A formal framework for navigating a robot in a geometric terrain with an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms which can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider algorithms that are shown to navigate correctly without much consideration given to performance parameters such as distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or on the ratio of the distance traversed to the shortest path length (computed as if the terrain model were known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.

  5. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenge was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound-tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of a microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can localize and track the olfactory robot within 2 min. The devised multi-robot system achieves target search with a considerable success ratio and high stability. PMID:22319401
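    The time delay estimation (TDE) step above amounts to finding the lag that maximizes the cross-correlation between two microphone signals; the lag, the sample rate and the microphone spacing then give the bearing. The sketch below uses a synthetic noise signal and an invented delay, not the paper's acoustic data:

```python
import random

def estimate_delay(a, b):
    """Return the lag (in samples) maximizing the cross-correlation of b
    against a, assuming b is a delayed copy of a."""
    best_lag, best_val = 0, float("-inf")
    n = len(a)
    for lag in range(n // 2):
        val = sum(a[i] * b[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

random.seed(0)
sig = [random.uniform(-1.0, 1.0) for _ in range(1000)]  # noise-like source
true_delay = 23                       # samples between the two microphones
mic1 = sig
mic2 = [0.0] * true_delay + sig[:-true_delay]  # delayed copy at the far mic
lag = estimate_delay(mic1, mic2)
print(lag)
```

Given the sample rate fs, mic spacing d, and speed of sound c, the bearing follows from sin(theta) = c * lag / (fs * d); noise-like (broadband) signals give a much sharper correlation peak than pure tones, whose periodicity makes the lag ambiguous.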

  6. Experiments on robot-assisted navigated drilling and milling of bones for pedicle screw placement.

    PubMed

    Ortmaier, T; Weiss, H; Döbele, S; Schreiber, U

    2006-12-01

    This article presents experimental results for robot-assisted navigated drilling and milling for pedicle screw placement. The preliminary study was carried out in order to gain first insights into positioning accuracies and machining forces during hands-on robotic spine surgery. Additionally, the results formed the basis for the development of a new robot for surgery. A simplified anatomical model is used to derive the accuracy requirements. The experimental set-up consists of a navigation system and an impedance-controlled light-weight robot holding the surgical instrument. The navigation system is used to position the surgical instrument and to compensate for pose errors during machining. Holes are drilled in artificial bone and bovine spine. A quantitative comparison of the drill-hole diameters was achieved using a computer. The interaction forces and pose errors are discussed with respect to the chosen machining technology and control parameters. Within the technological boundaries of the experimental set-up, it is shown that the accuracy requirements can be met and that milling is superior to drilling. It is expected that robot assisted navigated surgery helps to improve the reliability of surgical procedures. Further experiments are necessary to take the whole workflow into account. Copyright 2006 John Wiley & Sons, Ltd.

  7. Development of a force-reflecting robotic platform for cardiac catheter navigation.

    PubMed

    Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung

    2010-11-01

    Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration with a 3-degree-of-freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback, using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions (forward and backward movement, rolling, and catheter tip bending) are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements the master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implements the motion control function and will undergo safety and performance evaluation in animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  8. Preliminary study on magnetic tracking-based planar shape sensing and navigation for flexible surgical robots in transoral surgery: methods and phantom experiments.

    PubMed

    Song, Shuang; Zhang, Changchun; Liu, Li; Meng, Max Q-H

    2018-02-01

    A flexible surgical robot can work in confined and complex environments, which makes it a good option for minimally invasive surgery. In order to utilize flexible manipulators in complicated and constrained surgical environments, it is of great significance to monitor the position and shape of the curvilinear manipulator in real time during the procedure. In this paper, we propose a magnetic tracking-based planar shape sensing and navigation system for flexible surgical robots in transoral surgery. The system provides real-time tip position and shape information of the robot during the operation. A wire-driven flexible robot with three degrees of freedom serves as the manipulator. A permanent magnet is mounted at the distal end of the robot, and its magnetic field can be sensed with a magnetic sensor array; the position and orientation of the tip can therefore be estimated using a tracking method. A shape sensing algorithm then estimates the real-time shape based on the tip pose. With the tip pose and shape displayed in the 3D reconstructed CT model, navigation can be achieved. Using the proposed system, we carried out planar navigation experiments on a skull phantom, touching three different target positions under the navigation of the skull display interface. During the experiments, the real-time shape was well monitored and the distance errors between the robot tip and the targets in the skull were recorded. The mean navigation error is [Formula: see text] mm, while the maximum error is 3.2 mm. The proposed method offers the advantages that no sensors need to be mounted on the robot and there is no line-of-sight problem. Experimental results verified the feasibility of the proposed method.

  9. Volunteers Oriented Interface Design for the Remote Navigation of Rescue Robots at Large-Scale Disaster Sites

    NASA Astrophysics Data System (ADS)

    Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi

    This paper aims to construct an efficient interface, similar to those widely used in daily life, to meet the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force feedback steering control and a wall of six monitors; it provides manual operation, similar to driving a car, to navigate a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor; it provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Results of experiments show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of practice with its operation. The steering wheel interface gives high navigation speed in open areas, regardless of the terrain and surface conditions of a disaster site. The mouse-screen interface is suited to exact navigation in complex structures, while imposing little stress on operators. The two interfaces are designed so that the operator can switch between them at any time, providing a combined, efficient navigation method.

  10. A biologically inspired meta-control navigation system for the Psikharpax rat robot.

    PubMed

    Caluwaerts, K; Staffa, M; N'Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M

    2012-06-01

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations, but the capacity of the model to work on a real robotic platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advance for learning architectures in robotics.
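
    A minimal sketch of context-dependent strategy selection by reinforcement learning, in the spirit of the meta-controller described above; the class, its tabular values and the epsilon-greedy rule are illustrative assumptions, not the authors' model.

```python
import random

# Illustrative context-dependent strategy selector: one learned value per
# (context, strategy) pair, updated from reward. Not the authors' model.
class StrategySelector:
    def __init__(self, strategies, alpha=0.1, epsilon=0.1):
        self.strategies = strategies
        self.q = {}                      # (context, strategy) -> learned value
        self.alpha, self.epsilon = alpha, epsilon

    def select(self, context):
        if random.random() < self.epsilon:        # occasional exploration
            return random.choice(self.strategies)
        return max(self.strategies,
                   key=lambda s: self.q.get((context, s), 0.0))

    def update(self, context, strategy, reward):
        old = self.q.get((context, strategy), 0.0)
        self.q[(context, strategy)] = old + self.alpha * (reward - old)
```

Because values are keyed by context, a previously learned preference is restored as soon as a known context is recognized again.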

  11. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    PubMed Central

    2010-01-01

    Background The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for people with disabilities and the elderly in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented, and a low-level behaviour strategy was also implemented to avoid collisions with the environment and moving agents. Results The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees and two young subjects with normal limbs. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment, and the obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose role could be reduced to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair; this advantage can be exploited for autonomous wheelchair navigation. PMID:20163735
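
    As an illustration of the kind of EKF machinery such a system builds on, the sketch below runs one predict/update cycle for a unicycle robot observing a single known point feature with a range-bearing measurement; the paper's actual algorithm is a sequential feature-based SLAM over lines and corners, so this is a simplified stand-in with invented noise matrices.

```python
import numpy as np

# Simplified stand-in for sequential feature-based EKF-SLAM: one
# predict/update cycle for a unicycle robot observing a single known point
# feature with a range-bearing measurement. Names and noise matrices are
# illustrative assumptions.
def ekf_step(x, P, u, z, landmark, Q, R, dt=0.1):
    px, py, th = x
    v, w = u
    # --- predict with the unicycle motion model ---
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q
    # --- update with a range-bearing observation of the feature ---
    dx, dy = landmark[0] - x_pred[0], landmark[1] - x_pred[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0],
                  [ dy / q,          -dx / q,          -1]])
    y = z - z_hat
    y[1] = np.arctan2(np.sin(y[1]), np.cos(y[1]))  # wrap bearing residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred
```

A full SLAM filter would stack the feature coordinates into the state vector as well; here the landmark is treated as known for brevity.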

  12. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.

    PubMed

    Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    Given the high risk and high accuracy demanded in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is established using quaternion-based and iterative closest point registration algorithms. A hand-eye coordination model based on optical navigation is established to drive the end effector of the robot to the target position along the planned path. A closed-loop, "kinematics + optics" hybrid motion control method is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.

  13. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    PubMed Central

    Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    Given the high risk and high accuracy demanded in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is established using quaternion-based and iterative closest point registration algorithms. A hand-eye coordination model based on optical navigation is established to drive the end effector of the robot to the target position along the planned path. A closed-loop, “kinematics + optics” hybrid motion control method is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948

  14. Dynamic multisensor fusion for mobile robot navigation in an indoor environment

    NASA Astrophysics Data System (ADS)

    Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.

    2001-10-01

    This study is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion (sonar, CCD camera and IR sensors) for map-building and navigation, and to describe an experimental mobile robot designed to operate autonomously in both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent books and review papers cover them thoroughly; instead we focus on the main results relevant to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. The paper first deals with the general principles of the navigation and guidance architecture, then with the detailed functions of environment updating, obstacle detection and motion assessment, together with first results from simulation runs.

  15. Indoor Navigation using Direction Sensor and Beacons

    NASA Technical Reports Server (NTRS)

    Shields, Joel; Jeganathan, Muthu

    2004-01-01

    A system for indoor navigation of a mobile robot includes (1) modulated infrared beacons at known positions on the walls and ceiling of a room and (2) a cameralike sensor, comprising a wide-angle lens with a position-sensitive photodetector at the focal plane, mounted in a known position and orientation on the robot. The system also includes a computer running special-purpose software that processes the sensor readings to obtain the position and orientation of the robot in all six degrees of freedom in a coordinate system embedded in the room.

  16. Experiments in autonomous robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamel, W.R.

    1987-01-01

    The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.

  17. Magnetic resonance imaging compatible remote catheter navigation system with 3 degrees of freedom.

    PubMed

    Tavallaei, M A; Lavdas, M K; Gelman, D; Drangova, M

    2016-08-01

    To facilitate MRI-guided catheterization procedures, we present an MRI-compatible remote catheter navigation system that allows remote navigation of steerable catheters with 3 degrees of freedom. The system consists of a user interface (master), a robot (slave), and an ultrasonic motor control servomechanism. The interventionalist applies conventional motions (axial, radial and plunger manipulations) to an input catheter in the master unit; this user input is measured and used by the servomechanism to control a compact catheter-manipulating robot, such that it replicates the interventionalist's input motion on the patient catheter. The performance of the system was evaluated in terms of MRI compatibility (SNR and artifact), feasibility of remote navigation under real-time MRI guidance, and motion replication accuracy. Real-time MRI experiments demonstrated that the catheter was successfully navigated remotely to the desired target references in all 3 degrees of freedom. The system had an absolute error of [Formula: see text]1 mm in axial catheter motion replication over 30 mm of travel and [Formula: see text] for radial catheter motion replication over [Formula: see text]. The worst-case SNR drop was observed to be [Formula: see text]3 %; the robot did not introduce any artifacts into the MR images. An MRI-compatible compact remote catheter navigation system has been developed that allows remote navigation of steerable catheters with 3 degrees of freedom. The proposed system allows safe and accurate remote catheter navigation, within conventional closed-bore scanners, without degrading MR image quality.

  18. Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANCINI,THOMAS R.

    2001-04-01

    Several years ago Sandia National Laboratories developed a prototype interior robot [1] that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities.

  19. Robotic navigation and ablation.

    PubMed

    Malcolme-Lawes, L; Kanagaratnam, P

    2010-12-01

    Robotic technologies have been developed to allow optimal catheter stability and reproducible catheter movements, with the aim of achieving contiguous and transmural lesion delivery. Two systems for remote navigation of catheters within the heart have been developed: the first is based on a magnetic navigation system (MNS; Niobe, Stereotaxis, St. Louis, MO, USA), the second on a steerable sheath system (Sensei, Hansen Medical, Mountain View, CA, USA). Both robotic and magnetic navigation systems have proven feasible for ablation of both simple and complex arrhythmias, particularly atrial fibrillation. Studies to date have shown success rates for AF ablation similar to those of manual ablation, with many groups finding a reduction in fluoroscopy times. However, early cases in the learning curve showed longer procedure times, mainly due to additional setup time. With centres performing increasing numbers of robotic ablations and the introduction of pressure monitoring, lower power settings and instinctive driving software, complication rates are falling, and fluoroscopy times have been lower than with manual ablation in many studies. As the demand for catheter ablation of arrhythmias such as atrial fibrillation grows and the number of centres performing these ablations increases, so will the demand for systems that reduce the hand-skill requirement and improve operator comfort.

  20. DEMONSTRATION OF AUTONOMOUS AIR MONITORING THROUGH ROBOTICS

    EPA Science Inventory

    This project included modifying an existing teleoperated robot to include autonomous navigation, large object avoidance, and air monitoring and demonstrating that prototype robot system in indoor and outdoor environments. An existing teleoperated "Surveyor" robot developed by ARD...

  1. Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration

    NASA Astrophysics Data System (ADS)

    Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.

    2017-09-01

    One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling, also known as the mobile robot path planning problem. In this paper, we examine the performance of a robust searching algorithm that relies on harmonic potentials of the environment to generate a smooth and safe path for mobile robot navigation in a static, known indoor environment. The harmonic potentials are discretized using the Laplacian operator to form a system of algebraic approximation equations. This linear algebraic system is then solved via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is compared with existing algorithms in terms of the number of iterations and execution time. The results show that the proposed algorithm performs better than the existing methods.
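
    To illustrate the approach, the sketch below solves Laplace's equation on an occupancy grid with plain successive over-relaxation (a simpler stand-in for the paper's 4-EGAOR scheme) and then follows the steepest descent of the potential to the goal; grid size, boundary values and the relaxation factor are assumptions for the example.

```python
import numpy as np

# Harmonic-potential path planning sketch: obstacles/walls are held at high
# potential (1.0), the goal at 0.0, and Laplace's equation is relaxed with
# plain SOR (a stand-in for the paper's 4-EGAOR). The path then descends the
# potential, which has no local minima in free space.
def harmonic_path(occ, start, goal, omega=1.8, iters=5000):
    pot = np.ones_like(occ, dtype=float)   # boundary/obstacle value = 1
    pot[goal] = 0.0                        # goal is the global minimum
    free = (occ == 0)
    free[goal] = False                     # goal value stays fixed
    for _ in range(iters):                 # SOR relaxation sweeps
        for i in range(1, occ.shape[0] - 1):
            for j in range(1, occ.shape[1] - 1):
                if free[i, j]:
                    nb = 0.25 * (pot[i-1, j] + pot[i+1, j]
                                 + pot[i, j-1] + pot[i, j+1])
                    pot[i, j] += omega * (nb - pot[i, j])
    # gradient descent: step to the lowest-potential 4-neighbour
    path, cur = [start], start
    while cur != goal and len(path) < occ.size:
        i, j = cur
        cur = min([(i-1, j), (i+1, j), (i, j-1), (i, j+1)],
                  key=lambda c: pot[c])
        path.append(cur)
    return path
```

Because a converged harmonic potential has no interior local minima, the greedy descent cannot get trapped away from the goal.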

  2. Electromagnetic navigational bronchoscopy and robotic-assisted thoracic surgery.

    PubMed

    Christie, Sara

    2014-06-01

    With the use of electromagnetic navigational bronchoscopy and robotics, lung lesions can be diagnosed and resected during one surgical procedure. Global positioning system technology allows surgeons to identify and mark a thoracic tumor, and then robotics technology allows them to perform minimally invasive resection and cancer staging procedures. Nurses on the perioperative robotics team must consider the logistics of providing safe and competent care when performing combined procedures during one surgical encounter. Instrumentation, OR organization and room setup, and patient positioning are important factors to consider to complete the procedure systematically and efficiently. This revolutionary concept of combining navigational bronchoscopy with robotics requires a team of dedicated nurses to facilitate the sequence of events essential for providing optimal patient outcomes in highly advanced surgical procedures. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  3. An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry

    NASA Astrophysics Data System (ADS)

    Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro

    This paper proposes a lightweight navigation platform for human-scale robots that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization. The gyro-assisted odometry provides highly accurate positioning by dead-reckoning alone. The 3D laser scanner has a wide field of view and a uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive, utilizing a particle filter on a 2D grid map generated by projecting 3D points onto the ground. The system uses small, low-cost sensors and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in fully autonomous mode multiple times.
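
    The gyro-assisted odometry idea (translation from the wheel encoders, rotation from the gyro) can be sketched as follows; the function and its arguments are hypothetical, not the authors' code.

```python
import math

# Sketch of gyro-assisted odometry: wheel encoders give distance travelled,
# but heading comes from the (less drift-prone) gyro yaw rate rather than
# from the wheel-speed difference.
def integrate_pose(pose, d_left, d_right, gyro_yaw_rate, dt):
    x, y, theta = pose
    d = 0.5 * (d_left + d_right)   # translation from the encoders
    theta += gyro_yaw_rate * dt    # rotation from the gyro, not the wheels
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    return (x, y, theta)
```

On slippery or uneven ground the wheel-difference heading diverges quickly, while the gyro-derived heading stays usable, which is what makes dead-reckoning-only positioning accurate enough here.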

  4. Determining Locations by Use of Networks of Passive Beacons

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Gray, Andrew; Jennings, Esther

    2009-01-01

    Networks of passive radio beacons spanning moderate-sized terrain areas have been proposed to aid navigation of small robotic aircraft that would be used to explore Saturn's moon Titan. Such networks could also be used on Earth to aid navigation of robotic aircraft, land vehicles, or vessels engaged in exploration or reconnaissance in situations or locations (e.g., underwater locations) in which Global Positioning System (GPS) signals are unreliable or unavailable. Prior to use, it would be necessary to pre-position the beacons at known locations that would be determined by use of one or more precise independent global navigation system(s). Thereafter, while navigating over the area spanned by a given network of passive beacons, an exploratory robot would use the beacons to determine its position precisely relative to the known beacon positions (see figure). If it were necessary for the robot to explore multiple, separated terrain areas spanned by different networks of beacons, the robot could use a long-haul, relatively coarse global navigation system for the lower-precision position determination needed during transit between such areas. The proposed method of precise determination of position of an exploratory robot relative to the positions of passive radio beacons is based partly on the principles of radar and partly on the principles of radio-frequency identification (RFID) tags. The robot would transmit radar-like signals that would be modified and reflected by the passive beacons. The distance to each beacon would be determined from the round-trip propagation time and/or round-trip phase shift of the signal returning from that beacon. Signals returned from different beacons could be distinguished by means of their RFID characteristics. Alternatively or in addition, the antenna of each beacon could be designed to radiate in a unique pattern that could be identified by the navigation system. 
Also, alternatively or in addition, sets of identical beacons could be deployed in unique configurations such that the navigation system could identify their unique combinations of radio-frequency reflections as an alternative to leveraging the uniqueness of the RFID tags. The degree of dimensional accuracy would depend not only on the locations of the beacons but also on the number of beacon signals received, the number of samples of each signal, the motion of the robot, and the time intervals between samples. At one extreme, a single sample of the return signal from a single beacon could be used to determine the distance from that beacon and hence to determine that the robot is located somewhere on a sphere, the radius of which equals that distance and the center of which lies at the beacon. In a less extreme example, the three-dimensional position of the robot could be determined with fair precision from a single sample of the signal from each of three beacons. In intermediate cases, position estimates could be refined and/or position ambiguities could be resolved by use of supplementary readings of an altimeter and other instruments aboard the robot.
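
    The ranging and position-fixing steps described above can be illustrated with a short sketch: range from round-trip propagation time, then planar trilateration from three or more beacon ranges (a known altitude is assumed here, sidestepping the mirror ambiguity that the text resolves with an altimeter).

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_round_trip):
    """Beacon distance from a radar-style round-trip propagation time."""
    return 0.5 * C * t_round_trip

def trilaterate_2d(beacons, ranges):
    """Planar position from >= 3 beacon ranges (altitude assumed known).
    Subtracting the first sphere equation from the others linearizes the
    problem, which is then solved by least squares."""
    b = np.asarray(beacons, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (b[1:] - b[0])
    rhs = (r[0]**2 - r[1:]**2
           + np.sum(b[1:]**2, axis=1) - np.sum(b[0]**2))
    return np.linalg.lstsq(A, rhs, rcond=None)[0]
```

With more than three beacons the least-squares solution also averages down independent ranging errors, matching the text's point that accuracy grows with the number of beacon signals and samples.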

  5. BatSLAM: Simultaneous localization and mapping using biomimetic sonar.

    PubMed

    Steckel, Jan; Peremans, Herbert

    2013-01-01

    We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building.

  6. BatSLAM: Simultaneous Localization and Mapping Using Biomimetic Sonar

    PubMed Central

    Steckel, Jan; Peremans, Herbert

    2013-01-01

    We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building. PMID:23365647

  7. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    PubMed

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data and unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and ℓ1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (symmetric and positive semi-definite) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning on various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
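
    As a rough illustration of approximating a kernel matrix by a low-rank symmetric positive semi-definite matrix, the sketch below uses a truncated eigendecomposition rather than the paper's nuclear-norm/ℓ1 solver; the RBF kernel and its parameters are assumptions for the example.

```python
import numpy as np

# Illustrative low-rank PSD approximation of a kernel (Gram) matrix via a
# truncated eigendecomposition. This is a simple stand-in for the paper's
# nuclear-norm/l1-norm optimization, not its algorithm.
def lowrank_psd_approx(K, rank):
    w, V = np.linalg.eigh(K)       # K symmetric -> real eigenpairs
    w = np.clip(w, 0.0, None)      # enforce positive semi-definiteness
    idx = np.argsort(w)[::-1][:rank]  # keep the top-`rank` modes
    return (V[:, idx] * w[idx]) @ V[:, idx].T

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
```

The truncated matrix keeps the dominant orthogonal basis directions; measurements that project mostly onto the discarded small-eigenvalue modes (typically noise and inconsistencies) are suppressed.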

  8. Navigation system for autonomous mapper robots

    NASA Astrophysics Data System (ADS)

    Halbach, Marc; Baudoin, Yvan

    1993-05-01

    This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database, representing a high-level map of the environment, is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, other sensors are expected to be integrated, and a priori knowledge is incorporated. The ultrasonic sensors are controlled by the path planning module. The third part concerns path planning, and a simulation of a wheeled robot is also presented.
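
    The obstacle/free-space probabilistic classification grid can be sketched with a standard log-odds occupancy update along one ultrasonic beam; the sensor-model constants below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Log-odds increments for the inverse sensor model (assumed values):
# cells the echo traversed are likely free; the echo cell is likely occupied.
L_FREE, L_OCC = -0.4, 0.85

def update_ray(logodds, hit_index):
    """Fuse one ultrasonic reading into the cells along its beam."""
    logodds[:hit_index] += L_FREE       # free space before the echo
    if hit_index < len(logodds):
        logodds[hit_index] += L_OCC     # cell that returned the echo
    return logodds

def probability(logodds):
    return 1.0 / (1.0 + np.exp(-logodds))

cells = np.zeros(10)                    # unknown terrain: p = 0.5 everywhere
for _ in range(3):                      # three consistent readings
    cells = update_ray(cells, hit_index=6)
p = probability(cells)
print(p[0] < 0.5 < p[6])                # free before the echo, occupied at it
```

    Repeated consistent readings sharpen the classification, while cells beyond the echo stay at 0.5, i.e. unknown, which is the temporal-fusion behavior the grid is meant to provide.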

  9. The Evolution of Computer-Assisted Total Hip Arthroplasty and Relevant Applications.

    PubMed

    Chang, Jun-Dong; Kim, In-Sung; Bhardwaj, Atul M; Badami, Ramachandra N

    2017-03-01

    In total hip arthroplasty (THA), accurate positioning of implants is the key to achieving a good clinical outcome. Computer-assisted orthopaedic surgery (CAOS) has been developed for more accurate positioning of implants during THA. CAOS systems for THA may be passive, semi-active, or active. Navigation is a passive system that only provides information and guidance to the surgeon. There are three types of navigation: imageless navigation, computed tomography (CT)-based navigation, and fluoroscopy-based navigation. In imageless navigation, a new method of registration was introduced that does not require registering the anterior pelvic plane. CT-based navigation can efficiently use the functional pelvic plane in the supine position, which adjusts the anterior pelvic plane's sagittal tilt, as the reference for targeting cup orientation. Robot-assisted systems can be either active or semi-active. An active robotic system performs the preparation for implant positioning as programmed preoperatively. It had been used only for femoral implant cavity preparation; recently, a program for cup positioning was also developed. Alternatively, for ease of surgeon acceptance, semi-active robotic systems have been developed. These were initially applied only to cup positioning, but with the development of enhanced femoral workflows they can now be used to position both cup and stem. Though there have been substantial advancements in computer-assisted THA, its use remains controversial at present due to the steep learning curve, intraoperative technical issues, and high cost. However, as the technology continues to evolve rapidly, CAOS will enable the surgeon to operate more accurately and should lead to improved outcomes in THA.

  10. An egocentric vision based assistive co-robot.

    PubMed

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input for requesting the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who lack the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task within a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object, after which the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design for engaging the human in the loop.

  11. [Principles of MR-guided interventions, surgery, navigation, and robotics].

    PubMed

    Melzer, A

    2010-08-01

    The application of magnetic resonance imaging (MRI) as an imaging technique in interventional and surgical procedures provides a new dimension of soft-tissue-oriented, precise procedures without exposure to ionizing radiation or to nephrotoxic, allergenic, iodine-containing contrast agents. The technical capabilities of MRI in combination with interventional devices and systems, navigation, and robotics are discussed.

  12. Non-destructive inspection in industrial equipment using robotic mobile manipulation

    NASA Astrophysics Data System (ADS)

    Maurtua, Iñaki; Susperregi, Loreto; Ansuategui, Ander; Fernández, Ane; Ibarguren, Aitor; Molina, Jorge; Tubio, Carlos; Villasante, Cristobal; Felsch, Torsten; Pérez, Carmen; Rodriguez, Jorge R.; Ghrissi, Meftah

    2016-05-01

    The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment arranged either horizontally (using ground robots) or vertically (using climbing robots). The industrial objective has been to provide a means of measuring several physical parameters at multiple points with autonomous robots able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extension (e.g., a 50 MW thermal solar plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes, and a 140 m tall tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper, two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, localization, and planning algorithms to manage navigation over huge extensions, and (2) non-destructive inspection operations: thermography-based detection algorithms that provide automatic inspection abilities to the robots.

  13. Modular Countermine Payload for Small Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman Herman; Doug Few; Roelof Versteeg

    2010-04-01

    Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.

  14. Modular countermine payload for small robots

    NASA Astrophysics Data System (ADS)

    Herman, Herman; Few, Doug; Versteeg, Roelof; Valois, Jean-Sebastien; McMahill, Jeff; Licitra, Michael; Henciak, Edward

    2010-04-01

    Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
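
    A platform-agnostic robot interface like the PCU's can be sketched as an adapter pattern: mission code is written against an abstract contract, and a thin per-platform adapter maps the generic commands onto each vehicle. The class and method names below are hypothetical illustrations, not the actual PCU API.

```python
from abc import ABC, abstractmethod

class RobotInterface(ABC):
    """Contract the payload controller programs against, independent
    of the underlying vehicle (Talon, Packbot, ...)."""
    @abstractmethod
    def drive(self, linear, angular): ...
    @abstractmethod
    def odometry(self): ...

class TalonAdapter(RobotInterface):
    """Hypothetical adapter mapping generic commands onto one platform."""
    def __init__(self):
        self.pose = [0.0, 0.0]
    def drive(self, linear, angular):
        self.pose[0] += linear          # toy kinematics for illustration
        self.pose[1] += angular
    def odometry(self):
        return tuple(self.pose)

def sweep_and_mark(robot: RobotInterface, steps):
    """Mission code that never touches platform specifics."""
    for _ in range(steps):
        robot.drive(1.0, 0.0)
    return robot.odometry()

print(sweep_and_mark(TalonAdapter(), 3))   # -> (3.0, 0.0)
```

    Porting the payload to a new vehicle then means writing one new adapter class, not re-engineering the mission software, which is the deployment property the record describes.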

  15. An Analysis of Navigation Algorithms for Smartphones Using J2ME

    NASA Astrophysics Data System (ADS)

    Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.

    Embedded systems are considered one of the most promising areas for future innovation. Two embedded fields that will almost certainly take a primary role in future innovations are mobile robotics and mobile computing. Mobile robots and smartphones are growing in number and functionality, becoming a presence in our daily lives. In this paper, we study the current feasibility of using a smartphone to execute navigation algorithms. As a test case, we use a smartphone to control an autonomous mobile robot. We tested three navigation problems: mapping, localization, and path planning. For each of these problems, an algorithm was chosen, developed in J2ME, and tested in the field. Results show the current capacity of mobile Java for executing computationally demanding algorithms and reveal the real possibility of using smartphones for autonomous navigation.

  16. Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri

    2002-01-01

    The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or to operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation scheme. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.

  17. Navigation of a care and welfare robot

    NASA Astrophysics Data System (ADS)

    Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu

    2005-12-01

    In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions. Therefore, the robot needs high controllability and the capability to make precise movements. Our robot can recognize a line by using cameras and can be controlled in the reference directions by comparison with original cell-map information; furthermore, it moves safely on the basis of an original center-line established permanently in the building. Communication between the robot and a centralized control center enables the robot's autonomous movement in the hospital. Through a navigation system using cell-map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.

  18. A spatial registration method for navigation system combining O-arm with spinal surgery robot

    NASA Astrophysics Data System (ADS)

    Bai, H.; Song, G. L.; Zhao, Y. W.; Liu, X. Z.; Jiang, Y. X.

    2018-05-01

    Minimally invasive spinal surgery has become increasingly popular in recent years, as it reduces the chance of post-operative complications. However, the procedure of spinal surgery is complicated, and the surgical view in minimally invasive surgery is limited. In order to increase the quality of percutaneous pedicle screw placement, the O-arm, a mobile intraoperative imaging system, is used to assist surgery. With the extensive use of the O-arm, robot navigation systems combined with it are also increasing. One of the major problems in a surgical navigation system is associating the patient space with the intra-operative image space. This study proposes a spatial registration method for a spinal surgical robot navigation system, which uses the O-arm to scan a calibration phantom containing metal calibration spheres. First, the metal artifacts were reduced in the CT slices, and then the circles in the images were identified based on moment invariants. From these, the positions of the calibration spheres in the image space were obtained. The registration matrix was then computed using the ICP algorithm. Finally, the position error was calculated to verify the feasibility and accuracy of the registration method.
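
    The core least-squares step that ICP iterates (and that suffices on its own once sphere correspondences are known) can be sketched as follows; the sphere positions and the transform below are synthetic, not data from the study.

```python
import numpy as np

def register(img_pts, pat_pts):
    """Rigid registration of corresponding 3xN point sets (SVD-based
    least squares): returns R, t with pat ~ R @ img + t, plus RMS error."""
    ci = img_pts.mean(axis=1, keepdims=True)
    cp = pat_pts.mean(axis=1, keepdims=True)
    H = (img_pts - ci) @ (pat_pts - cp).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    err = np.sqrt(np.mean(np.sum((R @ img_pts + t - pat_pts) ** 2, axis=0)))
    return R, t, err

# Synthetic check: four sphere centers under a known rigid transform.
rng = np.random.default_rng(2)
img = rng.uniform(-50, 50, size=(3, 4))            # mm, image space
c, s = np.cos(0.2), np.sin(0.2)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[10.0], [-5.0], [2.0]])
pat = R_true @ img + t_true                        # mm, patient/robot space
R, t, err = register(img, pat)
print(err < 1e-9 and np.allclose(R, R_true))
```

    The residual `err` plays the role of the position error the study computes to verify registration accuracy; with noiseless correspondences it is numerically zero.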

  19. Reactive, Safe Navigation for Lunar and Planetary Robots

    NASA Technical Reports Server (NTRS)

    Utz, Hans; Ruland, Thomas

    2008-01-01

    When humans return to the Moon, astronauts will be accompanied by robotic helpers. Enabling robots to operate safely near astronauts on the lunar surface has the potential to significantly improve the efficiency of crew surface operations. Safely operating robots in close proximity to astronauts requires reactive obstacle avoidance capabilities not available on existing planetary robots. In this paper we present work on safe, reactive navigation using a stereo-based high-speed terrain analysis and obstacle avoidance system. Advances in the design of the algorithms allow the system to run terrain analysis and obstacle avoidance at full frame rate (30 Hz) on off-the-shelf hardware. The results of this analysis are fed into a fast, reactive path selection module, enforcing the safety of the chosen actions. The key components of the system are discussed and test results are presented.

  20. Autonomous navigation system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
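
    A minimal sketch of the event-horizon loop described above, with all proportions and constants chosen for illustration rather than taken from the patent:

```python
def navigation_step(trans_vel, rot_vel, ranges,
                    horizon_time=1.5, max_speed=1.0, speed_factor=0.8):
    """One pass through the event-timing loop: define the event horizon
    from current velocity, test for intrusion, adjust velocities.
    All numeric constants are assumptions for illustration."""
    event_horizon = trans_vel * horizon_time   # distance covered soon
    nearest = min(ranges)
    if nearest < event_horizon:
        # Intrusion: rotational velocity becomes a proportion of itself
        # reduced by a proportion of the nearest range; translational
        # velocity becomes a proportion of the nearest range.
        rot_vel = 0.6 * rot_vel - 0.2 * nearest
        trans_vel = 0.5 * nearest
    else:
        # No intrusion: cruise at a ratio of a speed factor to max speed.
        trans_vel = speed_factor * max_speed
    return trans_vel, rot_vel

v, w = navigation_step(1.0, 0.4, ranges=[0.6, 2.0, 3.1])
print(v < 0.8 and w < 0.4)   # obstacle inside the horizon: both reduced
```

    With the nearest range (0.6 m) inside the 1.5 m event horizon, the robot slows and damps its turn; with all ranges clear, the same call restores cruising speed.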

  1. A Sustained Proximity Network for Multi-Mission Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Soloff, Jason A.; Noreen, Gary; Deutsch, Leslie; Israel, David

    2005-01-01

    The Vision for Space Exploration calls for an aggressive sequence of robotic missions beginning in 2008 to prepare for a human return to the Moon by 2020, with the goal of establishing a sustained human presence beyond low Earth orbit. A key enabler of exploration is reliable, available communication and navigation capabilities to support both human and robotic missions. An adaptable, sustainable communication and navigation architecture has been developed by Goddard Space Flight Center and the Jet Propulsion Laboratory to support human and robotic lunar exploration through the next two decades. A key component of the architecture is scalable deployment, with the infrastructure evolving as needs emerge, allowing NASA and its partner agencies to deploy an interoperable communication and navigation system in an evolutionary way, enabling cost-effective, highly adaptable systems throughout the lunar exploration program.

  2. High-frequency imaging radar for robotic navigation and situational awareness

    NASA Astrophysics Data System (ADS)

    Thomas, David J.; Luo, Changan; Knox, Robert

    2011-05-01

    With high-frequency radar components increasingly available, imaging radar for mobile robotic applications is now practical. Navigation, ODOA, situational awareness, and safety applications can be supported in small, lightweight packages. Radar has the additional advantage of being able to sense through aerosols, smoke, and dust that can be difficult for many optical systems. The ability to directly measure the range rate of an object is also an advantage in radar applications. This paper explores the applicability of high-frequency imaging radar to mobile robotics and examines a W-band 360-degree imaging radar prototype. Indoor and outdoor performance data are analyzed and evaluated for applicability to navigation and situational awareness.
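
    The direct range-rate measurement mentioned above follows from the Doppler relation v_r = f_d * c / (2 * f_0). A quick numeric check, with an assumed W-band carrier and an illustrative Doppler shift:

```python
# Range rate from Doppler shift: v_r = f_d * c / (2 * f0).
C = 299_792_458.0                # speed of light, m/s

def range_rate(doppler_hz, carrier_hz):
    """Radial (closing) speed implied by a measured Doppler shift."""
    return doppler_hz * C / (2.0 * carrier_hz)

f0 = 94e9                        # a typical W-band carrier (assumed)
fd = 6.27e3                      # example Doppler shift, Hz
print(round(range_rate(fd, f0), 1))   # roughly 10 m/s closing speed
```

    The factor of two reflects the round trip: the target's motion shifts both the incident and the reflected wave.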

  3. IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.

    PubMed

    Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter

    2017-01-01

    IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table and the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a smartphone-programmed robotic platform that can navigate around a typical home or care environment, avoiding obstacles and positioning itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance, and object-based sonar navigation.

  4. Behavioral Mapless Navigation Using Rings

    NASA Technical Reports Server (NTRS)

    Monroe, Randall P.; Miller, Samuel A.; Bradley, Arthur T.

    2012-01-01

    This paper presents work on the development and implementation of a novel approach to robotic navigation. In this system, map-building and localization for obstacle avoidance are discarded in favor of moment-by-moment behavioral processing of the sonar sensor data. To accomplish this, we developed a network of behaviors that communicate through the passing of rings, data structures that are similar in form to the sonar data itself and express the decisions of each behavior. Through the use of these rings, behaviors can moderate each other, conflicting impulses can be mediated, and designers can easily connect modules to create complex emergent navigational techniques. We discuss the development of a number of these modules and their successful use as a navigation system in the Trinity omnidirectional robot.
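
    A toy version of the ring-passing idea: each behavior reads a ring of per-direction desirability scores, adjusts it, and passes it on, so conflicting impulses are mediated by simple elementwise moderation. The ring size, behaviors, and weightings below are assumptions for illustration, not the Trinity robot's modules.

```python
import numpy as np

N = 16                                   # sonar directions (assumed)

def avoid(ring, sonar):
    """Penalize headings with close echoes (ranges clipped to [0, 1] m)."""
    return ring * np.clip(sonar / 1.0, 0.0, 1.0)

def seek(ring, goal_dir):
    """Prefer headings near the goal direction, wrapping around the ring."""
    idx = np.arange(N)
    dist = np.minimum(np.abs(idx - goal_dir), N - np.abs(idx - goal_dir))
    return ring * (1.0 - dist / N)

sonar = np.full(N, 3.0)
sonar[4] = 0.2                           # obstacle directly toward the goal
ring = np.ones(N)                        # start with all headings equal
for behavior in (lambda r: avoid(r, sonar), lambda r: seek(r, 4)):
    ring = behavior(ring)                # each behavior passes the ring on
heading = int(np.argmax(ring))
print(heading != 4)                      # a neighboring heading wins
```

    Because every behavior speaks the same ring "language", new modules can be chained in without changing the others, which is the composability the paper emphasizes.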

  5. Perception system and functions for autonomous navigation in a natural environment

    NASA Technical Reports Server (NTRS)

    Chatila, Raja; Devy, Michel; Lacroix, Simon; Herrb, Matthieu

    1994-01-01

    This paper presents the approach, algorithms, and processes we developed for the perception system of a cross-country autonomous robot. After a presentation of the tele-programming context we favor for intervention robots, we introduce an adaptive navigation approach well suited to the characteristics of complex natural environments. This approach led us to develop a heterogeneous perception system that manages several different terrain representations. The perception functionalities required during navigation are listed, along with the corresponding representations we consider. The main perception processes we developed are presented; they are integrated within an on-board control architecture. First results of an ambitious experiment currently underway at LAAS are then presented.

  6. The Evolution of Computer-Assisted Total Hip Arthroplasty and Relevant Applications

    PubMed Central

    Kim, In-Sung; Bhardwaj, Atul M.; Badami, Ramachandra N.

    2017-01-01

    In total hip arthroplasty (THA), accurate positioning of implants is the key to achieving a good clinical outcome. Computer-assisted orthopaedic surgery (CAOS) has been developed for more accurate positioning of implants during THA. CAOS systems for THA may be passive, semi-active, or active. Navigation is a passive system that only provides information and guidance to the surgeon. There are three types of navigation: imageless navigation, computed tomography (CT)-based navigation, and fluoroscopy-based navigation. In imageless navigation, a new method of registration was introduced that does not require registering the anterior pelvic plane. CT-based navigation can efficiently use the functional pelvic plane in the supine position, which adjusts the anterior pelvic plane's sagittal tilt, as the reference for targeting cup orientation. Robot-assisted systems can be either active or semi-active. An active robotic system performs the preparation for implant positioning as programmed preoperatively. It had been used only for femoral implant cavity preparation; recently, a program for cup positioning was also developed. Alternatively, for ease of surgeon acceptance, semi-active robotic systems have been developed. These were initially applied only to cup positioning, but with the development of enhanced femoral workflows they can now be used to position both cup and stem. Though there have been substantial advancements in computer-assisted THA, its use remains controversial at present due to the steep learning curve, intraoperative technical issues, and high cost. However, as the technology continues to evolve rapidly, CAOS will enable the surgeon to operate more accurately and should lead to improved outcomes in THA. PMID:28316957

  7. Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development.

    PubMed

    Bostelman, Roger; Hong, Tsai; Legowik, Steven

    2016-01-01

    Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems.

  8. Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development

    PubMed Central

    Bostelman, Roger; Hong, Tsai; Legowik, Steven

    2017-01-01

    Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems. PMID:28690359

  9. Design and implementation of magnetically maneuverable capsule endoscope system with direction reference for image navigation.

    PubMed

    Sun, Zhen-Jun; Ye, Bo; Sun, Yi; Zhang, Hong-Hai; Liu, Sheng

    2014-07-01

    This article describes a novel magnetically maneuverable capsule endoscope system with direction reference for image navigation. This direction reference was employed by utilizing a specific magnet configuration between a pair of external permanent magnets and a magnetic shell coated on the external capsule endoscope surface. A pair of customized Cartesian robots, each with only 4 degrees of freedom, was built to hold the external permanent magnets as their end-effectors. These robots, together with their external permanent magnets, were placed on two opposite sides of a "patient bed." Because of the optimized configuration based on magnetic analysis between the external permanent magnets and the magnetic shell, a simplified control strategy was proposed, and only two parameters, yaw step angle and moving step, were necessary for the employed robotic system. Step-by-step experiments demonstrated that the proposed system is capable of magnetically maneuvering the capsule endoscope while providing direction reference for image navigation. © IMechE 2014.

  10. Real-time adaptive off-road vehicle navigation and terrain classification

    NASA Astrophysics Data System (ADS)

    Muller, Urs A.; Jackel, Lawrence D.; LeCun, Yann; Flepp, Beat

    2013-05-01

    We are developing a complete, self-contained autonomous navigation system for mobile robots that learns quickly, uses commodity components, and has the added benefit of emitting no radiation signature. It builds on the autonomous navigation technology developed by Net-Scale and New York University during the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program and takes advantage of recent scientific advancements achieved during the DARPA Deep Learning program. In this paper we present our approach and algorithms, show results from our vision system, discuss lessons learned from the past, and present our plans for further advancing vehicle autonomy.

  11. Bio-robots automatic navigation with electrical reward stimulation.

    PubMed

    Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    Bio-robots controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on real-time guidance by human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, ignoring the animals' own intelligence. This paper proposes a new method to realize automatic navigation for bio-robots using electrical micro-stimulation as real-time reward. Owing to the reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the right route with rewards and corrects its direction spontaneously when rewards are withheld. In navigation experiments, rat-robots learned the control scheme in a short time. The results show that our method simplifies the control logic and successfully realizes automatic navigation for rat-robots. Our work might have significant implications for the further development of bio-robots with hybrid intelligence.

  12. Robotic digital subtraction angiography systems within the hybrid operating room.

    PubMed

    Murayama, Yuichi; Irie, Koreaki; Saguchi, Takayuki; Ishibashi, Toshihiro; Ebara, Masaki; Nagashima, Hiroyasu; Isoshima, Akira; Arakawa, Hideki; Takao, Hiroyuki; Ohashi, Hiroki; Joki, Tatsuhiro; Kato, Masataka; Tani, Satoshi; Ikeuchi, Satoshi; Abe, Toshiaki

    2011-05-01

    Fully equipped high-end digital subtraction angiography (DSA) within the operating room (OR) environment has emerged as a new trend in the fields of neurosurgery and vascular surgery. To describe initial clinical experience with a robotic DSA system in the hybrid OR. A newly designed robotic DSA system (Artis zeego; Siemens AG, Forchheim, Germany) was installed in the hybrid OR. The system consists of a multiaxis robotic C arm and surgical OR table. In addition to conventional neuroendovascular procedures, the system was used as an intraoperative imaging tool for various neurosurgical procedures such as aneurysm clipping and spine instrumentation. Five hundred one neurosurgical procedures were successfully conducted in the hybrid OR with the robotic DSA. During surgical procedures such as aneurysm clipping and arteriovenous fistula treatment, intraoperative 2-/3-dimensional angiography and C-arm-based computed tomographic images (DynaCT) were easily performed without moving the OR table. Newly developed virtual navigation software (syngo iGuide; Siemens AG) can be used in frameless navigation and in access to deep-seated intracranial lesions or needle placement. This newly developed robotic DSA system provides safe and precise treatment in the fields of endovascular treatment and neurosurgery.

  13. Navigation strategies for multiple autonomous mobile robots moving in formation

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1991-01-01

    The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
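
    The nearest-neighbor tracking strategy can be sketched as a small planar simulation in which each follower steers proportionally toward a fixed offset slot behind the robot ahead of it; the gains, offsets, and time step are illustrative, not values from the paper.

```python
import numpy as np

def step(positions, offsets, leader_vel, k=5.0, dt=0.1):
    """One control step: the leader moves freely; follower i steers
    proportionally toward its offset slot relative to robot i-1."""
    new = positions.copy()
    new[0] += leader_vel * dt
    for i in range(1, len(positions)):
        target = new[i - 1] + offsets[i]               # slot behind neighbor
        new[i] = positions[i] + k * (target - positions[i]) * dt
    return new

pos = np.array([[0.0, 0.0], [-3.0, 1.0], [-3.0, -1.0]])   # scattered start
off = np.array([[0.0, 0.0], [-1.0, 1.0], [-1.0, -1.0]])   # desired V-shape slots
for _ in range(400):
    pos = step(pos, off, leader_vel=np.array([1.0, 0.0]))
# Followers settle into their slots (up to a small steady-state lag).
print(np.allclose(pos[1] - pos[0], off[1], atol=0.3))
```

    The proportional law contracts the formation error each step (here by a factor of 1 - k*dt = 0.5), which mirrors the stability condition the paper develops for nearest-neighbor tracking.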

  14. 30 Years of Neurosurgical Robots: Review and Trends for Manipulators and Associated Navigational Systems.

    PubMed

    Smith, James Andrew; Jivraj, Jamil; Wong, Ronnie; Yang, Victor

    2016-04-01

    This review provides an examination of contemporary neurosurgical robots and the developments that led to them. Improvements in localization, microsurgery and minimally invasive surgery have made robotic neurosurgery viable, as seen by the success of platforms such as the CyberKnife and neuromate. Neurosurgical robots can now perform specific surgical tasks such as skull-base drilling and craniotomies, as well as pedicle screw and cochlear electrode insertions. Growth trends in neurosurgical robotics are likely to continue but may be tempered by concerns over recent surgical robot recalls, commercially-driven surgeon training, and studies that show operational costs for surgical robotic procedures are often higher than traditional surgical methods. We point out that addressing performance issues related to navigation-related registration is an active area of research and will aid in improving overall robot neurosurgery performance and associated costs.

  15. Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1987-01-01

    Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
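
    The reported linearity between zenith angle and image location is consistent with an equidistant fisheye projection, r = f·θ. A minimal sketch of that model and a least-squares calibration of the focal constant follows; the function names and sample values are hypothetical, not from the paper.

```python
def zenith_to_radius(theta, f):
    """Equidistant fisheye model: image radius grows linearly with zenith angle."""
    return f * theta

def radius_to_zenith(r, f):
    """Inverse mapping used when locating a target from its image position."""
    return r / f

def fit_focal(samples):
    """Fit f from (theta, radius) calibration samples by least squares
    through the origin (the linearity test in the paper suggests this form)."""
    num = sum(t * r for t, r in samples)
    den = sum(t * t for t, _ in samples)
    return num / den
```
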

  16. The JPL Serpentine Robot: A 12 DOF System for Inspection

    NASA Technical Reports Server (NTRS)

    Paljug, E.; Ohm, T.; Hayati, S.

    1995-01-01

    The Serpentine Robot is a prototype hyper-redundant (snake-like) manipulator system developed at the Jet Propulsion Laboratory. It is designed to navigate and perform tasks in obstructed and constrained environments in which conventional 6-DOF manipulators cannot function. The paper describes the robot's mechanical design, the joint assembly, a low-level inverse kinematic algorithm, control development, and applications.

  17. Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.

    1999-08-01

    A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination, or along a desired path, in an environment characterized by a terrain and a set of distinct objects such as obstacles and landmarks. The robot's autonomous navigation ability and road-following precision are mainly determined by its control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots because of their robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is the identification and classification of its current perceptual environment. In this paper, a new approach to identifying and classifying features of the current perceptual environment, based on a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.

  18. Hierarchical HMM based learning of navigation primitives for cooperative robotic endovascular catheterization.

    PubMed

    Rafii-Tari, Hedyeh; Liu, Jindong; Payne, Christopher J; Bicknell, Colin; Yang, Guang-Zhong

    2014-01-01

    Despite increased use of remote-controlled steerable catheter navigation systems for endovascular intervention, most current designs are based on master configurations which tend to alter natural operator tool interactions. This introduces problems to both ergonomics and shared human-robot control. This paper proposes a novel cooperative robotic catheterization system based on learning-from-demonstration. By encoding the higher-level structure of a catheterization task as a sequence of primitive motions, we demonstrate how to achieve prospective learning for complex tasks whilst incorporating subject-specific variations. A hierarchical Hidden Markov Model is used to model each movement primitive as well as their sequential relationship. This model is applied to generation of motion sequences, recognition of operator input, and prediction of future movements for the robot. The framework is validated by comparing catheter tip motions against the manual approach, showing significant improvements in the quality of catheterization. The results motivate the design of collaborative robotic systems that are intuitive to use, while reducing the cognitive workload of the operator.
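
    The idea of encoding a task's higher-level structure as a sequence of motion primitives with learned transition probabilities can be sketched as follows. The primitive names and probabilities below are invented for illustration and are not from the paper, which learns them from operator demonstrations with a hierarchical HMM.

```python
import random

# Hypothetical catheter motion primitives and transition probabilities
# (illustrative values only).
PRIMITIVES = ["advance", "retract", "rotate", "hold"]
TRANS = {
    "advance": {"advance": 0.5, "rotate": 0.3, "hold": 0.2},
    "rotate":  {"advance": 0.6, "rotate": 0.2, "hold": 0.2},
    "retract": {"advance": 0.4, "rotate": 0.3, "retract": 0.3},
    "hold":    {"advance": 0.7, "retract": 0.2, "hold": 0.1},
}

def sample_sequence(start, length, rng):
    """Generate a primitive sequence from the top-level transition model,
    as a generator of motion sequences for the robot might."""
    seq, state = [start], start
    for _ in range(length - 1):
        states = list(TRANS[state])
        weights = [TRANS[state][s] for s in states]
        state = rng.choices(states, weights)[0]
        seq.append(state)
    return seq
```

    The same transition model can score an observed primitive sequence, which is the basis for recognizing operator input and predicting the next movement.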

  19. Robotic air vehicle. Blending artificial intelligence with conventional software

    NASA Technical Reports Server (NTRS)

    Mcnulty, Christa; Graham, Joyce; Roewer, Paul

    1987-01-01

    The Robotic Air Vehicle (RAV) system is described. The program's objectives were to design, implement, and demonstrate cooperating expert systems for piloting robotic air vehicles. The development of this system merges conventional programming used in passive navigation with Artificial Intelligence techniques such as voice recognition, spatial reasoning, and expert systems. The individual components of the RAV system are discussed as well as their interactions with each other and how they operate as a system.

  20. Intelligent navigation and accurate positioning of an assist robot in indoor environments

    NASA Astrophysics Data System (ADS)

    Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke

    2017-12-01

    A robot's navigation and accurate positioning in indoor environments are still challenging tasks, especially in applications that assist disabled and/or elderly people in museum or art-gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot to reach the goal location safely, imitating the supervisor's motions and positioning itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, using a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller trained with the conjugate gradient backpropagation algorithm gives a robust response and guides the mobile robot accurately to the goal position.

  1. Soldier-Based Assessment of a Dual-Row Tactor Display during Simultaneous Navigational and Robot-Monitoring Tasks

    DTIC Science & Technology

    2015-08-01

    by Gina Pomranky-Hartnett, Linda R Elliott, Bruce JP Mortimer, Greg R Mort, Rodger A Pettitt, and Gary A... (report period: 2014–31 March 2015)

  2. Ultra wide-band localization and SLAM: a comparative study for mobile robot navigation.

    PubMed

    Segura, Marcelo J; Auat Cheein, Fernando A; Toibero, Juan M; Mut, Vicente; Carelli, Ricardo

    2011-01-01

    In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Owing to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves a performance analysis of running a UWB localization-based system and a SLAM algorithm in parallel on a mobile robot navigating within an environment. Real-time results as well as an error analysis are also shown in this work.
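
    A minimal sketch of the two ingredients mentioned above: a TOA range estimate (range = c · flight time) and a linearized least-squares position fix from several UWB anchors. The 2-D setting and anchor layout are assumptions for illustration, not the paper's setup.

```python
C = 299_792_458.0  # speed of light, m/s

def toa_range(t_emit, t_arrive):
    """Range from a one-way time-of-arrival measurement."""
    return C * (t_arrive - t_emit)

def trilaterate(anchors, ranges):
    """2-D position fix from >= 3 anchors by linearizing the range equations:
    subtracting the first equation removes the quadratic terms."""
    x0, y0 = anchors[0]
    r0 = ranges[0]
    a, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        a.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations of the least-squares system directly.
    s11 = sum(ax * ax for ax, _ in a); s12 = sum(ax * ay for ax, ay in a)
    s22 = sum(ay * ay for _, ay in a)
    t1 = sum(ax * bi for (ax, _), bi in zip(a, b))
    t2 = sum(ay * bi for (_, ay), bi in zip(a, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

    The short UWB pulses matter because a nanosecond of timing error already corresponds to about 30 cm of range error.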

  3. Technological advances in robotic-assisted laparoscopic surgery.

    PubMed

    Tan, Gerald Y; Goel, Raj K; Kaouk, Jihad H; Tewari, Ashutosh K

    2009-05-01

    In this article, the authors describe the evolution of urologic robotic systems and the current state-of-the-art features and existing limitations of the da Vinci S HD System (Intuitive Surgical, Inc.). They then review promising innovations in scaling down the footprint of robotic platforms, the early experience with mobile miniaturized in vivo robots, advances in endoscopic navigation systems using augmented reality technologies and tracking devices, the emergence of technologies for robotic natural orifice transluminal endoscopic surgery and single-port surgery, advances in flexible robotics and haptics, the development of new virtual reality simulator training platforms compatible with the existing da Vinci system, and recent experiences with remote robotic surgery and telestration.

  4. Indoor Positioning System Using Magnetic Field Map Navigation and an Encoder System

    PubMed Central

    Kim, Han-Sol; Seo, Woojin; Baek, Kwang-Ryul

    2017-01-01

    In the indoor environment, variation of the magnetic field is caused by building structures, and magnetic field map navigation is based on this feature. In order to estimate position using this navigation, a three-axis magnetic field must be measured at every point to build a magnetic field map. After the magnetic field map is obtained, the position of the mobile robot can be estimated with a likelihood function that compares the measured magnetic field data with the magnetic field map. However, if only magnetic field map navigation is used, the estimated position can have large errors. In order to improve performance, we propose a particle filter system that integrates magnetic field map navigation and an encoder system. In this paper, multiple magnetic sensors and three magnetic field maps (a horizontal intensity map, a vertical intensity map, and a direction information map) are used to update the weights of particles. As a result, the proposed system estimates the position and orientation of a mobile robot more accurately than previous systems, and performance improves as the number of magnetic sensors increases. Finally, experimental results from the implemented and evaluated system are presented. PMID:28327513
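
    The predict/update cycle described above can be sketched as follows. The scalar field map, motion model, and noise parameters are simplified assumptions for illustration; the paper fuses three maps and multiple sensors.

```python
import math
import random

def predict(particles, d, dtheta, rng, sigma_d=0.01, sigma_t=0.01):
    """Propagate each particle (x, y, theta) with noisy encoder odometry."""
    out = []
    for x, y, th in particles:
        dn = d + rng.gauss(0, sigma_d)
        tn = th + dtheta + rng.gauss(0, sigma_t)
        out.append((x + dn * math.cos(tn), y + dn * math.sin(tn), tn))
    return out

def update_weights(particles, measured, field_map, sigma=1.0):
    """Weight particles by the Gaussian likelihood of the measured field
    intensity against the map's prediction at each particle's position."""
    w = []
    for x, y, _ in particles:
        expected = field_map(x, y)  # map lookup at the particle's position
        w.append(math.exp(-((measured - expected) ** 2) / (2 * sigma ** 2)))
    s = sum(w)
    return [wi / s for wi in w]
```

    Particles whose predicted field matches the measurement keep their weight; resampling then concentrates the filter on the consistent poses, which is why the encoder prediction step suppresses the large errors of map-only navigation.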

  5. Indoor Positioning System Using Magnetic Field Map Navigation and an Encoder System.

    PubMed

    Kim, Han-Sol; Seo, Woojin; Baek, Kwang-Ryul

    2017-03-22

    In the indoor environment, variation of the magnetic field is caused by building structures, and magnetic field map navigation is based on this feature. In order to estimate position using this navigation, a three-axis magnetic field must be measured at every point to build a magnetic field map. After the magnetic field map is obtained, the position of the mobile robot can be estimated with a likelihood function that compares the measured magnetic field data with the magnetic field map. However, if only magnetic field map navigation is used, the estimated position can have large errors. In order to improve performance, we propose a particle filter system that integrates magnetic field map navigation and an encoder system. In this paper, multiple magnetic sensors and three magnetic field maps (a horizontal intensity map, a vertical intensity map, and a direction information map) are used to update the weights of particles. As a result, the proposed system estimates the position and orientation of a mobile robot more accurately than previous systems, and performance improves as the number of magnetic sensors increases. Finally, experimental results from the implemented and evaluated system are presented.

  6. Adding navigation, artificial audition and vital sign monitoring capabilities to a telepresence mobile robot for remote home care applications.

    PubMed

    Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois

    2017-07-01

    A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable cost. What is missing to make them effective remote telepresence systems for home care assistance are the capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors, in other words, to minimize cognitive load and maximize situation awareness. This paper describes our approach to adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture that integrates the autonomous and teleoperation capabilities of the platform.

  7. Ratbot automatic navigation by electrical reward stimulation based on distance measurement in unknown environments.

    PubMed

    Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Traditional automatic navigation methods for bio-robots are constrained to pre-configured environments and thus cannot be applied to tasks in unknown environments. By treating bio-robots the same way as mechanical robots, with no consideration of the animal's own innate abilities, those methods neglect the intelligent behavior of animals. This paper proposes a novel automatic navigation method for a ratbot in unknown environments using only reward stimulation and distance measurement. By exploiting the rat's habit of thigmotaxis and its reward-seeking behavior, the method incorporates the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that the method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work might lay a solid foundation for applications of ratbots and also has significant implications for the automatic navigation of other bio-robots.

  8. New Control Paradigms for Resources Saving: An Approach for Mobile Robots Navigation.

    PubMed

    Socas, Rafael; Dormido, Raquel; Dormido, Sebastián

    2018-01-18

    In this work, an event-based control scheme is presented. The proposed system has been developed to solve control problems appearing in the field of Networked Control Systems (NCS). Several models and methodologies are proposed to measure the consumption of different resources; the use of bandwidth, computational load, and energy is investigated. This analysis shows how the parameters of the system impact resource efficiency. Moreover, the proposed system is compared with its equivalent discrete-time solution. In the experiments, an NCS application for mobile robot navigation is set up and its resource-usage efficiency is analysed.
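
    The event-based principle, transmitting a new control value only when the tracking error leaves a tolerance band, can be sketched as follows. The proportional law, gain, and threshold are illustrative assumptions, not the controller from the paper.

```python
def event_triggered_run(setpoint, readings, threshold, gain=0.8):
    """Recompute and transmit the control value only when the error
    exceeds the threshold; otherwise hold the last transmitted value."""
    u, sent = 0.0, 0
    outputs = []
    for y in readings:
        error = setpoint - y
        if abs(error) > threshold:   # event condition triggers a transmission
            u = gain * error
            sent += 1
        outputs.append(u)            # zero-order hold between events
    return outputs, sent
```

    Counting `sent` against the length of `readings` is exactly the kind of bandwidth-saving measurement a comparison with a periodic discrete-time controller rests on: the periodic controller would transmit on every sample.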

  9. New Control Paradigms for Resources Saving: An Approach for Mobile Robots Navigation

    PubMed Central

    2018-01-01

    In this work, an event-based control scheme is presented. The proposed system has been developed to solve control problems appearing in the field of Networked Control Systems (NCS). Several models and methodologies are proposed to measure the consumption of different resources; the use of bandwidth, computational load, and energy is investigated. This analysis shows how the parameters of the system impact resource efficiency. Moreover, the proposed system is compared with its equivalent discrete-time solution. In the experiments, an NCS application for mobile robot navigation is set up and its resource-usage efficiency is analysed. PMID:29346321

  10. Using sensor habituation in mobile robots to reduce oscillatory movements in narrow corridors.

    PubMed

    Chang, Carolina

    2005-11-01

    Habituation is a form of nonassociative learning observed in a variety of species of animals. Arguably, it is the simplest form of learning. Nonetheless, the ability to habituate to certain stimuli implies plastic neural systems and adaptive behaviors. This paper describes how computational models of habituation can be applied to real robots. In particular, we discuss the problem of the oscillatory movements observed when a Khepera robot navigates through narrow hallways using a biologically inspired neurocontroller. Results show that habituation to the proximity of the walls can lead to smoother navigation. Habituation to sensory stimulation to the sides of the robot does not interfere with the robot's ability to turn at dead ends and to avoid obstacles outside the hallway. This paper shows that simple biological mechanisms of learning can be adapted to achieve better performance in real mobile robots.
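
    A habituation dynamic of this kind, where response efficacy decays under sustained stimulation and recovers in its absence, can be sketched with a simple first-order model. This is a simplified variant with illustrative parameters, not the neurocontroller used in the paper.

```python
def habituate(stimulus, y0=1.0, alpha=0.1, tau=5.0):
    """Discrete habituation dynamics: efficacy y decays while the stimulus
    persists (e.g. constant wall proximity) and recovers toward y0 when
    the stimulus is absent."""
    y, trace = y0, []
    for s in stimulus:
        # alpha*(y0 - y): recovery toward baseline; s*y: decay under input.
        y += (alpha * (y0 - y) - s * y) / tau
        trace.append(y)
    return trace
```

    Feeding the robot's lateral proximity readings through such a gain would progressively damp the side-sensor response in a narrow corridor, which is the mechanism proposed for smoothing the oscillatory wall-to-wall movements.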

  11. Ultra Wide-Band Localization and SLAM: A Comparative Study for Mobile Robot Navigation

    PubMed Central

    Segura, Marcelo J.; Auat Cheein, Fernando A.; Toibero, Juan M.; Mut, Vicente; Carelli, Ricardo

    2011-01-01

    In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulses length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimates the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel an UWB localization based system and a SLAM algorithm on a mobile robot navigating within an environment. Real time results as well as error analysis are also shown in this work. PMID:22319397

  12. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    NASA Astrophysics Data System (ADS)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors, has two arms, two wheels, and two claws, and is designed to observe, grasp, walk, roll, turn, rise, and descend. This paper is oriented toward 100% reliable obstacle detection and identification, and toward sensor fusion to increase the level of autonomy. An embedded computer based on the PC/104 bus serves as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed in Visual C++ implements the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype designed with careful consideration of mobility was built to inspect 500 kV power transmission lines. Experimental results demonstrate that the robot can execute the navigation and inspection tasks.

  13. Immune systems are not just for making you feel better: they are for controlling autonomous robots

    NASA Astrophysics Data System (ADS)

    Rosenblum, Mark

    2005-05-01

    The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.

  14. Kinematic analysis and simulation of a substation inspection robot guided by magnetic sensor

    NASA Astrophysics Data System (ADS)

    Xiao, Peng; Luan, Yiqing; Wang, Haipeng; Li, Li; Li, Jianxiang

    2017-01-01

    In order to improve the performance of the magnetic navigation system used by a substation inspection robot, the kinematic characteristics are analyzed based on a simplified magnetic guiding system model, and a simulation is then executed to verify the soundness of the analysis procedure. Finally, some suggestions are extracted that will help guide the design of future inspection robot systems.

  15. Multidisciplinary approach for developing a new robotic system for domiciliary assistance to elderly people.

    PubMed

    Cavallo, F; Aquilano, M; Bonaccorsi, M; Mannari, I; Carrozza, M C; Dario, P

    2011-01-01

    This paper aims to show the effectiveness of an inter- and multidisciplinary team, comprising technology developers, elderly-care organizations, and designers, in developing the ASTRO robotic system for domiciliary assistance to elderly people. The main issues presented in this work concern the improvement of the robot's behavior by means of a smart sensor network able to share information with the robot for localization and navigation, and the design of the robot's appearance and functionality through a substantial analysis of users' requirements and attitudes toward robotic technology, so as to improve acceptability and usability.

  16. Open core control software for surgical robots.

    PubMed

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Nowadays, patients and doctors in the operating room are surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (for example, a tumor removal procedure while checking the tumor location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but these robots are not currently open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would become possible through collaboration between surgical robots, various kinds of sensors, navigation systems, and so on. Moreover, most academic software development for surgical robots is "home-made" within the research institution and not open to the public. Open-source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from actuators, sensors, and various internal devices, and therefore cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused on various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation amid asynchronous data transactions over the network. 
    In the Open Core Control software, several techniques were introduced for this purpose. The virtual fixture is a well-known technique that acts as a "force guide", supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not itself part of the Open Core Control system; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and that area information can be transferred to the robot. The surgical console then generates a reflection force when the operator tries to leave the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture are described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaborative tasks. It is being developed to become a widely used platform for surgical robots. Safety is essential for the control software of such complex medical devices. 
    It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. Complying with these regulations requires a self-test environment, so a test environment is now under development to test various kinds of interference in the operating room, such as noise from an electric knife, taking into account safety and test-environment standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make contributions to the field.

  17. Interaction dynamics of multiple mobile robots with simple navigation strategies

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulations studies of two or more interacting robots.
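
    One simple homing and collision-avoidance strategy of the kind studied here can be sketched for the particle model as follows. The speed, repulsion law, and radius are illustrative assumptions, not the strategies derived in the article.

```python
import math

def homing_step(pos, goal, neighbors, speed=0.2, avoid_radius=1.0):
    """Head toward the goal; add a repulsive term for each neighbor inside
    the effective domain (a closed ball around the robot)."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy) or 1.0
    vx, vy = gx / d, gy / d          # unit homing direction
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        r = math.hypot(dx, dy)
        if 0 < r < avoid_radius:
            # Repulsion grows linearly as the neighbor gets closer.
            vx += (dx / r) * (avoid_radius - r)
            vy += (dy / r) * (avoid_radius - r)
    n = math.hypot(vx, vy) or 1.0
    return (pos[0] + speed * vx / n, pos[1] + speed * vy / n)
```

    With no neighbor in range the step is straight toward the goal; a neighbor inside the ball deflects the heading away from it, and the global behavior of several robots running this rule is what the simulations examine.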

  18. Trajectory and navigation system design for robotic and piloted missions to Mars

    NASA Technical Reports Server (NTRS)

    Thurman, S. W.; Matousek, S. E.

    1991-01-01

    Future Mars exploration missions, both robotic and piloted, may utilize Earth to Mars transfer trajectories that are significantly different from one another, depending upon the type of mission being flown and the time period during which the flight takes place. The use of new or emerging technologies for future missions to Mars, such as aerobraking and nuclear rocket propulsion, may yield navigation requirements that are much more stringent than those of past robotic missions, and are very difficult to meet for some trajectories. This article explores the interdependencies between the properties of direct Earth to Mars trajectories and the Mars approach navigation accuracy that can be achieved using different radio metric data types, such as ranging measurements between an approaching spacecraft and Mars orbiting relay satellites, or Earth based measurements such as coherent Doppler and very long baseline interferometry. The trajectory characteristics affecting navigation performance are identified, and the variations in accuracy that might be experienced over the range of different Mars approach trajectories are discussed. The results predict that three sigma periapsis altitude navigation uncertainties of 2 to 10 km can be achieved when a Mars orbiting satellite is used as a navigation aid.

  19. Learning Probabilistic Features for Robotic Navigation Using Laser Sensors

    PubMed Central

    Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377

  1. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1990-01-01

    A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.

  2. Evolutionary Fuzzy Control and Navigation for Two Wheeled Robots Cooperatively Carrying an Object in Unknown Environments.

    PubMed

    Juang, Chia-Feng; Lai, Min-Ge; Zeng, Wan-Ting

    2015-09-01

    This paper presents a method that allows two wheeled mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FCs) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot to perform OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms for the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.

  3. Bio-robots automatic navigation with graded electric reward stimulation based on Reinforcement Learning.

    PubMed

    Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Bio-robots based on brain-computer interfaces (BCI) suffer from a lack of consideration of the animal's own characteristics in navigation. This paper proposes a new method for bio-robots' automatic navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animal. Given graded electrical rewards, the animal, e.g. a rat, seeks the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge to an optimal route by co-learning. This work provides significant inspiration for the practical development of bio-robots' navigation with hybrid intelligence.
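The co-learning loop described above can be sketched as tabular Q-learning with a graded, progress-proportional reward. This is a hypothetical reconstruction, not the authors' implementation: the corridor world, the `max_stim` scale, and the regress penalty (included so the reward cannot be farmed by oscillating, a classic pitfall of progress-only shaping) are all assumptions.

```python
import random

def graded_reward(dist_before: int, dist_after: int, max_stim: float = 5.0) -> float:
    """Hypothetical graded stimulation: reward proportional to progress
    toward the goal; regress is penalized symmetrically."""
    progress = dist_before - dist_after
    return max(-max_stim, min(max_stim, progress * max_stim))

def q_learn_corridor(goal: int = 9, episodes: int = 300, alpha: float = 0.5,
                     gamma: float = 0.9, eps: float = 0.2, seed: int = 1):
    """Tabular Q-learning on a 10-cell corridor; action 0 moves left, 1 right.
    The graded reward steers exploration toward the goal, mimicking how the
    rat's reward-seeking and the RL reward generator co-learn a route."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(10)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps:
                a = rng.randrange(2)                 # explore
            else:
                a = 0 if q[s][0] >= q[s][1] else 1   # exploit
            s2 = max(0, min(9, s + (1 if a == 1 else -1)))
            r = graded_reward(abs(goal - s), abs(goal - s2))
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy moves toward the goal from every cell, which is the "optimal route" the co-learning converges to.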

  4. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  5. Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing

    PubMed Central

    Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori

    2017-01-01

    Large scale operational areas often require multiple service robots for coverage and task parallelism. In such scenarios, each robot keeps its individual map of the environment and serves specific areas of the map at different times. We propose a knowledge sharing mechanism for multiple robots in which one robot can inform other robots about changes in the map, such as path blockages or new static obstacles encountered in specific areas. This symbiotic information sharing allows the robots to update remote areas of the map without having to explicitly navigate those areas, and to plan efficient paths. A node representation of paths is presented for seamless sharing of blocked-path information. The transience of obstacles is modeled to track obstacles that might have been removed. A lazy information update scheme is presented in which only relevant information affecting the current task is updated, for efficiency. The advantages of the proposed method for path planning are discussed against a traditional method, with experimental results in both simulation and real environments. PMID:28678193
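One way to picture the sharing mechanism is a per-robot store of timestamped edge observations that is merged symbiotically between robots and queried lazily only along the current path. The class below is an illustrative sketch under those assumptions; the names and the conflict rule (newest timestamp wins) are hypothetical, not the authors' node representation:

```python
class SharedPathKnowledge:
    """Sketch of symbiotic map sharing: robots exchange
    (edge, blocked, timestamp) facts and keep the newest fact per edge."""

    def __init__(self):
        self.edges = {}  # (a, b) -> (blocked: bool, stamp: float)

    def observe(self, a, b, blocked, stamp):
        """Record a local observation; newer timestamps override older ones."""
        key = tuple(sorted((a, b)))
        old = self.edges.get(key)
        if old is None or stamp > old[1]:
            self.edges[key] = (blocked, stamp)

    def merge_from(self, other):
        """Symbiotic update: absorb another robot's newer observations
        without ever visiting the corresponding areas."""
        for (a, b), (blocked, stamp) in other.edges.items():
            self.observe(a, b, blocked, stamp)

    def path_clear(self, path):
        """Lazy check: consult only the edges on the candidate path."""
        return all(
            not self.edges.get(tuple(sorted((a, b))), (False, 0.0))[0]
            for a, b in zip(path, path[1:])
        )
```

A robot that never visited edge A-B can thus reject a planned route through it after a single merge from a teammate that observed the blockage.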

  6. Videoexoscopic real-time intraoperative navigation for spinal neurosurgery: a novel co-adaptation of two existing technology platforms, technical note.

    PubMed

    Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph

    2018-06-01

    Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point-reference tool because the projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive robotic BrightMatter Drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary but powerful capability of this system is the projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of the Medtronic and Synaptive platforms could permit visualization of reconstructed navigation and surgical field images simultaneously. By utilizing navigated instruments, this configuration can support live image-guided surgery, or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation and the MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive display. Surgical visualization was provided using a Storz Image S1 platform and camera mounted to the Synaptive robotic BrightMatter Drive. We successfully co-adapted both platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN. Disc shavers and trials were used under RTN for the MIS TLIF. The synergy of the Synaptive HD videoexoscope robotic drive and Medtronic Stealth platforms allows for live image-guided surgery, or real-time navigation (RTN). Off-axis projection also allows upright, neutral cervical spine operative ergonomics for the surgeons, and improved surgical team visualization and education compared with traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but will require long-term outcome measurements for efficacy.

  7. Iconic memory-based omnidirectional route panorama navigation.

    PubMed

    Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko

    2005-01-01

    A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon when the robot moves to its goal. While the robot is navigating to the goal point, input is matched against the memorized spatio-temporal route pattern by using dual active contour models and the exact robot position and orientation is estimated from the converged shape of the active contour models.

  8. Passive mapping and intermittent exploration for mobile robots

    NASA Technical Reports Server (NTRS)

    Engleson, Sean P.

    1994-01-01

    An adaptive state space architecture is combined with a diktiometric representation to provide a framework for designing a robot mapping system that supports flexible navigation planning tasks. This involves indexing waypoints described as expectations, geometric indexing, and perceptual indexing. Matching and updating the robot's projected position and sensory inputs against indexed waypoints involves matchers, dynamic priorities, transients, and waypoint restructuring. The robot's map learning can be organized around the principles of passive mapping.

  9. Automatic Operation For A Robot Lawn Mower

    NASA Astrophysics Data System (ADS)

    Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.

    1987-02-01

    A domestic mobile robot, a lawn mower that performs in an automatic operation mode, has been built at the Center for Robotics Research, University of Cincinnati. The robot lawn mower automatically completes its work with a region-filling operation, a new kind of path planning for mobile robots. Several strategies for region-filling path planning have been developed for a partly known or an unknown environment. An advanced omnidirectional navigation system and a multisensor-based control system are also used in the automatic operation. Research on the robot lawn mower, especially on region filling for path planning, is significant for industrial and agricultural applications.

  10. Navigation of a robot-integrated fluorescence laparoscope in preoperative SPECT/CT and intraoperative freehand SPECT imaging data: a phantom study

    NASA Astrophysics Data System (ADS)

    van Oosterom, Matthias Nathanaël; Engelen, Myrthe Adriana; van den Berg, Nynke Sjoerdtje; KleinJan, Gijs Hendrik; van der Poel, Henk Gerrit; Wendler, Thomas; van de Velde, Cornelis Jan Hadde; Navab, Nassir; van Leeuwen, Fijs Willem Bernhard

    2016-08-01

    Robot-assisted laparoscopic surgery is becoming an established technique for prostatectomy and is increasingly being explored for other types of cancer. Linking intraoperative imaging techniques, such as fluorescence guidance, with the three-dimensional insights provided by preoperative imaging remains a challenge. Navigation technologies may provide a solution, especially when directly linked to both the robotic setup and the fluorescence laparoscope. We evaluated the feasibility of such a setup. Preoperative single-photon emission computed tomography/X-ray computed tomography (SPECT/CT) or intraoperative freehand SPECT (fhSPECT) scans were used to navigate an optically tracked robot-integrated fluorescence laparoscope via an augmented reality overlay in the laparoscopic video feed. The navigation accuracy was evaluated in soft tissue phantoms, followed by studies in a human-like torso phantom. Navigation accuracies found for SPECT/CT-based navigation were 2.25 mm (coronal) and 2.08 mm (sagittal). For fhSPECT-based navigation, these were 1.92 mm (coronal) and 2.83 mm (sagittal). All errors remained below the 1 cm detection limit for fluorescence imaging, allowing refinement of the navigation process using fluorescence findings. The phantom experiments performed suggest that SPECT-based navigation of the robot-integrated fluorescence laparoscope is feasible and may aid fluorescence-guided surgery procedures.

  11. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    PubMed

    Bourbakis, N G

    1997-01-01

    This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic, unknown or known navigation space. In previous work by X. Grossman (1988), a set of traffic control rules was developed for navigating robots on the lines of a two-dimensional (2-D) grid, with a control center coordinating and synchronizing their movements. In this work, the robots are considered autonomous: they move anywhere and in any direction inside the free space, and there is no need for central control to coordinate and synchronize them. The requirements for each robot are (i) visual perception, (ii) range sensors, and (iii) the ability to detect other moving objects in the same free navigation space and to determine their perceived size, velocity, and direction. Based on these assumptions, each robot needs a traffic priority language that enables it to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic-priority alphabet and rules that compose patterns of corridors for the application of the traffic priority rules.
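A priority rule of the kind such a language encodes, where a robot decides from another object's perceived size and velocity whether to yield right of way, might look like the following sketch. The rule itself is illustrative and hypothetical, not taken from KYKLOFORTA:

```python
from dataclasses import dataclass

@dataclass
class Mover:
    size: float   # perceived footprint of the moving object
    speed: float  # perceived velocity magnitude

def yields_to(me: Mover, other: Mover) -> bool:
    """Hypothetical priority rule: the smaller robot yields; among
    equally sized robots, the slower (or equally slow) one yields, so
    at least one side always gives way and collisions are avoided."""
    if me.size != other.size:
        return me.size < other.size
    return me.speed <= other.speed
```

Each robot evaluates such rules locally from its own perception, which is what removes the need for a central coordinating control.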

  12. Multigait soft robot

    PubMed Central

    Shepherd, Robert F.; Ilievski, Filip; Choi, Wonjae; Morin, Stephen A.; Stokes, Adam A.; Mazzeo, Aaron D.; Chen, Xin; Wang, Michael; Whitesides, George M.

    2011-01-01

    This manuscript describes a unique class of locomotive robot: A soft robot, composed exclusively of soft materials (elastomeric polymers), which is inspired by animals (e.g., squid, starfish, worms) that do not have hard internal skeletons. Soft lithography was used to fabricate a pneumatically actuated robot capable of sophisticated locomotion (e.g., fluid movement of limbs and multiple gaits). This robot is quadrupedal; it uses no sensors, only five actuators, and a simple pneumatic valving system that operates at low pressures (< 10 psi). A combination of crawling and undulation gaits allowed this robot to navigate a difficult obstacle. This demonstration illustrates an advantage of soft robotics: They are systems in which simple types of actuation produce complex motion. PMID:22123978

  13. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and fuzzy control. These techniques are applied to robot navigation.

  14. Highly dexterous 2-module soft robot for intra-organ navigation in minimally invasive surgery.

    PubMed

    Abidi, Haider; Gerboni, Giada; Brancadoro, Margherita; Fras, Jan; Diodato, Alessandro; Cianchetti, Matteo; Wurdemann, Helge; Althoefer, Kaspar; Menciassi, Arianna

    2018-02-01

    For some surgical interventions, like the Total Mesorectal Excision (TME), traditional laparoscopes lack the flexibility to safely maneuver and reach difficult surgical targets. This paper answers this need by designing, fabricating and modelling a highly dexterous 2-module soft robot for minimally invasive surgery (MIS). A soft robotic approach is proposed that uses flexible fluidic actuators (FFAs), allowing highly dexterous and inherently safe navigation. Dexterity is provided by an optimized design of fluid chambers within the robot modules. Safe physical interaction is ensured by fabricating the entire structure from soft and compliant elastomers, resulting in a squeezable 2-module robot. An inner free lumen/chamber along the central axis serves as a guide for flexible endoscopic tools. A constant-curvature-based inverse kinematics model is also proposed, providing insight into the robot's capabilities. Experimental tests in a surgical scenario using a cadaver model are reported, demonstrating the robot's advantages over standard systems in a realistic MIS environment. Simulations and experiments show the efficacy of the proposed soft robot. Copyright © 2017 John Wiley & Sons, Ltd.

  15. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons, and the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot for real-time detection of objects by their color and in processing the robot's range and vision sensor data for navigation.

  16. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.

  17. Goal-oriented robot navigation learning using a multi-scale space representation.

    PubMed

    Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A

    2015-12-01

    There has been extensive research in recent years on the multi-scale nature of hippocampal place cells and entorhinal grid cells encoding which led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the place cells multi-scale architecture. The model is evaluated in both simulation and physical robots. We find that larger scale and combined multi-scale representations favor goal-oriented navigation task learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
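The multi-scale representation can be illustrated with Gaussian place-cell tuning curves: cells with larger scales respond over wider regions, so a reward update at one position generalizes to nearby positions, which is consistent with the paper's finding that larger and combined scales speed up learning. The field centers and widths below are arbitrary illustrative choices, not the model's actual parameters:

```python
import math

def place_cell_activity(pos, center, scale):
    """Gaussian place-cell tuning: peak firing at `center`, width `scale`."""
    d2 = (pos[0] - center[0]) ** 2 + (pos[1] - center[1]) ** 2
    return math.exp(-d2 / (2.0 * scale ** 2))

def population(pos, centers, scales):
    """Multi-scale population code: every center is replicated at every
    scale, so fine cells localize while coarse cells generalize."""
    return [place_cell_activity(pos, c, s) for c in centers for s in scales]
```

At one unit from a field center, the coarse cell still fires strongly while the fine cell is nearly silent, so reinforcement applied to the population reaches a broader neighborhood of states per trial.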

  18. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are uti-lized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  19. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots.

    PubMed

    Sherwin, Tyrone; Easte, Mikala; Chen, Andrew Tzer-Yeu; Wang, Kevin I-Kai; Dai, Wenbin

    2018-02-14

    Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.
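The direction-finding step, comparing RSS across several directional antennae, can be sketched as a power-weighted circular mean of the antenna headings. This is an illustrative approximation, not the paper's estimator, and it omits the distance estimation and particle-filter refinement described in the abstract:

```python
import math

def estimate_bearing(rss_dbm, headings_deg):
    """Fuse RSS readings (dBm) from several directional antennae into one
    bearing estimate via a power-weighted circular mean of their headings."""
    weights = [10 ** (r / 10.0) for r in rss_dbm]  # dBm -> linear mW
    x = sum(w * math.cos(math.radians(h)) for w, h in zip(weights, headings_deg))
    y = sum(w * math.sin(math.radians(h)) for w, h in zip(weights, headings_deg))
    return math.degrees(math.atan2(y, x)) % 360.0
```

Converting dBm to linear power before averaging lets the strongest antenna dominate, so the estimate snaps toward the most probable direction of the emitter while still being nudged by neighboring lobes.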

  1. Learning classifier systems for single and multiple mobile robots in unstructured environments

    NASA Astrophysics Data System (ADS)

    Bay, John S.

    1995-12-01

    The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture, i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.

  2. Navigation system for robot-assisted intra-articular lower-limb fracture surgery.

    PubMed

    Dagnino, Giulio; Georgilas, Ioannis; Köhler, Paul; Morad, Samir; Atkins, Roger; Dogramadzi, Sanja

    2016-10-01

    In the surgical treatment of lower-leg intra-articular fractures, the fragments have to be positioned and aligned to reconstruct the fractured bone as precisely as possible, to allow the joint to function correctly again. Standard procedures use 2D radiographs to estimate the desired reduction position of bone fragments. However, optimal correction in 3D space requires 3D imaging. This paper introduces a new navigation system that uses pre-operative planning based on 3D CT data and intra-operative 3D guidance to virtually reduce lower-limb intra-articular fractures. Physical reduction of the fractures is then performed by our robotic system based on the virtual reduction. 3D models of bone fragments are segmented from the CT scan. Fragments are pre-operatively visualized on the screen and virtually manipulated by the surgeon through a dedicated GUI to achieve the virtual reduction of the fracture. Intra-operatively, the actual position of the bone fragments is provided by an optical tracker, enabling real-time 3D guidance. The motion commands for the robot connected to the bone fragment are generated, and the fracture is physically reduced based on the surgeon's virtual reduction. To test the system, four femur models were fractured to obtain four different distal femur fracture types. Each of them was subsequently reduced 20 times by a surgeon using our system. The navigation system allowed an orthopaedic surgeon to virtually reduce the fracture with a maximum residual positioning error of [Formula: see text] (translational) and [Formula: see text] (rotational). Corresponding physical reductions resulted in an accuracy of 1.03 ± 0.2 mm and [Formula: see text] when the robot reduced the fracture. The experimental outcome demonstrates the accuracy and effectiveness of the proposed navigation system, presenting a fracture reduction accuracy of about 1 mm and [Formula: see text], and meeting the clinical requirements for distal femur fracture reduction procedures.

  3. Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.

    PubMed

    Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G

    2011-10-01

    In order to incorporate the uncertainty and impreciseness present in real-world event-driven asynchronous systems, fuzzy discrete event systems (FDESs) have been proposed as an extension to crisp discrete event systems (DESs). In this paper, we first propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and navigation has been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real time. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.

  4. Coordinating sensing and local navigation

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1991-01-01

    Based on Navigation Templates (or NaTs), this work presents a new paradigm for local navigation which addresses the noisy and uncertain nature of sensor data. Rather than creating a new navigation plan each time the robot's perception of the world changes, the technique incorporates perceptual changes directly into the existing navigation plan. In this way, the robot's navigation plan is quickly and continuously modified, resulting in actions that remain coordinated with its changing perception of the world.

  5. Landmark-based robust navigation for tactical UGV control in GPS-denied communication-degraded environments

    NASA Astrophysics Data System (ADS)

    Endo, Yoichiro; Balloch, Jonathan C.; Grushin, Alexander; Lee, Mun Wai; Handelman, David

    2016-05-01

    Control of current tactical unmanned ground vehicles (UGVs) is typically accomplished through two alternative modes of operation, namely, low-level manual control using joysticks and high-level planning-based autonomous control. Each mode has its own merits as well as inherent mission-critical disadvantages. Low-level joystick control is vulnerable to communication delay and degradation, and high-level navigation often depends on uninterrupted GPS signals and/or energy-emissive (non-stealth) range sensors such as LIDAR for localization and mapping. To address these problems, we have developed a mid-level control technique where the operator semi-autonomously drives the robot relative to visible landmarks that are commonly recognizable by both humans and machines such as closed contours and structured lines. Our novel solution relies solely on optical and non-optical passive sensors and can be operated under GPS-denied, communication-degraded environments. To control the robot using these landmarks, we developed an interactive graphical user interface (GUI) that allows the operator to select landmarks in the robot's view and direct the robot relative to one or more of the landmarks. The integrated UGV control system was evaluated based on its ability to robustly navigate through indoor environments. The system was successfully field tested with QinetiQ North America's TALON UGV and Tactical Robot Controller (TRC), a ruggedized operator control unit (OCU). We found that the proposed system is indeed robust against communication delay and degradation, and provides the operator with steady and reliable control of the UGV in realistic tactical scenarios.
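    In its simplest form, mid-level landmark-relative driving reduces to servoing on the range and bearing of the operator-selected landmark. A deliberately simplified proportional sketch (the gains and names are ours, not the TRC implementation):

```python
def landmark_drive(range_m, bearing_rad, standoff_m=1.0, kv=0.5, kw=1.5):
    """Proportional drive command toward a selected landmark: advance until
    the standoff range is reached, steering to zero out the bearing."""
    v = kv * (range_m - standoff_m)   # forward speed [m/s]
    w = kw * bearing_rad              # turn rate [rad/s]
    return v, w
```

    Because the command is expressed relative to a passively sensed landmark rather than GPS coordinates, short communication dropouts only delay new landmark selections instead of halting the vehicle.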

  6. Enabling Interoperable Space Robots With the Joint Technical Architecture for Robotic Systems (JTARS)

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur; Dubowsky, Steven; Quinn, Roger; Marzwell, Neville

    2005-01-01

    Robots that operate independently of one another will not be adequate to accomplish the future exploration tasks of long-distance autonomous navigation, habitat construction, resource discovery, and material handling. Such activities will require that systems widely share information, plan and divide complex tasks, share common resources, and physically cooperate to manipulate objects. Recognizing the need for interoperable robots to accomplish the new exploration initiative, NASA's Office of Exploration Systems Research & Technology recently funded the development of the Joint Technical Architecture for Robotic Systems (JTARS). JTARS' charter is to identify the interface standards necessary to achieve interoperability among space robots. A JTARS working group (JTARS-WG) has been established, comprising recognized leaders in the field of space robotics, including representatives from seven NASA centers along with academia and private industry. The working group's early accomplishments include addressing key issues required for interoperability, defining which systems are within the project's scope, and framing the JTARS manuals around classes of robotic systems.

  7. Development of a Novel Locomotion Algorithm for Snake Robot

    NASA Astrophysics Data System (ADS)

    Khan, Raisuddin; Masum Billah, Md; Watanabe, Mitsuru; Shafie, A. A.

    2013-12-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine motion is one of the best-known forms of snake robot locomotion for disaster-recovery missions, enabling navigation through narrow spaces. Other gaits, such as concertina or rectilinear locomotion, may also suit narrow spaces but are highly inefficient when the same gait is used in open spaces, where reduced friction makes movement difficult for the snake. A novel locomotion algorithm has been proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using the developed locomotion algorithm, the snake robot is able to navigate in narrow spaces. The developed algorithm overcomes the limitations of other gaits in narrow-space navigation.
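    Gait generation of this kind is commonly grounded in Hirose's serpenoid curve: each joint follows a phase-shifted sinusoid, and shrinking the amplitude or widening the phase lag narrows the body wave for confined spaces. A sketch with illustrative parameter values (not the paper's):

```python
import math

def serpenoid_angles(n_joints, t, amplitude=0.5, omega=2.0, phase_lag=0.6, offset=0.0):
    """Joint angles for serpentine locomotion on a serpenoid curve:
    phi_i(t) = A * sin(omega * t + i * beta) + gamma.
    A nonzero offset gamma biases the body into a turning arc."""
    return [amplitude * math.sin(omega * t + i * phase_lag) + offset
            for i in range(n_joints)]
```

    Stepping `t` forward and commanding each servo with its entry of this list propagates the body wave backward along the robot, which is what produces forward serpentine motion.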

  8. Spatial abstraction for autonomous robot navigation.

    PubMed

    Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon

    2015-09-01

    Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.

  9. Reconnaissance and Autonomy for Small Robots (RASR)

    DTIC Science & Technology

    2012-06-29

    The Reconnaissance and Autonomy for Small Robots (RASR) team developed a system for the coordination of groups of unmanned ground vehicles (UGVs...development of a system that used 1) a relevant deployable platform; 2) a minimum set of relatively inexpensive navigation and LADAR sensors; 3) an...expandable and modular control system with innovative software algorithms to minimize computing footprint; and that minimized 4) required communications

  10. Navigation and Robotics in Spinal Surgery: Where Are We Now?

    PubMed

    Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M

    2017-03-01

    Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", the applications of surgical robotic technology have likewise expanded. While spinal robotics and navigation represent promising potential for improving modern spinal surgery, it remains paramount to demonstrate their superiority over traditional techniques prior to widespread adoption among surgeons. The applications for intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure in minimally invasive surgery to which the patient, surgeon, and ancillary operating room staff are subjected. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements and a steady hand while doing so, often exploiting small working corridors and exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.

  11. Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments

    DTIC Science & Technology

    2016-09-01

    yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications.1,5–7 Annotation of images is...Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self...learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G

  12. Control of free-flying space robot manipulator systems

    NASA Technical Reports Server (NTRS)

    Cannon, Robert H., Jr.

    1977-01-01

    To accelerate the development of multi-armed, free-flying satellite manipulators, a fixed-base cooperative manipulation facility is being developed. The work performed on multiple arm cooperation on a free-flying robot is summarized. Research is also summarized on global navigation and control of free-flying space robots. The Locomotion Enhancement via Arm Pushoff (LEAP) approach is described and progress to date is presented.

  13. Open core control software for surgical robots

    PubMed Central

    Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B.; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-01-01

    Object Patients and doctors in the operating room are surrounded by a growing number of medical devices as a result of recent advances in medical technology. However, these cutting-edge devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming essential for accomplishing complex surgical tasks (for example, removing a tumor while checking its location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but they remain closed to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" becomes possible only through collaboration between surgical robots, various kinds of sensors, navigation systems, and so on. Moreover, most academic control software for surgical robots is developed in-house within individual research institutions and is not open to the public; open-source control software for surgical robots would therefore benefit the field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. Materials and methods In general, control software has hardware dependencies arising from actuators, sensors, and various internal devices, and so cannot be reused on different types of robots without modification. The structure of the Open Core Control software, however, can be reused across robot types by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices; OpenIGTLink is therefore adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation during asynchronous data transactions over the network.
In the Open Core Control software, several techniques were introduced for this purpose. A virtual fixture is a well-known technique that acts as a "force guide", helping operators perform precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not itself part of the Open Core Control system, but such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and this area information is transferred to the robot. The surgical console then generates a reflection force whenever the operator tries to leave the pre-defined accessible area during surgery. Results The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. The system also showed stable performance in a duration test with network disturbance. Conclusion This paper described the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture. The software was implemented on a surgical robot system and showed stable performance in high-level collaboration tasks. The Open Core Control software is intended to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices. 
It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" and IEC 62304. Following these regulations requires a self-test environment; a test environment is therefore under development to test various kinds of interference in the operating room, such as electric-knife noise, while taking into account safety and test-environment standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to contribute to the field. PMID:20033506
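    The hardware-abstraction idea described under Materials and methods can be illustrated with a small interface sketch: the hardware-independent core talks only to an abstract actuator interface, and each robot supplies its own implementation. The class and method names here are ours, not those of the Open Core Control software:

```python
from abc import ABC, abstractmethod

class ActuatorInterface(ABC):
    """Hardware-dependent layer: each robot supplies its own implementation."""
    @abstractmethod
    def move_to(self, joint_positions): ...
    @abstractmethod
    def read_joints(self): ...

class SimulatedArm(ActuatorInterface):
    """Stand-in implementation used in place of real actuator drivers."""
    def __init__(self, n_joints):
        self._q = [0.0] * n_joints
    def move_to(self, joint_positions):
        self._q = list(joint_positions)
    def read_joints(self):
        return list(self._q)

def control_loop_step(robot: ActuatorInterface, target):
    """Hardware-independent core: works with any ActuatorInterface."""
    robot.move_to(target)
    return robot.read_joints()
```

    Swapping `SimulatedArm` for a driver class wrapping real motors leaves `control_loop_step` untouched, which is the reuse property the abstract architecture is after.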

  14. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  15. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    PubMed

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.
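    The paper does not specify the planner's algorithm, but collision-free route generation over camera-detected obstacles can be approximated by any grid search. A compact A* sketch over an occupancy grid, as our simplification:

```python
import heapq

def astar(grid, start, goal):
    """Collision-free shortest path on a 4-connected occupancy grid
    (0 = free, 1 = obstacle); returns the list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]
    came_from, cost = {start: None}, {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:                      # reconstruct the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if nxt not in cost or g + 1 < cost[nxt]:
                    cost[nxt] = g + 1
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None
```

    Re-running the search whenever the camera reports a changed obstacle map gives the adaptive re-planning behavior described above.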

  16. Markovian robots: Minimal navigation strategies for active particles

    NASA Astrophysics Data System (ADS)

    Nava, Luis Gómez; Großmann, Robert; Peruani, Fernando

    2018-04-01

    We explore minimal navigation strategies for active particles in complex, dynamical, external fields, introducing a class of autonomous, self-propelled particles which we call Markovian robots (MR). These machines are equipped with a navigation control system (NCS) that triggers random changes in the direction of self-propulsion of the robots. The internal state of the NCS is described by a Boolean variable that adopts two values. The temporal dynamics of this Boolean variable is dictated by a closed Markov chain—ensuring the absence of fixed points in the dynamics—with transition rates that may depend exclusively on the instantaneous, local value of the external field. Importantly, the NCS does not store past measurements of this value in continuous, internal variables. We show that despite the strong constraints, it is possible to conceive closed Markov chain motifs that lead to nontrivial motility behaviors of the MR in one, two, and three dimensions. By analytically reducing the complexity of the NCS dynamics, we obtain an effective description of the long-time motility behavior of the MR that allows us to identify the minimum requirements in the design of NCS motifs and transition rates to perform complex navigation tasks such as adaptive gradient following, detection of minima or maxima, or selection of a desired value in a dynamical, external field. We put these ideas in practice by assembling a robot that operates by the proposed minimalistic NCS to evaluate the robustness of MR, providing a proof of concept that is possible to navigate through complex information landscapes with such a simple NCS whose internal state can be stored in one bit. These ideas may prove useful for the engineering of miniaturized robots.
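    A one-dimensional caricature of such an NCS can be written in a few lines: the one-bit decision (keep running vs. tumble into a random heading) fires at a rate that depends only on the instantaneous local field value, with no stored measurements. All parameter choices below are ours, for illustration only:

```python
import math, random

def tumble_rate(field_value, r_min=0.05, r_max=0.6, k=1.0):
    """Transition rate depends only on the instantaneous local field:
    high field -> rare tumbles (long runs), low field -> frequent tumbles."""
    return r_min + (r_max - r_min) * math.exp(-k * max(field_value, 0.0))

def simulate(field, steps, x0=0.0, speed=0.1, rng=None):
    """1D Markovian robot: run straight, tumble at a field-dependent rate."""
    rng = rng or random.Random(0)
    x, direction = x0, 1
    traj = [x]
    for _ in range(steps):
        if rng.random() < tumble_rate(field(x)):
            direction = rng.choice((-1, 1))   # tumble: pick a new heading
        x += speed * direction                # run: straight motion
        traj.append(x)
    return traj
```

    Because runs last longer where the field is high, the robot spends more of its time near field maxima, which is the memoryless gradient-following flavor of behavior the motif analysis above formalizes.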

  17. Composite Configuration Interventional Therapy Robot for the Microwave Ablation of Liver Tumors

    NASA Astrophysics Data System (ADS)

    Cao, Ying-Yu; Xue, Long; Qi, Bo-Jin; Jiang, Li-Pei; Deng, Shuang-Cheng; Liang, Ping; Liu, Jia

    2017-11-01

    The existing interventional therapy robots for the microwave ablation of liver tumors have poor clinical applicability owing to their large volume, low positioning speed, and complex automatic navigation control. To solve the above problems, a composite-configuration interventional therapy robot with passive and active joints is developed. The composite configuration reduces the size of the robot while preserving a wide range of movement, and the robot achieves rapid positioning with operational safety. The cumulative positioning error is eliminated and the control complexity is reduced by decoupling the active parts. Navigation algorithms for the robot are proposed based on the solution of the inverse kinematics and on geometric analysis. A simulated clinical test method is designed for the robot, and the functions of the robot and the navigation algorithms are verified by this method. The mean navigation error is 1.488 mm, the maximum error is 2.056 mm, and the ablation needle is positioned within 10 s. The experimental results show that the designed robot can meet the clinical requirements for the microwave ablation of liver tumors. The composite configuration proposed in the development of this interventional therapy robot provides a new idea for the structural design of medical robots.
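    The navigation step rests on solving the inverse kinematics of the needle positioner. For intuition, here is the textbook closed-form planar two-link case, which is not the robot's actual kinematic chain:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm;
    raises ValueError if the target is out of the workspace."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used here to check the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

    Closed-form solutions like this are what make sub-10-second positioning feasible: no iterative numerical solve is needed in the control loop.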

  18. Ultra-Wideband Tracking System Design for Relative Navigation

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David; Arndt, Dickey; Bgo, Phong; Dekome, Kent; Dusl, John

    2011-01-01

    This presentation briefly discusses a design effort for a prototype ultra-wideband (UWB) time-difference-of-arrival (TDOA) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being designed for use in localization and navigation of a rover in a GPS-deprived environment for surface missions. In one application enabled by UWB tracking, a robotic vehicle carrying equipment can autonomously follow a crewed rover from work site to work site, so that resources can be carried from one landing mission to the next, thereby saving up-mass. The UWB Systems Group at JSC has developed a UWB TDOA High Resolution Proximity Tracking System which can achieve sub-inch tracking accuracy of a target within the radius of the tracking baseline [1]. By extending the tracking capability beyond the radius of the tracking baseline, a tracking system is being designed to enable relative navigation between two vehicles for surface missions. A prototype UWB TDOA tracking system has been designed, implemented, tested, and proven feasible for relative navigation of robotic vehicles. Future work includes testing the system with the application code to increase the tracking update rate and evaluating the linear tracking baseline to improve the flexibility of antenna mounting on the following vehicle.
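    TDOA tracking boils down to solving hyperbolic range-difference equations: each antenna pair constrains the target to a hyperbola. A small Gauss-Newton sketch for the 2-D case (anchor geometry and names are ours, unrelated to the JSC hardware):

```python
import numpy as np

def tdoa_locate(anchors, range_diffs, x0, iters=20):
    """Gauss-Newton solver for 2-D TDOA localization. range_diffs[i-1] is
    ||x - anchors[i]|| - ||x - anchors[0]|| for i >= 1 (TDOA times c)."""
    x = np.asarray(x0, float)
    a = np.asarray(anchors, float)
    for _ in range(iters):
        d = np.linalg.norm(a - x, axis=1)               # distances to anchors
        r = (d[1:] - d[0]) - np.asarray(range_diffs)    # residuals
        J = (x - a[1:]) / d[1:, None] - (x - a[0]) / d[0]  # Jacobian rows
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x
```

    With four anchors and three range differences the 2-D position is overdetermined, which is what gives these systems resilience to noise on any single antenna pair.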

  19. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation

    PubMed Central

    Masmoudi, Mohamed Slim; Masmoudi, Mohamed

    2016-01-01

    This paper describes the design and implementation of a fuzzy-logic trajectory tracking controller for a mobile robot navigating indoor environments. Most previous works used two independent controllers for navigation and obstacle avoidance; the main contribution of this paper is that a single fuzzy controller handles both navigation and obstacle avoidance. The mobile robot is equipped with DC motors, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories are simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation time and travelled path. PMID:27688748

  20. Cognitive memory and mapping in a brain-like system for robotic navigation.

    PubMed

    Tang, Huajin; Huang, Weiwei; Narayanamoorthy, Aditya; Yan, Rui

    2017-03-01

    Electrophysiological studies in animals may provide a great insight into developing brain-like models of spatial cognition for robots. These studies suggest that the spatial ability of animals requires proper functioning of the hippocampus and the entorhinal cortex (EC). The involvement of the hippocampus in spatial cognition has been extensively studied, both in animal as well as in theoretical studies, such as in the brain-based models by Edelman and colleagues. In this work, we extend these earlier models, with a particular focus on the spatial coding properties of the EC and how it functions as an interface between the hippocampus and the neocortex, as proposed by previous work. By realizing the cognitive memory and mapping functions of the hippocampus and the EC, respectively, we develop a neurobiologically-inspired system to enable a mobile robot to perform task-based navigation in a maze environment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Under-vehicle autonomous inspection through undercarriage signatures

    NASA Astrophysics Data System (ADS)

    Schoenherr, Edward; Smuda, Bill

    2005-05-01

    Increased threats to gate security have created a need for improved vehicle inspection methods at security checkpoints in various fields of defense and security. A fast, reliable system of under-vehicle inspection is desirable: one that detects possibly harmful or unwanted materials hidden on vehicle undercarriages and notifies the user of their presence while allowing the user a safe standoff distance from the inspection site. An autonomous under-vehicle inspection system would provide for this. The proposed system would function as follows: a low-clearance tele-operated robotic platform would be equipped with sonar/laser range-finding sensors as well as a video camera. As a vehicle to be inspected enters a checkpoint, the robot would autonomously navigate under the vehicle, using algorithms to detect tire locations as waypoints. During this navigation, data would be collected from the sonar/laser range-finding hardware and used to compile an impression of the vehicle undercarriage. Once this impression is complete, the system would compare it to a database of pre-scanned undercarriage impressions. Based on vehicle makes and models, any variance between the undercarriage being inspected and the stored impression would be marked as potentially threatening. If such variances exist, the robot would navigate to these locations and position the video camera so that the location in question can be viewed from a standoff position through a TV monitor. At this point, manual control of the robot navigation and camera can be taken to allow further, more detailed inspection of the area or materials in question. After-market vehicle modifications would present some difficulty, yet with enough pre-screening of such modifications the system should still prove accurate. 
Also, impression scans taken in the field can be stored and tagged with a vehicle's license plate number, and future inspections of that vehicle can be compared against already screened and cleared impressions of the same vehicle in order to search for variances.
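    The impression-matching step can be pictured as a thresholded difference between the newly scanned range impression and the stored reference for that make and model. A sketch with an invented grid resolution and threshold:

```python
import numpy as np

def undercarriage_variances(scan, reference, threshold=0.02):
    """Compare a range-scan impression against the stored reference for the
    same vehicle model; returns the (row, col) cells whose depth difference
    exceeds the threshold, i.e. locations flagged for camera inspection."""
    diff = np.abs(np.asarray(scan, float) - np.asarray(reference, float))
    return list(zip(*np.nonzero(diff > threshold)))
```

    Each flagged cell maps back to a physical location under the vehicle, giving the robot its list of spots to revisit with the camera.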

  2. Mobile robot navigation modulated by artificial emotions.

    PubMed

    Lee-Johnson, C P; Carnegie, D A

    2010-04-01

    For artificial intelligence research to progress beyond the highly specialized task-dependent implementations achievable today, researchers may need to incorporate aspects of biological behavior that have not traditionally been associated with intelligence. Affective processes such as emotions may be crucial to the generalized intelligence possessed by humans and animals. A number of robots and autonomous agents have been created that can emulate human emotions, but the majority of this research focuses on the social domain. In contrast, we have developed a hybrid reactive/deliberative architecture that incorporates artificial emotions to improve the general adaptive performance of a mobile robot for a navigation task. Emotions are active on multiple architectural levels, modulating the robot's decisions and actions to suit the context of its situation. Reactive emotions interact with the robot's control system, altering its parameters in response to appraisals from short-term sensor data. Deliberative emotions are learned associations that bias path planning in response to eliciting objects or events. Quantitative results are presented that demonstrate situations in which each artificial emotion can be beneficial to performance.

  3. Online Learning Techniques for Improving Robot Navigation in Unfamiliar Domains

    DTIC Science & Technology

    2010-12-01

    In In Proceedings of the 1996 Symposium on Human Interaction and Complex Systems, pages 276–283, 1996. 6.1 [15] Colin Campbell and Kristin P. Bennett...ISBN 0-262-19450-3. 5.1 [104] Jean Scholtz, Jeff Young, Jill L. Drury , and Holly A. Yanco. Evaluation of human-robot interaction awareness in search...2004. 6.1 [147] Holly A. Yanco and Jill L. Drury . Rescuing interfaces: A multi-year study of human-robot interaction at the AAAI robot rescue

  4. Determining navigability of terrain using point cloud data.

    PubMed

    Cockrell, Stephanie; Lee, Gregory; Newman, Wyatt

    2013-06-01

    This paper presents an algorithm to identify features of the navigation surface in front of a wheeled robot. Recent advances in mobile robotics have brought about the development of smart wheelchairs to assist disabled people, allowing them to be more independent. These robots have a human occupant and operate in real environments where they must be able to detect hazards like holes, stairs, or obstacles. Furthermore, to ensure safe navigation, wheelchairs often need to locate and navigate on ramps. The algorithm is implemented on data from a Kinect and can effectively identify these features, increasing occupant safety and allowing for a smoother ride.
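    One way to flag ramps, holes, and steps from depth-camera point clouds is a least-squares plane fit over the patch of ground ahead of the wheelchair. A sketch under that assumption (the slope threshold is illustrative, not a value from the paper):

```python
import numpy as np

def surface_slope_deg(points):
    """Fit a plane z = a*x + b*y + c to the (x, y, z) points in front of the
    robot and return its slope relative to horizontal, in degrees."""
    P = np.asarray(points, float)
    A = np.c_[P[:, 0], P[:, 1], np.ones(len(P))]
    (a, b, _), *_ = np.linalg.lstsq(A, P[:, 2], rcond=None)
    return np.degrees(np.arctan(np.hypot(a, b)))

def navigable(points, max_slope_deg=8.0):
    """Traversability check: reject patches steeper than the wheel limit."""
    return surface_slope_deg(points) <= max_slope_deg
```

    Large fit residuals, rather than slope, would indicate discontinuities such as holes or stair edges; a full system would test both.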

  5. Neural Network-Based Landmark Recognition and Navigation with IAMRs. Understanding the Principles of Thought and Behavior.

    ERIC Educational Resources Information Center

    Doty, Keith L.

    1999-01-01

    Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)

  6. Learning for autonomous navigation

    NASA Technical Reports Server (NTRS)

    Angelova, Anelia; Howard, Andrew; Matthies, Larry; Tang, Benyang; Turmon, Michael; Mjolsness, Eric

    2005-01-01

    Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been retarded by the limited lookahead range of 3-D sensors and by the difficulty of preprogramming systems to understand the traversability of the wide variety of terrain they can encounter.

  7. Proceedings of the 1989 CESAR/CEA (Center for Engineering Systems Advanced Research/Commissariat a l'Energie Atomique) workshop on autonomous mobile robots (May 30--June 1, 1989)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.; Pin, F.G.

    1990-03-01

    The US DOE Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) and the Commissariat a l'Energie Atomique's (CEA) Office de Robotique et Productique within the Directorat a la Valorization are working toward a long-term cooperative agreement and relationship in the area of Intelligent Systems Research (ISR). This report presents the proceedings of the first CESAR/CEA Workshop on Autonomous Mobile Robots, which took place at ORNL on May 30, 31 and June 1, 1989. The purpose of the workshop was to present and discuss methodologies and algorithms under development at the two facilities in the area of perception and navigation for autonomous mobile robots in unstructured environments. Experimental demonstration of the algorithms and comparison of some of their features were proposed to take place within the framework of a previously mutually agreed-upon demonstration scenario or "base-case." The base-case scenario, described in detail in Appendix A, involved autonomous navigation by the robot in an a priori unknown environment with dynamic obstacles, in order to reach a predetermined goal. From the intermediate goal location, the robot had to search for and locate a control panel, move toward it, and dock in front of the panel face. The CESAR demonstration was successfully accomplished using the HERMIES-IIB robot, while subsets of the CEA demonstration performed using the ARES robot simulation and animation system were presented. The first session of the workshop focused on these experimental demonstrations and on the needs and considerations for establishing "benchmarks" for testing autonomous robot control algorithms.

  8. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, camera, infrared, and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or return inaccurate readings. Sensor fusion helps to solve this dilemma and enhance overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for collision avoidance, while three ground sensors are used for line or path following. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which combines collision avoidance based on the fuzzy logic fusion model with line following, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
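    The paper's 24-rule controller is not reproduced here, but the rule-base idea it describes can be sketched in a minimal form. The membership-function shape, breakpoints, and two-rule base below are illustrative assumptions, not the authors' fuzzy model:

```python
# Minimal sketch of a fuzzy obstacle-avoidance rule base. The membership
# function, breakpoints, and two-rule base are assumptions for illustration,
# not the paper's 24-rule controller.

def near(d, lo=0.0, hi=0.5):
    """Membership in 'near': 1.0 at contact, falling to 0.0 beyond hi metres."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def fuzzy_avoid(left_dist, right_dist, base_speed=0.3):
    """Map two distance readings to (v_left, v_right) wheel velocities."""
    obstacle_left = near(left_dist)
    obstacle_right = near(right_dist)
    # Rule 1: IF obstacle is near on the left THEN slow the right wheel (veer right).
    # Rule 2: IF obstacle is near on the right THEN slow the left wheel (veer left).
    v_left = base_speed * (1.0 - obstacle_right)
    v_right = base_speed * (1.0 - obstacle_left)
    return v_left, v_right
```

    Real controllers of this kind aggregate many such rules, one per sensor and linguistic value, and defuzzify the combined output; the structure, however, is the same IF-distance-THEN-velocity mapping.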

  9. Compensation for Unconstrained Catheter Shaft Motion in Cardiac Catheters

    PubMed Central

    Degirmenci, Alperen; Loschak, Paul M.; Tschabrunn, Cory M.; Anter, Elad; Howe, Robert D.

    2016-01-01

    Cardiac catheterization with ultrasound (US) imaging catheters provides real time US imaging from within the heart, but manually navigating a four degree of freedom (DOF) imaging catheter is difficult and requires extensive training. Existing work has demonstrated robotic catheter steering in constrained bench top environments. Closed-loop control in an unconstrained setting, such as patient vasculature, remains a significant challenge due to friction, backlash, and physiological disturbances. In this paper we present a new method for closed-loop control of the catheter tip that can accurately and robustly steer 4-DOF cardiac catheters and other flexible manipulators despite these effects. The performance of the system is demonstrated in a vasculature phantom and an in vivo porcine animal model. During bench top studies the robotic system converged to the desired US imager pose with sub-millimeter and sub-degree-level accuracy. During animal trials the system achieved 2.0 mm and 0.65° accuracy. Accurate and robust robotic navigation of flexible manipulators will enable enhanced visualization and treatment during procedures. PMID:27525170

  10. From self-assessment to frustration, a small step toward autonomy in robotic navigation

    PubMed Central

    Jauffret, Adrien; Cuperlier, Nicolas; Tarroux, Philippe; Gaussier, Philippe

    2013-01-01

    Autonomy and self-improvement capabilities are still challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate in wide and unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedback on behavior quality from a given fitness system in order to make correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek human interaction to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an on-line novelty detection algorithm, that is able to self-evaluate any sensory-motor strategy. This architecture learns contingencies between sensations and actions, predicting the expected sensation from the previous perception. The prediction error arising from surprising events provides a measure of the quality of the underlying sensory-motor contingencies. We show how a simple second-order controller (emotional system) based on prediction progress allows the system to regulate its behavior to solve complex navigation tasks and also to ask for help if it detects deadlock situations. We propose that this model could be a key structure toward self-assessment and autonomy. We carried out several experiments that demonstrate such properties for two different strategies (road following and place-cell-based navigation) in different situations. PMID:24115931

  11. Performance Characterization of a Landmark Measurement System for ARRM Terrain Relative Navigation

    NASA Technical Reports Server (NTRS)

    Shoemaker, Michael A.; Wright, Cinnamon; Liounis, Andrew J.; Getzandanner, Kenneth M.; Van Eepoel, John M.; DeWeese, Keith D.

    2016-01-01

    This paper describes the landmark measurement system being developed for terrain relative navigation on NASA's Asteroid Redirect Robotic Mission (ARRM), and the results of a performance characterization study given realistic navigational and model errors. The system is called Retina, and is derived from the stereo-photoclinometry methods widely used on other small-body missions. The system is simulated using synthetic imagery of the asteroid surface, and various algorithmic design choices are discussed. Unlike other missions, ARRM's Retina is the first planned autonomous use of these methods during the close-proximity and descent phase of the mission.

  12. Performance Characterization of a Landmark Measurement System for ARRM Terrain Relative Navigation

    NASA Technical Reports Server (NTRS)

    Shoemaker, Michael; Wright, Cinnamon; Liounis, Andrew; Getzandanner, Kenneth; Van Eepoel, John; Deweese, Keith

    2016-01-01

    This paper describes the landmark measurement system being developed for terrain relative navigation on NASA's Asteroid Redirect Robotic Mission (ARRM), and the results of a performance characterization study given realistic navigational and model errors. The system is called Retina, and is derived from the stereophotoclinometry methods widely used on other small-body missions. The system is simulated using synthetic imagery of the asteroid surface, and various algorithmic design choices are discussed. Unlike other missions, ARRM's Retina is the first planned autonomous use of these methods during the close-proximity and descent phase of the mission.

  13. Real-Time Hazard Detection and Avoidance Demonstration for a Planetary Lander

    NASA Technical Reports Server (NTRS)

    Epp, Chirold D.; Robertson, Edward A.; Carson, John M., III

    2014-01-01

    The Autonomous Landing Hazard Avoidance Technology (ALHAT) Project is chartered to develop, and mature to a Technology Readiness Level (TRL) of six, an autonomous system combining guidance, navigation, and control with terrain sensing and recognition functions for crewed, cargo, and robotic planetary landing vehicles. In addition to precision landing close to a pre-mission defined landing location, the ALHAT System must be capable of autonomously identifying and avoiding surface hazards in real time to enable a safe landing under any lighting conditions. This paper provides an overview of the recent ALHAT closed-loop hazard detection and avoidance flight demonstrations on the Morpheus Vertical Testbed (VTB) at the Kennedy Space Center, including results and lessons learned. This effort is also described in the context of a technology path in support of future crewed and robotic planetary exploration missions based upon the core sensing functions of the ALHAT system: Terrain Relative Navigation (TRN), Hazard Detection and Avoidance (HDA), and Hazard Relative Navigation (HRN).

  14. Virtual local target method for avoiding local minimum in potential field based robot navigation.

    PubMed

    Zou, Xi-Yong; Zhu, Jing

    2003-01-01

    A novel robot navigation algorithm with global path generation capability is presented. The local minimum is one of the most intractable yet frequently encountered problems in potential field based robot navigation. By appropriately appointing virtual local targets along the journey, it can be solved effectively. The key concepts employed in this algorithm are the rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed to replace the global goal temporarily according to the rules. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results showed that it is very effective in complex obstacle environments.
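    The mechanism the abstract describes, detecting an impending local minimum and temporarily substituting a virtual local target for the global goal, can be sketched as follows. The potential-field gains, trap-detection threshold, and sideways placement of the virtual target are illustrative assumptions, not the paper's exact rules:

```python
import math

# Sketch of the virtual-local-target idea for potential-field navigation.
# Gains, the trap threshold, and the virtual-target placement rule are
# assumptions for illustration.

def attract(pos, goal, k=1.0):
    """Attractive force pulling the robot toward the goal."""
    return (k * (goal[0] - pos[0]), k * (goal[1] - pos[1]))

def repel(pos, obstacles, k=0.5, influence=2.0):
    """Sum of repulsive forces from obstacles within the influence radius."""
    fx = fy = 0.0
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            m = k * (1.0 / d - 1.0 / influence) / d**2
            fx += m * dx / d
            fy += m * dy / d
    return fx, fy

def next_target(pos, goal, obstacles, trap_eps=0.05):
    """Return the global goal, or a virtual local target if a trap looms."""
    ax, ay = attract(pos, goal)
    rx, ry = repel(pos, obstacles)
    net = math.hypot(ax + rx, ay + ry)
    far_from_goal = math.hypot(goal[0] - pos[0], goal[1] - pos[1]) > 0.5
    if net < trap_eps and far_from_goal:
        # Forces nearly cancel away from the goal: appoint a virtual target
        # sideways (perpendicular to the goal line) to detour around the trap.
        gx, gy = goal[0] - pos[0], goal[1] - pos[1]
        d = math.hypot(gx, gy)
        return (pos[0] - gy / d * 2.0, pos[1] + gx / d * 2.0)
    return goal
```

    The controller steers toward whatever next_target returns; once the detour clears the trap, the same function hands back the global goal.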

  15. Evolutionary programming-based univector field navigation method for fast mobile robots.

    PubMed

    Kim, Y J; Kim, J H; Kwon, D S

    2001-01-01

    Most navigation techniques with obstacle avoidance do not consider the robot orientation at the target position. These techniques deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation, which introduces a normalized two-dimensional vector field. The method provides fast moving robots with the desired posture at the target position and obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out on a fast moving mobile robot to demonstrate the effectiveness of the proposed scheme.
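    A simple closed-form univector field from the broader literature illustrates how a normalized heading field can encode a desired final posture (the paper instead trains its field with evolutionary programming; the gain n and this particular formulation are assumptions for illustration):

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    while a > math.pi:
        a -= 2 * math.pi
    while a <= -math.pi:
        a += 2 * math.pi
    return a

def univector(pos, goal, posture, n=2.0):
    """Desired heading at pos for reaching goal with final orientation posture.

    When the robot lies on the approach line (theta == posture) the field
    simply points at the goal; off the line, the gain n bends the heading so
    the robot curves around and arrives aligned with the desired posture.
    """
    theta = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    return wrap(theta + n * wrap(theta - posture))
```

    Since the field assigns a unit-direction (heading) to every position, the robot can follow it at speed without the stop-and-reorient behavior of position-only planners.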

  16. A 6-DOF parallel bone-grinding robot for cervical disc replacement surgery.

    PubMed

    Tian, Heqiang; Wang, Chenchen; Dang, Xiaoqing; Sun, Lining

    2017-12-01

    Artificial cervical disc replacement has become an effective and mainstream treatment for cervical disease, an increasingly common and serious problem for people with sedentary work. To improve cervical disc replacement surgery, a 6-DOF parallel bone-grinding robot was developed for cervical bone grinding with image navigation and surgical planning. The mechanical design and low-level control of the bone-grinding robot are described. Robot navigation is realized by optical positioning with a spatial registration coordinate system defined, and a parametric bone-grinding plan with high-level control has been developed for plane grinding of the cervical top and tail endplates with a cylindrical grinding drill and spherical grinding of the two articular bone surfaces with a ball grinding drill. Finally, the surgical workflow for a robot-assisted cervical disc replacement procedure is presented. Experimental results verified the key technologies and performance of the robot-assisted surgery system concept, pointing to a promising clinical application with high operability. Study innovations, limitations, and future work are discussed, and conclusions are summarized. This bone-grinding robot is still at an initial stage, and many problems remain to be solved from a clinical point of view. Nevertheless, the technique is promising and can provide good support for surgeons in future clinical work.

  17. HERMIES-I: a mobile robot for navigation and manipulation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.; Barhen, J.; de Saussure, G.

    1985-01-01

    The purpose of this paper is to report the current status of investigations ongoing at the Center for Engineering Systems Advanced Research (CESAR) in the areas of navigation and manipulation in unstructured environments. The HERMIES-I mobile robot, a prototype of a series which contains many of the major features needed for remote work in hazardous environments, is discussed. Initial experimental work at CESAR has begun in the area of navigation. The paper briefly reviews some of the ongoing research in autonomous navigation and describes initial research with HERMIES-I and associated graphic simulation. Since the HERMIES robots will generally be composed of a variety of asynchronously controlled hardware components (such as manipulator arms, digital image sensors, sonars, etc.), it seems appropriate to consider future development of the HERMIES brain as a hypercube ensemble machine with concurrent computation and associated message passing. The basic properties of such a hypercube architecture are presented. Decision-making under uncertainty eventually permeates all of our work. Following a survey of existing analytical approaches, it was decided that a stronger theoretical basis is required. As such, this paper presents the framework for a recently developed hybrid uncertainty theory. 21 refs., 2 figs.

  18. Autonomous Wheeled Robot Platform Testbed for Navigation and Mapping Using Low-Cost Sensors

    NASA Astrophysics Data System (ADS)

    Calero, D.; Fernandez, E.; Parés, M. E.

    2017-11-01

    This paper presents the concept of an architecture for a wheeled robot system that helps researchers in the field of geomatics speed up their daily research in kinematic geodesy, indoor navigation, and indoor positioning. The presented ideas correspond to an extensible and modular hardware and software system aimed at the development of new low-cost mapping algorithms as well as at the evaluation of sensor performance. The concept, already implemented in the CTTC's system ARAS (Autonomous Rover for Automatic Surveying), is generic and extensible: new navigation algorithms or sensors can be incorporated at no maintenance cost, as only the development effort required to create such algorithms needs to be taken into account. As a consequence, change poses a much smaller problem for research activities in this specific area. The system includes several standalone sensors that may be combined in different ways to accomplish several goals; that is, it may be used to perform a variety of tasks, for instance evaluating the performance of positioning or mapping algorithms.

  19. The navigation system of the JPL robot

    NASA Technical Reports Server (NTRS)

    Thompson, A. M.

    1977-01-01

    The control structure of the JPL research robot and the operations of the navigation subsystem are discussed. The robot functions as a network of interacting concurrent processes distributed among several computers and coordinated by a central executive. The results of scene analysis are used to create a segmented terrain model in which surface regions are classified by traversability. The model is used by a path planning algorithm, PATH, which uses tree search methods to find the optimal path to a goal. In PATH, the search space is defined dynamically as a consequence of node testing. Maze-solving and the use of an associative database for context-dependent node generation are also discussed. Execution of a planned path is accomplished by a feedback guidance process with automatic error recovery.
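    The PATH algorithm described above is a tree search over a traversability-classified terrain model. A generic sketch of that kind of search, plain A* over a cost grid rather than JPL's dynamic node-testing search, with an assumed grid representation:

```python
import heapq

# Sketch of optimal path search over a traversability-classified grid.
# This is plain A*, not JPL's PATH with dynamic node testing; the grid and
# cost model are assumptions for illustration.

def plan(grid, start, goal):
    """grid[r][c] is a traversal cost, or None for untraversable regions."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                step = g + grid[nr][nc]
                heapq.heappush(frontier, (step + h((nr, nc)), step, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

    Cells marked None model untraversable regions; the heuristic keeps the search directed toward the goal while the accumulated cost keeps the returned path optimal.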

  20. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  1. Navigating a Mobile Robot Across Terrain Using Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Howard, Ayanna; Bon, Bruce

    2003-01-01

    A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.

  2. Remote navigation systems in electrophysiology.

    PubMed

    Schmidt, Boris; Chun, Kyoung Ryul Julian; Tilz, Roland R; Koektuerk, Buelent; Ouyang, Feifan; Kuck, Karl-Heinz

    2008-11-01

    Today, atrial fibrillation (AF) is the dominant indication for catheter ablation in large electrophysiology (EP) centres. AF ablation strategies are complex and technically challenging. Therefore, it is desirable that technical innovations pursue the goal of improving catheter stability to increase procedural success and, most importantly, to increase safety by helping to avoid serious complications. The most promising technical innovation aiming at these goals is remote catheter navigation and ablation. To date, two different systems, the NIOBE magnetic navigation system (MNS, Stereotaxis, USA) and the Sensei robotic navigation system (RNS, Hansen Medical, USA), are commercially available. The following review introduces the basic principles of the systems, gives an insight into the merits and demerits of remote navigation, and focuses on the initial clinical experience at our centre with pulmonary vein isolation (PVI) procedures.

  3. GPS/MEMS IMU/Microprocessor Board for Navigation

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K.; Chow, James; Ott, William E.

    2009-01-01

    A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.
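    The blending step described above, IMU propagation corrected by GPS position fixes in a Kalman filter, can be sketched in one dimension with a two-state (position, velocity) filter. The noise values and time step are illustrative assumptions, not the package's tuning:

```python
# 1-D sketch of GPS/IMU blending: predict with the IMU acceleration, then
# correct with a GPS position measurement (observation H = [1, 0]).
# Process noise q, measurement noise r, and dt are assumed values.

def kalman_step(x, v, P, accel, gps_pos, dt=0.1, q=0.01, r=4.0):
    # Predict: integrate the IMU acceleration, propagate covariance P.
    x_p = x + v * dt + 0.5 * accel * dt * dt
    v_p = v + accel * dt
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # Update: blend in the GPS position via the Kalman gain K = P H^T / S.
    S = P[0][0] + r
    k0, k1 = P[0][0] / S, P[1][0] / S
    innov = gps_pos - x_p
    x_new, v_new = x_p + k0 * innov, v_p + k1 * innov
    P_new = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x_new, v_new, P_new
```

    Even with position-only measurements, the cross-covariance term lets the filter infer and correct the velocity estimate, which is what allows the IMU-propagated solution to stay bounded between GPS fixes.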

  4. Remote controlled robot assisted cardiac navigation: feasibility assessment and validation in a porcine model.

    PubMed

    Ganji, Yusof; Janabi-Sharifi, Farrokh; Cheema, Asim N

    2011-12-01

    Despite the recent advances in catheter design and technology, intra-cardiac navigation during electrophysiology procedures remains challenging. Incorporation of imaging along with magnetic or robotic guidance may improve navigation accuracy and procedural safety. In the present study, the in vivo performance of a novel remote controlled Robot Assisted Cardiac Navigation System (RACN) was evaluated in a porcine model. The navigation catheter and target sensor were advanced to the right atrium using fluoroscopic and intra-cardiac echo guidance. The target sensor was positioned at three target locations in the right atrium (RA) and the navigation task was completed by an experienced physician using both manual and RACN guidance. The navigation time, final distance between the catheter tip and target sensor, and variability in final catheter tip position were determined and compared for manual and RACN guided navigation. The experiments were completed in three animals and five measurements recorded for each target location. The mean distance (mm) between catheter tip and target sensor at the end of the navigation task was significantly less using RACN guidance compared with manual navigation (5.02 ± 0.31 vs. 9.66 ± 2.88, p = 0.050 for high RA; 9.19 ± 1.13 vs. 13.0 ± 1.00, p = 0.011 for low RA; and 6.77 ± 0.59 vs. 15.66 ± 2.51, p = 0.003 for tricuspid valve annulus). The average time (s) needed to complete the navigation task was significantly longer with RACN guidance compared with manual navigation (43.31 ± 18.19 vs. 13.54 ± 1.36, p = 0.047 for high RA; 43.71 ± 11.93 vs. 22.71 ± 3.79, p = 0.043 for low RA; and 37.84 ± 3.71 vs. 16.13 ± 4.92, p = 0.003 for tricuspid valve annulus). RACN guided navigation resulted in greater consistency in performance compared with manual navigation, as evidenced by lower variability in final distance measurements (0.41 vs. 0.99 mm, p = 0.04). This study demonstrated the safety and feasibility of the RACN system for cardiac navigation. The results demonstrated that RACN performed comparably with manual navigation, with improved precision and consistency for targets located in and near the right atrial chamber. Copyright © 2011 John Wiley & Sons, Ltd.

  5. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System

    PubMed Central

    Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan

    2017-01-01

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964

  6. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    PubMed

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
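    The fusion idea described here, wheel-encoder dead reckoning corrected by an absolute visual fix, can be sketched for a differential-drive base. A full EKF would propagate a covariance and compute the gain from it; this complementary-filter sketch uses a fixed blend gain instead, and the wheel base and gain values are assumptions:

```python
import math

# Sketch of fusing wheel odometry with an absolute (x, y) fix, such as one
# derived from a depth camera. The fixed gain stands in for the EKF's
# covariance-derived Kalman gain; wheel_base and gain are assumed values.

def odom_step(pose, d_left, d_right, wheel_base=0.4):
    """Propagate pose (x, y, theta) from left/right wheel travel increments."""
    x, y, th = pose
    d = 0.5 * (d_left + d_right)              # distance traveled by the center
    th += (d_right - d_left) / wheel_base     # heading change from wheel difference
    return (x + d * math.cos(th), y + d * math.sin(th), th)

def fuse_fix(pose, fix_xy, gain=0.3):
    """Pull the dead-reckoned position toward an absolute visual fix."""
    x, y, th = pose
    fx, fy = fix_xy
    return (x + gain * (fx - x), y + gain * (fy - y), th)
```

    Dead reckoning alone drifts without bound as encoder errors accumulate; periodically blending in an absolute fix keeps the estimate anchored, which is the role the Kinect plays in the paper's EKF.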

  7. Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger

    2007-12-01

    Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present the implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (center for art and media) in Karlsruhe.

  8. Real-time simulation for intra-operative navigation in robotic surgery. Using a mass spring system for a basic study of organ deformation.

    PubMed

    Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G

    2007-01-01

    Medical technology has advanced with the introduction of robot technology, making previously very difficult medical treatments far more feasible. However, operation of a surgical robot demands substantial training and continual practice on the part of the surgeon, because it requires techniques different from those of traditional surgical procedures. We focused on a simulation technology based on the physical characteristics of organs. In this research, we proposed the development of a surgical simulation, based on a physical model, for intra-operative navigation by the surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results for a target node near the point of interaction, because this point ensures better accuracy for our simulation model. The next research step will focus on a surgical environment in which internal organ models are integrated into a slave simulation system.

  9. Development of a Commercially Viable, Modular Autonomous Robotic Systems for Converting any Vehicle to Autonomous Control

    NASA Technical Reports Server (NTRS)

    Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.

    1994-01-01

    A Modular Autonomous Robotic System (MARS), consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control and support a modular payload for multiple applications, is being developed. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute position autonomous navigation, and relative position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.

  10. Solar-based navigation for robotic explorers

    NASA Astrophysics Data System (ADS)

    Shillcutt, Kimberly Jo

    2000-12-01

    This thesis introduces the application of solar position and shadowing information to robotic exploration. Power is a critical resource for robots with remote, long-term missions, so this research focuses on the power generation capabilities of robotic explorers during navigational tasks, in addition to power consumption. Solar power is primarily considered, with the possibility of wind power also contemplated. Information about the environment, including the solar ephemeris, terrain features, time of day, and surface location, is incorporated into a planning structure, allowing robots to accurately predict shadowing and thus potential costs and gains during navigational tasks. By evaluating its potential to generate and expend power, a robot can extend its lifetime and accomplishments. The primary tasks studied are coverage patterns, with a variety of plans developed for this research. The use of sun, terrain and temporal information also enables new capabilities of identifying and following sun-synchronous and sun-seeking paths. Digital elevation maps are combined with an ephemeris algorithm to calculate the altitude and azimuth of the sun from surface locations, and to identify and map shadows. Solar navigation path simulators use this information to perform searches through two-dimensional space, while considering temporal changes. Step by step simulations of coverage patterns also incorporate time in addition to location. Evaluations of solar and wind power generation, power consumption, area coverage, area overlap, and time are generated for sets of coverage patterns, with on-board environmental information linked to the simulations. This research is implemented on the Nomad robot for the Robotic Antarctic Meteorite Search. Simulators have been developed for coverage pattern tests, as well as for sun-synchronous and sun-seeking path searches. 
Results of field work and simulations are reported and analyzed, with demonstrated improvements in efficiency, productivity and lifetime of robotic explorers, along with new solar navigation abilities.
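    The ephemeris computation such a planner depends on, the sun's elevation from latitude, date, and local solar time, follows from standard spherical astronomy. A sketch using the usual day-of-year declination approximation (not the thesis's ephemeris code; inputs and outputs are in degrees):

```python
import math

# Standard spherical-astronomy formula for solar elevation. The declination
# approximation below is the common textbook one, assumed for illustration.

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Sun elevation angle in degrees at a given latitude, date, and solar time."""
    # Approximate solar declination for the given day of year (degrees).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    h = math.radians(15.0 * (solar_hour - 12.0))  # hour angle: 15 deg per hour
    lat, d = math.radians(lat_deg), math.radians(decl)
    sin_el = (math.sin(lat) * math.sin(d) +
              math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(sin_el))
```

    Negative elevation means the sun is below the horizon; combined with the digital elevation map, the same geometry yields the azimuth and the shadow maps that the coverage and sun-synchronous path planners evaluate.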

  11. Development of autonomous grasping and navigating robot

    NASA Astrophysics Data System (ADS)

    Kudoh, Hiroyuki; Fujimoto, Keisuke; Nakayama, Yasuichi

    2015-01-01

    The ability to find and grasp target items in an unknown environment is important for working robots. We developed an autonomous navigating and grasping robot. The operations are locating a requested item, moving to where the item is placed, finding the item on a shelf or table, and picking the item up from the shelf or the table. To achieve these operations, we designed the robot with three functions: an autonomous navigating function that generates a map and a route in an unknown environment, an item position recognizing function, and a grasping function. We tested this robot in an unknown environment. It achieved a series of operations: moving to a destination, recognizing the positions of items on a shelf, picking up an item, placing it on a cart with its hand, and returning to the starting location. The results of this experiment demonstrate the potential of such robots to reduce human workload.

  12. Evaluation of a completely robotized neurosurgical operating microscope.

    PubMed

    Kantelhardt, Sven R; Finke, Markus; Schweikard, Achim; Giese, Alf

    2013-01-01

    Operating microscopes are essential for most neurosurgical procedures. Modern robot-assisted controls offer new possibilities, combining the advantages of conventional and automated systems. We evaluated the prototype of a completely robotized operating microscope with an integrated optical coherence tomography module. A standard operating microscope was fitted with motors and control instruments, with the manual control mode and balance preserved. In the robot mode, the microscope was steered by a remote control that could be fixed to a surgical instrument. External encoders and accelerometers tracked microscope movements. The microscope was additionally fitted with an optical coherence tomography-scanning module. The robotized microscope was tested on model systems. It could be freely positioned, without forcing the surgeon to take the hands from the instruments or avert the eyes from the oculars. Positioning error was about 1 mm, and vibration faded in 1 second. Tracking of microscope movements, combined with an autofocus function, allowed determination of the focus position within the 3-dimensional space. This constituted a second loop of navigation independent from conventional infrared reflector-based techniques. In the robot mode, automated optical coherence tomography scanning of large surface areas was feasible. The prototype of a robotized optical coherence tomography-integrated operating microscope combines the advantages of a conventional manually controlled operating microscope with a remote-controlled positioning aid and a self-navigating microscope system that performs automated positioning tasks such as surface scans. This demonstrates that, in the future, operating microscopes may be used to acquire intraoperative spatial data, volume changes, and structural data of brain or brain tumor tissue.

  13. Electrical power technology for robotic planetary rovers

    NASA Technical Reports Server (NTRS)

    Bankston, C. P.; Shirbacheh, M.; Bents, D. J.; Bozek, J. M.

    1993-01-01

    Power technologies which will enable a range of robotic rover vehicle missions by the end of the 1990s and beyond are discussed. The electrical power system is the most critical system for reliability and life, since all other on board functions (mobility, navigation, command and data, communications, and the scientific payload instruments) require electrical power. The following are discussed: power generation, energy storage, power management and distribution, and thermal management.

  14. Learning for autonomous navigation : extrapolating from underfoot to the far field

    NASA Technical Reports Server (NTRS)

    Matthies, Larry; Turmon, Michael; Howard, Andrew; Angelova, Anelia; Tang, Benyang; Mjolsness, Eric

    2005-01-01

    Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been retarded by the limited lookahead range of 3-D sensors and by the difficulty of preprogramming systems to understand the traversability of the wide variety of terrain they can encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this, learning from 3-D geometry and learning from proprioception, and describe initial instantiations of them we have developed under DARPA and NASA programs. Field test results show promise for learning traversability of vegetated terrain, learning to extend the lookahead range of the vision system, and learning how slip varies with slope.
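
    The near-to-far idea (label nearby terrain with proprioceptive measurements taken underfoot, then extrapolate those labels to far-field appearance) can be illustrated with a minimal nearest-centroid classifier. The two-dimensional appearance features and slip labels below are invented for illustration; the actual systems learn far richer models.

```python
def centroid(samples):
    # Mean feature vector of a list of equal-length feature vectors
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def train_near_field(features, slip_labels):
    # Group near-field appearance features by the proprioceptive slip
    # label measured when the robot actually drove over that terrain.
    classes = {}
    for f, y in zip(features, slip_labels):
        classes.setdefault(y, []).append(f)
    return {y: centroid(fs) for y, fs in classes.items()}

def classify_far_field(model, feature):
    # Predict traversability of a far-field patch from appearance alone
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(feature, c))
    return min(model, key=lambda y: sq_dist(model[y]))
```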

  15. Rendezvous Integration Complexities of NASA Human Flight Vehicles

    NASA Technical Reports Server (NTRS)

    Brazzel, Jack P.; Goodman, John L.

    2009-01-01

    Propellant-optimal trajectories, relative sensors and navigation, and docking/capture mechanisms are rendezvous disciplines that receive much attention in the technical literature. However, other areas must be considered. These include absolute navigation, maneuver targeting, attitude control, power generation, software development and verification, redundancy management, thermal control, avionics integration, robotics, communications, lighting, human factors, crew timeline, procedure development, orbital debris risk mitigation, structures, plume impingement, logistics, and in some cases extravehicular activity. While current and future spaceflight programs will introduce new technologies and operations concepts, the complexity of integrating multiple systems on multiple spacecraft will remain. The systems integration task may become more difficult as increasingly complex software is used to meet current and future automation, autonomy, and robotic operation requirements.

  16. Approach-Phase Precision Landing with Hazard Relative Navigation: Terrestrial Test Campaign Results of the Morpheus/ALHAT Project

    NASA Technical Reports Server (NTRS)

    Crain, Timothy P.; Bishop, Robert H.; Carson, John M., III; Trawny, Nikolas; Hanak, Chad; Sullivan, Jacob; Christian, John; DeMars, Kyle; Campbell, Tom; Getchius, Joel

    2016-01-01

    The Morpheus Project began in late 2009 as an ambitious effort code-named Project M to integrate three ongoing multi-center NASA technology developments: humanoid robotics, liquid oxygen/liquid methane (LOX/LCH4) propulsion and Autonomous Precision Landing and Hazard Avoidance Technology (ALHAT) into a single engineering demonstration mission to be flown to the Moon by 2013. The humanoid robot effort was redirected to a deployment of Robonaut 2 on the International Space Station in February of 2011, while Morpheus continued as a terrestrial field test project integrating the existing ALHAT Project's technologies into a sub-orbital flight system using the world's first LOX/LCH4 main propulsion and reaction control system fed from the same blowdown tanks. A series of 33 tethered tests with the Morpheus 1.0 vehicle and Morpheus 1.5 vehicle were conducted from April 2011 - December 2013 before successful, sustained free flights with the primary Vertical Testbed (VTB) navigation configuration began with Free Flight 3 on December 10, 2013. Over the course of the following 12 free flights and 3 tethered flights, components of the ALHAT navigation system were integrated into the Morpheus vehicle, operations, and flight control loop. The ALHAT navigation system was integrated and run concurrently with the VTB navigation system as a reference and fail-safe option in flight (see touchdown position estimate comparisons in Fig. 1). Flight testing completed with Free Flight 15 on December 15, 2014 with a completely autonomous Hazard Detection and Avoidance (HDA), integration of surface relative and Hazard Relative Navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software, and landing within 2 meters of the VTB GPS-based navigation solution at the safe landing site target. This paper describes the Morpheus joint VTB/ALHAT navigation architecture, the sensors utilized during the terrestrial flight campaign, issues resolved during testing, and the navigation results from the flight tests.

  17. Integrated network architecture for sustained human and robotic exploration

    NASA Technical Reports Server (NTRS)

    Noreen, Gary K.; Cesarone, Robert; Deutsch, Leslie; Edwards, Charlie; Soloff, Jason; Ely, Todd; Cook, Brian; Morabito, David; Hemmati, Hamid; Piazzolla, Sabino

    2005-01-01

    The National Aeronautics and Space Administration (NASA) Exploration Systems Mission Directorate is planning a series of human and robotic missions to the Earth's moon and to Mars. These missions will require telecommunication and navigation services. This paper sets forth presumed requirements for such services and presents strawman lunar and Mars telecommunications network architectures to satisfy the presumed requirements.

  18. Perception-action map learning in controlled multiscroll systems applied to robot navigation.

    PubMed

    Arena, Paolo; De Fiore, Sebastiano; Fortuna, Luigi; Patané, Luca

    2008-12-01

    In this paper, a new technique for action-oriented perception in robots is presented. The paper starts from exploiting the successful implementation of the basic idea that perceptual states can be embedded into chaotic attractors whose dynamical evolution can be associated with sensorial stimuli. In this way, environment-dependent patterns can be encoded into the chaotic dynamics. These patterns have to be suitably linked to an action, executed by the robot, to fulfill an assigned mission. This task is addressed here: the action-oriented perception loop is closed by introducing a simple unsupervised learning stage, implemented via a bio-inspired structure based on the motor map paradigm. In this way, perceptual meanings, useful for solving a given task, can be autonomously learned, based on the environment-dependent patterns embedded into the controlled chaotic dynamics. The presented framework has been tested on a simulated robot, and its performance has been successfully compared with other traditional navigation control paradigms. Moreover, an implementation of the proposed architecture on a Field Programmable Gate Array is briefly outlined, and preliminary experimental results on a roving robot are also reported.

  19. UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.

    PubMed

    Chen, Jessie Y C

    2010-08-01

    A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robots to the locations of targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with the other UAV conditions (baseline with no UAV, micro air vehicle, and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Those individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.

  20. Concurrent planning and execution for a walking robot

    NASA Astrophysics Data System (ADS)

    Simmons, Reid

    1990-07-01

    The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance was increased in excess of 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.

  1. Memristive device based learning for navigation in robots.

    PubMed

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities in robots are difficult to achieve, especially if done in real-time with ultra-low energy consumption. Here, we present a novel memristive device based learning architecture for robots. Two terminal memristive devices with resistive switching of oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities in a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.
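
    A minimal sketch of the kind of device behavior the paper exploits: a two-terminal memristive synapse whose conductance is potentiated or depressed by voltage pulses, arranged in a crossbar whose column currents form analog dot products. The bounded update rule and the numerical constants below are illustrative assumptions, not the authors' device model.

```python
def memristor_update(g, v_pulse, g_min=1e-6, g_max=1e-3, alpha=0.1):
    # Toy resistive-switching model: positive pulses potentiate the
    # conductance, negative pulses depress it, bounded by the device's
    # physical conductance window [g_min, g_max].
    if v_pulse > 0:
        g = g + alpha * v_pulse * (g_max - g)
    else:
        g = g + alpha * v_pulse * (g - g_min)
    return min(g_max, max(g_min, g))

def crossbar_column_currents(input_voltages, conductance_columns):
    # Each crossbar column sums I = sum(V_i * G_ij): the analog
    # dot product that gives the neuromorphic platform its efficiency.
    return [sum(v * g for v, g in zip(input_voltages, col))
            for col in conductance_columns]
```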

  2. Solving Navigational Uncertainty Using Grid Cells on Robots

    PubMed Central

    Milford, Michael J.; Wiles, Janet; Wyeth, Gordon F.

    2010-01-01

    To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. 
We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments. PMID:21085643
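
    The population-coding effect described, multiple pose hypotheses coexisting until a distinguishing cue resolves them, can be caricatured with a handful of "pose cells" whose activity is reweighted by sensory likelihoods. The inhibition constant and likelihood values are invented for illustration and are far simpler than the RatSLAM attractor dynamics.

```python
def update_activity(activity, likelihood, inhibit=0.02):
    # Weight each pose cell by its sensory likelihood, apply global
    # inhibition, then renormalise the population activity.
    act = [max(0.0, a * l - inhibit) for a, l in zip(activity, likelihood)]
    total = sum(act) or 1.0
    return [a / total for a in act]
```

    With two equally active hypotheses, a perceptually ambiguous cue leaves both alive; once a cue favours one pose, repeated updates let that hypothesis win, mirroring how the correct pose estimate is resolved over time.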

  3. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots

    PubMed Central

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability is observed during a long walk test in both indoors and outdoors environments. PMID:24152933
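
    The event-triggering idea, propagate on the IMU every step and request the global sensor only when the error covariance crosses a limit, can be sketched in one dimension. The noise values and threshold below are arbitrary; the paper's filter is multivariate and its event threshold is tuned to the platform.

```python
import random

def event_based_kf(steps, q=0.04, r=0.01, p_limit=0.5):
    # 1-D position estimate. IMU-style prediction runs every step and
    # inflates the error covariance p; the global sensor is fused only
    # when p exceeds p_limit (the "event"), saving bandwidth compared
    # with fusing a global measurement at every step.
    rng = random.Random(1)
    x_true, x_est, p = 0.0, 0.0, 0.0
    global_updates = 0
    for _ in range(steps):
        u = 0.1                                   # commanded displacement
        x_true += u + rng.gauss(0.0, q ** 0.5)    # real motion is noisy
        x_est += u                                # predict
        p += q                                    # covariance grows
        if p > p_limit:                           # event: request a global fix
            z = x_true + rng.gauss(0.0, r ** 0.5)
            k = p / (p + r)                       # Kalman gain
            x_est += k * (z - x_est)              # correct
            p *= (1.0 - k)
            global_updates += 1
    return x_est, x_true, p, global_updates
```

    Over 100 steps the global sensor is polled only a handful of times, yet the covariance stays bounded, which is the resource saving the framework targets.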

  4. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.

    PubMed

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-10-21

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability is observed during a long walk test in both indoors and outdoors environments.

  5. Afocal optical flow sensor for reducing vertical height sensitivity in indoor robot localization and navigation.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2015-05-13

    This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mouse has been adopted for mobile robots because it is not affected by wheel slippage. Vertical height variance is thought to be a dominant factor in systematic error when estimating moving distances in mobile robots driving on uneven surfaces. We propose an approach to mitigate this error by using an afocal (infinite effective focal length) system. We conducted experiments in a linear guide on carpet and three other materials with varying sensor heights from 30 to 50 mm and a moving distance of 80 cm. The same experiments were repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on a carpet for distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.
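
    The height-sensitivity mechanism is easy to model: a fixed-focal-length optical flow sensor's image scale varies inversely with height above the surface, so operating away from the calibration height biases the distance estimate, while an afocal system's scale is height-invariant. The toy model below illustrates the mechanism only; it does not reproduce the paper's measured error figures.

```python
def ofs_distance_estimate(true_disp_mm, height_mm, calib_height_mm, afocal=False):
    # Conventional OFS: pixel displacement scales as ~1/height, and the
    # counts-to-distance conversion assumes the calibration height, so a
    # height change introduces a proportional systematic error.
    # Afocal OFS (infinite effective focal length): scale is height-invariant.
    scale = 1.0 if afocal else calib_height_mm / height_mm
    return true_disp_mm * scale
```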

  6. Flexible robotics: a new paradigm.

    PubMed

    Aron, Monish; Haber, Georges-Pascal; Desai, Mihir M; Gill, Inderbir S

    2007-05-01

    The use of robotics in urologic surgery has seen exponential growth over the last 5 years. Existing surgical robots operate rigid instruments on the master/slave principle and currently allow extraluminal manipulations and surgical procedures. Flexible robotics is an entirely novel paradigm. This article explores the potential of flexible robotic platforms that could permit endoluminal and transluminal surgery in the future. Computerized catheter-control systems are being developed primarily for cardiac applications. This development is driven by the need for precise positioning and manipulation of the catheter tip in the three-dimensional cardiovascular space. Such systems employ either remote navigation in a magnetic field or a computer-controlled electromechanical flexible robotic system. We have adapted this robotic system for flexible ureteropyeloscopy and have to date completed the initial porcine studies. Flexible robotics is on the horizon. It has potential for improved scope-tip precision, superior operative ergonomics, and reduced occupational radiation exposure. In the near future, in urology, we believe that it holds promise for endoluminal therapeutic ureterorenoscopy. Looking further ahead, within the next 3-5 years, it could enable transluminal surgery.

  7. On-Board Perception System For Planetary Aerobot Balloon Navigation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Scheid, Robert E.; Salomon, Phil T.

    1996-01-01

    NASA's Jet Propulsion Laboratory is implementing the Planetary Aerobot Testbed to develop the technology needed to operate a robotic balloon aero-vehicle (Aerobot). This earth-based system would be the precursor for aerobots designed to explore Venus, Mars, Titan and other gaseous planetary bodies. The on-board perception system allows the aerobot to localize itself and navigate on a planet using information derived from a variety of celestial, inertial, ground-imaging, ranging, and radiometric sensors.

  8. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.

    PubMed

    Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong

    2014-09-01

    Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Experiences with a Barista Robot, FusionBot

    NASA Astrophysics Data System (ADS)

    Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou

    In this paper, we describe the implemented service robot, called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation and a smart device gateway. In the experiments conducted during the TechFest 2008 event, the FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on their request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.

  10. Google glass-based remote control of a mobile robot

    NASA Astrophysics Data System (ADS)

    Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe

    2016-05-01

    In this paper, we present an approach to remote control of a mobile robot via a Google Glass, a compact, multi-function wearable device. It provides a new human-machine interface (HMI) for controlling a robot without the need for a regular computer monitor, because the Google Glass micro projector can display live video of the robot's environment. To do so, we first develop a protocol to establish a Wi-Fi connection between Google Glass and a robot and then implement five types of robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are controlled by sliding and clicking the touchpad located on the right side of the temple. To demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
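
    The five behaviors map naturally onto a small gesture dispatcher. The gesture names and command strings below are hypothetical stand-ins; the abstract does not specify the actual Wi-Fi protocol or touchpad event names.

```python
# Hypothetical mapping from Glass touchpad gestures to the five robot
# behaviors named in the abstract; all identifiers here are assumptions.
GESTURE_TO_COMMAND = {
    "SWIPE_FORWARD": "MOVE_FORWARD",
    "SWIPE_LEFT": "TURN_LEFT",
    "SWIPE_RIGHT": "TURN_RIGHT",
    "TAP": "PAUSE",
    "TWO_FINGER_SWIPE_BACK": "MOVE_BACKWARD",
}

def handle_gesture(gesture, send):
    # Translate a touchpad gesture into a robot command and transmit it,
    # e.g. over the Wi-Fi socket to the robot; unknown gestures are ignored.
    cmd = GESTURE_TO_COMMAND.get(gesture)
    if cmd is not None:
        send(cmd)
    return cmd
```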

  11. Design, Development and Testing of the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) Guidance, Navigation and Control System

    NASA Technical Reports Server (NTRS)

    Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.

    2003-01-01

    Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.

  12. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to steering directions in a supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. These results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
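
    The two augmentations named above are straightforward to sketch. Pixel ranges and noise parameters here are illustrative; the paper does not report its exact augmentation settings.

```python
import random

def add_gaussian_noise(img, sigma=10.0, rng=None):
    # img: 2-D list of grayscale values in [0, 255]; values are clipped
    # after adding zero-mean Gaussian noise.
    rng = rng or random.Random(0)
    return [[min(255.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in row]
            for row in img]

def add_salt_pepper(img, amount=0.05, rng=None):
    # Overwrite a fraction of randomly chosen pixels with pure black or white.
    rng = rng or random.Random(0)
    out = [row[:] for row in img]
    h, w = len(img), len(img[0])
    for _ in range(int(amount * h * w)):
        r, c = rng.randrange(h), rng.randrange(w)
        out[r][c] = 0.0 if rng.random() < 0.5 else 255.0
    return out
```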

  13. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766
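
    The flavor of the fusion can be conveyed with a tiny caricature: obstacle "nearness" memberships on each side weight the two wheel velocities. The membership shape, gains, and rule set below are invented for illustration and are far smaller than the paper's nine-input, 24-rule system.

```python
def near(d, max_range=1.0):
    # Membership degree for "obstacle is near": 1 at contact,
    # falling linearly to 0 beyond max_range metres.
    return max(0.0, min(1.0, 1.0 - d / max_range))

def fuzzy_avoid(left_d, front_d, right_d, base=0.5, turn=0.4):
    # Weighted rules -> (v_left, v_right): slow down as the front closes,
    # and speed up the wheel on the obstructed side to steer away from it
    # (defaulting to a right turn when blocked dead ahead).
    nl, nf, nr = near(left_d), near(front_d), near(right_d)
    v_left = base * (1.0 - nf) + turn * nl + turn * nf
    v_right = base * (1.0 - nf) + turn * nr
    return v_left, v_right
```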

  14. Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty

    NASA Astrophysics Data System (ADS)

    Armah, Stephen Kofi

    Autonomous control of mobile robots has attracted considerable attention from researchers in robotics and autonomous systems during the past decades. One goal in the field of mobile robotics is the development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust, and/or adaptive navigation control systems. In spite of the enormous reported work on autonomous navigation control systems for mobile robots, achieving this goal is still an open problem: the robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for the X80SV using logic control that combines 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm, and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD-feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output.
    Secondly, white-box and black-box approaches are used to obtain linearized second-order altitude models for the quadrotor, the AR.Drone 2.0. Proportional (P), pole placement or proportional plus velocity (PV), linear quadratic regulator (LQR), and model reference adaptive control (MRAC) controllers are designed and validated through simulations using MATLAB/Simulink. Control input saturation and time delay in the controlled systems are also studied. MATLAB graphical user interface (GUI) and Simulink programs are developed to implement the controllers on the drone. Thirdly, the time delay in the drone's control system is estimated using analytical and experimental methods. In the experimental approach, the transient properties of the experimental altitude responses are compared to those of simulated responses. The analytical approach makes use of the Lambert W function to obtain analytical solutions of scalar first-order delay differential equations (DDEs). A time-delayed P-feedback control system (retarded type) is used in estimating the time delay. An improved system performance is then obtained by incorporating the estimated time delay in the design of the PV control system (neutral type) and the PV-MRAC control system. Furthermore, the stability of a parametrically perturbed linear time-invariant (LTI) retarded-type system is studied by analytically calculating the stability radius of the system; simulation of the control system confirms the stability. This robust control design and uncertainty analysis are conducted for first-order and second-order quadrotor models. Lastly, the robustly designed PV and PV-MRAC control systems are used to autonomously track multiple waypoints, and the robustness of the PV-MRAC controller is tested against a baseline PV controller using the payload capability of the drone. It is shown that the PV-MRAC offers several benefits over the fixed-gain approach of the PV controller, and the adaptive control is found to offer enhanced robustness to payload fluctuations.
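    The Lambert W step above can be illustrated on a scalar first-order DDE. A minimal sketch under stated assumptions (SciPy's `lambertw`; the principal branch taken as the rightmost root, which holds for this scalar retarded case):

```python
import numpy as np
from scipy.special import lambertw

def rightmost_root(a, b, tau, k=0):
    """Characteristic root s = a + b*exp(-s*tau) of the scalar DDE
    x'(t) = a*x(t) + b*x(t - tau), solved via branch k of Lambert W:
    s = a + W_k(b*tau*exp(-a*tau)) / tau."""
    return a + lambertw(b * tau * np.exp(-a * tau), k) / tau

# Time-delayed P-feedback (retarded type): x'(t) = -Kp * x(t - tau).
# With Kp = 1 and tau = 0.5 s, Kp*tau = 0.5 < pi/2, so the loop is stable.
s = rightmost_root(0.0, -1.0, 0.5)
print(s.real < 0)
```

    Evaluating how far this rightmost root sits in the left half-plane for candidate gains is one way an estimated delay can be folded back into the controller design.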

  15. Control of free-flying space robot manipulator systems

    NASA Technical Reports Server (NTRS)

    Cannon, Robert H., Jr.

    1990-01-01

    New control techniques for self-contained, autonomous, free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects was developed and carried out using lab models of satellite robots and a flexible manipulator. The second-generation space robot models use air cushion vehicle (ACV) technology to simulate, in 2-D, the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free-Floating Robot, Cooperative Manipulation from a Free-Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.

  16. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity-sensing, object-avoidance, mapping, and path-planning mechanism to fly and navigate small- to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable; it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgment. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the course of this study, several robotic platforms, airborne and ground alike, were developed, some of which were integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  17. Towards a sustainable modular robot system for planetary exploration

    NASA Astrophysics Data System (ADS)

    Hossain, S. G. M.

    This thesis investigates multiple perspectives on developing an unmanned robotic system suited to planetary terrains. In this case, the unmanned system consists of unit-modular robots. This type of robot has the potential to be developed and maintained as a sustainable multi-robot system while located far from direct human intervention. Some characteristics that make this possible are: cooperation, communication, and connectivity among the robot modules; flexibility of individual robot modules; the capability of self-healing in the case of a failed module; and the ability to generate multiple gaits by means of reconfiguration. To demonstrate the effects of high flexibility of an individual robot module, multiple modules of a four-degree-of-freedom unit-modular robot were developed. The robot was equipped with a novel connector mechanism that made self-healing possible. Design strategies also included the use of series elastic actuators for better robot-terrain interaction. In addition, various locomotion gaits were generated and explored using the robot modules, which is essential for a modular robot system to achieve robustness and thus successfully navigate and function in a planetary environment. To investigate multi-robot task completion, a biomimetic cooperative load transportation algorithm was developed and simulated. Also, a liquid-motion-inspired theory was developed for systems consisting of a large number of robot modules; this can be used to traverse obstacles that inevitably occur when maneuvering over rough terrain, such as in planetary exploration. Keywords: Modular robot, cooperative robots, biomimetics, planetary exploration, sustainability.

  18. Toward a practical mobile robotic aid system for people with severe physical disabilities.

    PubMed

    Regalbuto, M A; Krouskop, T A; Cheatham, J B

    1992-01-01

    A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.

  19. Tracking Control of Mobile Robots Localized via Chained Fusion of Discrete and Continuous Epipolar Geometry, IMU and Odometry.

    PubMed

    Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas

    2013-08-01

    This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.

  20. The Design and Implementation of a Semi-Autonomous Surf-Zone Robot Using Advanced Sensors and a Common Robot Operating System

    DTIC Science & Technology

    2011-06-01

    effective waypoint navigation algorithm that interfaced with a Java-based graphical user interface (GUI), written by Uzun, for a robot named Bender [2... the angular acceleration, θ̈, or angular rate, θ̇. When considering a joint driven by an electric motor, the inertia and friction can be divided into... interactive simulations that can receive input from user controls, scripts, and other applications, such as Excel and MATLAB. One drawback is that the

  1. Control of free-flying space robot manipulator systems

    NASA Technical Reports Server (NTRS)

    Cannon, Robert H., Jr.

    1988-01-01

    The focus of the work is to develop and perform a set of research projects using laboratory models of satellite robots. These devices use air cushion technology to simulate in two dimensions the drag-free, zero-g conditions of space. Five research areas are examined: cooperative manipulation on a fixed base; cooperative manipulation on a free-floating base; global navigation and control of a free-floating robot; an alternative transport mode called Locomotion Enhancement via Arm Push-Off (LEAP); and adaptive control of LEAP.

  2. Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles.

    DTIC Science & Technology

    1985-07-01

    Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles, C. N. Shen, July 1985, US Army Armament Research and Development Center, Large Caliber Weapon Systems Laboratory... which require autonomous vehicles. Essential to these robotic vehicles is an adequate and efficient computer vision system. A potentially more

  3. Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    PubMed Central

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2013-01-01

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, the deployment of robots and the adaptation of their services to new environments are nowadays tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, which enables our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal. PMID:23271604

  4. Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments.

    PubMed

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2012-12-27

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, the deployment of robots and the adaptation of their services to new environments are nowadays tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, which enables our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal.

  5. Autonomous mobile robot teams

    NASA Technical Reports Server (NTRS)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and intelligence of the group are distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which the robots use to produce behavior, transforming their sensory information into proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  6. A design strategy for autonomous systems

    NASA Technical Reports Server (NTRS)

    Forster, Pete

    1989-01-01

    Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified; namely, handling multiple and variable data sources containing overlapping information, and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is drawn from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.

  7. ALHAT COBALT: CoOperative Blending of Autonomous Landing Technology

    NASA Technical Reports Server (NTRS)

    Carson, John M.

    2015-01-01

    The COBALT project is a flight demonstration of two NASA ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) capabilities that are key for future robotic or human landing GN&C (Guidance, Navigation and Control) systems. The COBALT payload integrates the Navigation Doppler Lidar (NDL) for ultraprecise velocity and range measurements with the Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. Terrestrial flight tests of the COBALT payload in an open-loop and closed-loop GN&C configuration will be conducted onboard a commercial, rocket-propulsive Vertical Test Bed (VTB) at a test range in Mojave, CA.

  8. Heterogeneous Multi-Robot System for Mapping Environmental Variables of Greenhouses

    PubMed Central

    Roldán, Juan Jesús; Garcia-Aunon, Pablo; Garzón, Mario; de León, Jorge; del Cerro, Jaime; Barrientos, Antonio

    2016-01-01

    The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. The control and monitoring might need large sensor networks, and as a consequence, mobile sensory systems might be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, looking to provide flexibility and improve performance. The multi-robot sensory system measures the temperature, humidity, luminosity and carbon dioxide concentration in the ground and at different heights. Nevertheless, these measurements can be complemented with other ones (e.g., the concentration of various gases or images of crops) without a considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as the mission planning and task allocation, the guidance, navigation and control of robots in greenhouses and the coordination among ground and aerial vehicles. This work has an eminently practical approach, and therefore, the system has been extensively tested both in simulations and field experiments. PMID:27376297

  9. COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas

    2016-01-01

    The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk reduction activity to mature, integrate, and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation integrates the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra-high-precision velocity and range measurements, with the passive-optical Lander Vision System (LVS), which provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and one closed-loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.

  10. Navigable points estimation for mobile robots using binary image skeletonization

    NASA Astrophysics Data System (ADS)

    Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman

    2017-02-01

    This paper describes the use of image skeletonization to estimate all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path with standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking into account the robot size, and to create a map of navigable points in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. The way the algorithm's parameters can be adjusted to change the final number of resultant key points is also shown. The results shown here were obtained by applying different kinds of digital image processing algorithms to static scenes.
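    A rough equivalent of this skeleton-based key-point extraction can be sketched with a distance transform standing in for the paper's skeletonization; the obstacle layout, clearance threshold, and local-maximum test below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary scene: True = free space, False = obstacle
free = np.ones((64, 64), dtype=bool)
free[20:44, 20:44] = False  # a square obstacle in the middle

# Distance from every free cell to the nearest obstacle cell
dist = ndimage.distance_transform_edt(free)

# Approximate the free-space skeleton as local maxima of the distance map,
# keeping only points with clearance above a (hypothetical) robot radius
local_max = ndimage.maximum_filter(dist, size=3)
skeleton = free & (dist == local_max) & (dist > 2.0)
navigable_points = np.argwhere(skeleton)  # (row, col) candidates for the planner
```

    The planner then searches only over `navigable_points` instead of every free pixel, which is the information reduction the abstract aims for.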

  11. A biomimetic vision-based hovercraft accounts for bees' complex behaviour in various corridors.

    PubMed

    Roubieu, Frédéric L; Serres, Julien R; Colonnier, Fabien; Franceschini, Nicolas; Viollet, Stéphane; Ruffier, Franck

    2014-09-01

    Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and that of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot', which we had previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm/s with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125°/s). Our biomimetic robot was found to navigate safely along straight, tapered, and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor, and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system, which measures the lateral OF, suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two additional LMSs oriented at ±45° to improve the robot's performance in strongly tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, and straight corridors with part of one wall moving or missing, as well as in the presence of wind.
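    The two intertwined optic-flow loops can be caricatured as follows; the set-points and gains here are illustrative assumptions, not the published controller:

```python
def of_regulator(of_left, of_right, sp_sum=2.0, sp_side=1.25, k_fwd=0.5, k_side=0.4):
    """Dual-loop lateral optic-flow (OF) regulation sketch.
    Speed loop: hold the SUM of the two lateral OFs at its set-point.
    Side loop:  hold the LARGER lateral OF at its set-point by
                steering away from the nearer wall."""
    dv = k_fwd * (sp_sum - (of_left + of_right))        # forward-speed correction
    err = max(of_left, of_right) - sp_side
    direction = 1.0 if of_left > of_right else -1.0     # +1 = sidestep right
    dy = k_side * err * direction                       # lateral correction
    return dv, dy
```

    With a wall looming on the left (high left OF), the regulator slows the craft and sidesteps it to the right, with no speed or distance ever measured, which is the core claim of the abstract.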

  12. Autonomous exploration and mapping of unknown environments

    NASA Astrophysics Data System (ADS)

    Owens, Jason; Osteen, Phil; Fields, MaryAnne

    2012-06-01

    Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrary, complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. In a typical mapping mission, an unmanned system is directed to enter an unknown building from a distance, sense the internal structure, and, barring additional tasks, create a 2-D map of the building while in situ. This map provides a useful and intuitive representation of the environment for the remote operator. We have integrated a robust mapping and exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the interior, report significant features to the operator, and generate a consistent map, all while maintaining localization.

  13. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    NASA Astrophysics Data System (ADS)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. 
Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
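    The return-to-start behaviour over a learned topological map reduces to graph search. A minimal sketch (intersection names and map layout hypothetical, and plain BFS standing in for Soar's own decision mechanisms):

```python
from collections import deque

def shortest_route(topo, start, goal):
    """Breadth-first search over a topological map whose nodes are
    learned intersections and whose edges are traversed corridors."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical map built during the search mission
topo = {"start": ["T1"], "T1": ["start", "L2", "X3"],
        "L2": ["T1"], "X3": ["T1", "object"], "object": ["X3"]}
route = shortest_route(topo, "object", "start")
print(route)  # -> ['object', 'X3', 'T1', 'start']
```

    The same route, read in order, doubles as the step-by-step, landmark-based directions the CRS emails to the operator.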

  14. Theseus: tethered distributed robotics (TDR)

    NASA Astrophysics Data System (ADS)

    Digney, Bruce L.; Penzes, Steven G.

    2003-09-01

    Defence Research and Development Canada's (DRDC) Autonomous Intelligent Systems program conducts research to increase the independence and effectiveness of military vehicles and systems. DRDC-Suffield's Autonomous Land Systems (ALS) group is creating new concept vehicles and autonomous control systems for use in outdoor areas, urban streets, urban interiors, and urban subspaces. This paper first gives an overview of the ALS program and then describes the work being done on mobility in urban subspaces. Discussed is the Theseus: Tethered Distributed Robotics (TDR) system, which will not only manage an unavoidable tether but exploit it for mobility and navigation. Also discussed is the prototype robot called the Hedgehog, which uses conformal 3D mobility in ducts, sewer pipes, collapsed rubble voids, and chimneys.

  15. Research in mobile robotics at ORNL/CESAR (Oak Ridge National Laboratory/Center for Engineering Systems Advanced Research)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.; Weisbin, C.R.; Pin, F.G.

    1989-01-01

    This paper reviews ongoing and planned research with mobile autonomous robots at the Oak Ridge National Laboratory (ORNL) Center for Engineering Systems Advanced Research (CESAR). Specifically, we report on results obtained with the robot HERMIES-IIB in navigation, intelligent sensing, and learning, and on on-board parallel computing in support of these functions. We briefly summarize an experiment with HERMIES-IIB that demonstrates the capability of smooth transitions between robot autonomy and tele-operation. This experiment results from collaboration among teams at the Universities of Florida, Michigan, Tennessee, and Texas, and ORNL, in a program targeted at robotics for advanced nuclear power stations. We conclude by summarizing ongoing R&D with our new mobile robot HERMIES-III, which is equipped with a seven-degree-of-freedom research manipulator arm. 12 refs., 4 figs.

  16. Mapping of unknown industrial plant using ROS-based navigation mobile robot

    NASA Astrophysics Data System (ADS)

    Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.

    2017-10-01

    This research examines how humans work with a teleoperated unmanned mobile robot to inspect an industrial plant area, producing a 2D/3D map for further critical evaluation. The experiment focuses on two parts: how the human and robot conduct remote interactions using a robust method, and how the robot perceives its surrounding environment as a 2D/3D perspective map. ROS (Robot Operating System) was utilized as a tool during development and implementation, providing a robust data-communication method in the form of messages and topics. RGBD SLAM performs the visual mapping function to construct the 2D/3D map using a Kinect sensor. The results showed that the teleoperated mobile robot system successfully extends human perspective for remote surveillance of a large industrial plant area. It was concluded that the proposed work is a robust solution for mapping a large, unknown construction building.

  17. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  18. Survey of computer vision technology for UAV navigation

    NASA Astrophysics Data System (ADS)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which is characterized by strong independence, high precision, and immunity to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAVs and the start of the third phase of the lunar exploration program, there has been significant progress in the study of visual navigation. The paper reviews the development of computer-vision-based navigation in the field of UAV navigation research and concludes that visual navigation is mainly applied to three areas. (1) Acquisition of UAV navigation parameters. Parameters including UAV attitude, position and velocity can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instant matching images and reference images, and the relationship between the carrier's velocity and features of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; methods based on computer vision, including feature matching, template matching and image-frame methods, are mainly introduced. (3) Target tracking and positioning. Using the acquired images, UAV position is calculated with the optical flow method, the MeanShift algorithm, the CamShift algorithm, Kalman filtering and particle-filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; these are applied to rapid-response systems. (2) Distributed-network visual systems, in which several discrete image-acquisition sensors in different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frequency, low processing efficiency and strong noise. Finally, the difficulties of computer-vision-based navigation in practical applications are briefly discussed: (1) because of the huge workload of image operations, real-time performance is poor; (2) because of large environmental effects, anti-interference ability is poor; and (3) because such systems work only in particular environments, their adaptability is poor.
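
    Of the matching methods the survey lists, template matching is the easiest to make concrete. The sketch below is a hypothetical illustration, not taken from the paper: it reduces the idea to one dimension, sliding a patch from the current frame along a stored reference row and taking the offset with the highest normalized cross-correlation as the match, which is the basic operation behind comparing instant images against reference images.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0    # constant windows score 0

def best_match(signal, template):
    """Slide the template over the signal; return the offset with max NCC."""
    n, m = len(signal), len(template)
    scores = [ncc(signal[i:i + m], template) for i in range(n - m + 1)]
    return max(range(len(scores)), key=lambda i: scores[i])

reference = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]   # stored reference "image" row
template  = [1, 4, 9, 4, 1]                  # patch from the current frame
```

    In 2D the same score is computed over image patches; `best_match(reference, template)` here locates the patch at offset 1, and the offset between frames is what a matching-based navigator converts into displacement.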

  19. Dual-body magnetic helical robot for drilling and cargo delivery in human blood vessels

    NASA Astrophysics Data System (ADS)

    Lee, Wonseo; Jeon, Seungmun; Nam, Jaekwang; Jang, Gunhee

    2015-05-01

    We propose a novel dual-body magnetic helical robot (DMHR) manipulated by a magnetic navigation system. The proposed DMHR can generate helical motions to navigate in human blood vessels and to drill through blood clots under an external rotating magnetic field. It can also generate release motions, relative rotational motions between the dual bodies, to release the carried cargo at a target region by controlling the magnitude of the external magnetic field. Constraint equations were derived to selectively produce helical and release motions by controlling the external magnetic fields. The DMHR was prototyped, and various experiments were conducted to demonstrate its motions and verify its manipulation methods.
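
    The abstract's control principle, a rotating field whose axis sets the heading and whose magnitude selects the motion mode, can be sketched with plain vector algebra. The function below is an illustrative assumption, not the paper's constraint equations: it only generates a field vector rotating in the plane perpendicular to a chosen heading axis.

```python
import math

def rotating_field(axis, magnitude, freq_hz, t):
    """Field vector at time t, rotating about the (unit) heading axis."""
    ax, ay, az = axis
    # Build two unit vectors spanning the plane perpendicular to the axis.
    if abs(ax) < 0.9:
        u = (0.0, az, -ay)          # axis x (1, 0, 0), unnormalized
    else:
        u = (-az, 0.0, ax)          # axis x (0, 1, 0), unnormalized
    n = math.sqrt(sum(c * c for c in u))
    u = tuple(c / n for c in u)
    v = (ay * u[2] - az * u[1],     # v = axis x u completes the frame
         az * u[0] - ax * u[2],
         ax * u[1] - ay * u[0])
    phase = 2.0 * math.pi * freq_hz * t
    # The magnitude knob is what (per the abstract) separates drilling
    # from cargo-release behaviour; here it just scales the vector.
    return tuple(magnitude * (math.cos(phase) * ui + math.sin(phase) * vi)
                 for ui, vi in zip(u, v))
```

    A magnetic dipole aligned with such a field spins about the heading axis, which the helical body converts into forward propulsion.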

  20. Real-time visual mosaicking and navigation on the seafloor

    NASA Astrophysics Data System (ADS)

    Richmond, Kristof

    Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. 
    Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon, where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
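
    The map estimator described above is formulated as a sparse, linear information filter. A minimal sketch, assuming scalar poses along a line rather than full vehicle states: adding a relative (side-to-side registration) link touches only the four information-matrix entries coupling the two poses it connects, which is the sparsity that makes rapid updating of large maps possible.

```python
class InformationFilter:
    """Toy information-form estimator over n scalar poses (illustrative only)."""

    def __init__(self, n):
        # Information matrix Y (n x n) and information vector y; estimate
        # is recovered by solving Y x = y.
        self.Y = [[0.0] * n for _ in range(n)]
        self.y = [0.0] * n

    def add_prior(self, i, value, variance):
        w = 1.0 / variance
        self.Y[i][i] += w
        self.y[i] += w * value

    def add_relative(self, i, j, delta, variance):
        # Measurement z = x_j - x_i + noise. Its Jacobian row is (-1 at i,
        # +1 at j), so the update is sparse: four matrix entries change.
        w = 1.0 / variance
        self.Y[i][i] += w
        self.Y[j][j] += w
        self.Y[i][j] -= w
        self.Y[j][i] -= w
        self.y[i] -= w * delta
        self.y[j] += w * delta

    def solve(self):
        # Tiny dense Gauss-Jordan elimination; fine for this sketch's sizes.
        n = len(self.y)
        A = [row[:] + [self.y[i]] for i, row in enumerate(self.Y)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            for r in range(n):
                if r != c and A[c][c]:
                    f = A[r][c] / A[c][c]
                    A[r] = [a - f * b for a, b in zip(A[r], A[c])]
        return [A[i][n] / A[i][i] for i in range(n)]
```

    Anchoring pose 0 with a tight prior and linking pose 1 to it with a unit-offset registration recovers the expected positions; a real mosaicking filter does the same with 2D/3D poses and exploits the sparsity when solving.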

  1. Medical robotics.

    PubMed

    Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra

    2011-01-01

    Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. In the last three decades, in fact, ICT has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an expansion of service and professional robotics as well as other technologies, as surgical navigation has been introduced especially in minimally invasive surgery. Localization systems also provide treatments in radiotherapy and radiosurgery with high precision. Virtual or augmented reality plays a role both for surgical training and planning and for safe rehabilitation in the first stage of recovery from neurological diseases. Also, in the chronic phase of motor diseases, robotics helps with special assistive devices and prostheses. Although, in the past, the actual need for and advantage of navigation, localization, and robotics in surgery and therapy have been in doubt, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has largely increased the field of applications of these technologies, making it more likely that, in the near future, their presence will be dramatically increased, taking advantage of the generational change of the end users and the increasing demand for quality in health-care delivery and management.

  2. A tesselated probabilistic representation for spatial robot perception and navigation

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto

    1989-01-01

    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
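
    The Bayesian cell update described above is commonly implemented in log-odds form, where fusing each new reading is a single addition per cell. The sketch below is a minimal 1D illustration with an assumed two-value inverse sensor model, not Elfes' actual multi-sensor models.

```python
import math

# Assumed inverse sensor model (stand-in values, not from the paper):
P_OCC_HIT = 0.7    # P(cell occupied | sensor reported a hit in it)
P_OCC_MISS = 0.3   # P(cell occupied | sensor reported free space there)

def logit(p):
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    """1D strip of cells holding log-odds occupancy estimates."""

    def __init__(self, n_cells):
        # Log-odds 0.0 corresponds to P(occupied) = 0.5, i.e. unknown.
        self.logodds = [0.0] * n_cells

    def update(self, cell, hit):
        # Bayesian fusion of independent readings is a sum in log-odds form.
        self.logodds[cell] += logit(P_OCC_HIT if hit else P_OCC_MISS)

    def probability(self, cell):
        return 1.0 / (1.0 + math.exp(-self.logodds[cell]))

grid = OccupancyGrid(5)
for _ in range(3):            # three independent "occupied" readings of cell 2
    grid.update(2, hit=True)
grid.update(4, hit=False)     # one "free" reading of cell 4
```

    Three agreeing readings drive cell 2 toward P ≈ 0.93, the single free reading leaves cell 4 at P = 0.3, and untouched cells stay at 0.5; readings from different sensors or viewpoints fold in the same way.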

  3. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    PubMed Central

    Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.

    2012-01-01

    This paper presents the state of the art of estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithms' properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. Besides, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  4. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents the state of the art of estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithms' properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. Besides, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.
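
    The simplest correct treatment of a delayed measurement, reprocessing the whole measurement log in timestamp order, makes a useful baseline against the cheaper OOS algorithms such surveys compare. The scalar filter below is an illustrative sketch of that brute-force strategy; none of its names or noise values come from the paper.

```python
class OOSKalman1D:
    """Scalar Kalman filter that tolerates out-of-sequence measurements by
    keeping the raw measurement log and replaying it in timestamp order.
    Brute force, but order-correct; real OOS algorithms avoid the replay."""

    def __init__(self, x0, p0, q, r):
        self.x0, self.p0 = x0, p0      # prior state and variance
        self.q, self.r = q, r          # process / measurement noise variances
        self.log = []                  # (timestamp, measurement) pairs

    def _step(self, x, p, z):
        p = p + self.q                 # predict (one step per measurement)
        k = p / (p + self.r)           # Kalman gain
        return x + k * (z - x), (1.0 - k) * p

    def add(self, t, z):
        # Insert the (possibly delayed) measurement in time order, then
        # replay everything from the prior.
        self.log.append((t, z))
        self.log.sort(key=lambda m: m[0])
        x, p = self.x0, self.p0
        for _, z_i in self.log:
            x, p = self._step(x, p, z_i)
        return x, p
```

    A delayed reading arriving after a newer one yields exactly the estimate the filter would have produced had the readings arrived in sequence, which is the correctness property the efficient OOS algorithms must preserve at lower cost.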

  5. Land, sea, and air unmanned systems research and development at SPAWAR Systems Center Pacific

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Laird, Robin; Kogut, Greg; Andrews, John; Fletcher, Barbara; Webber, Todd; Arrieta, Rich; Everett, H. R.

    2009-05-01

    The Space and Naval Warfare (SPAWAR) Systems Center Pacific (SSC Pacific) has a long and extensive history in unmanned systems research and development, starting with undersea applications in the 1960s and expanding into ground and air systems in the 1980s. In the ground domain, we are addressing force-protection scenarios using large unmanned ground vehicles (UGVs) and fixed sensors, and simultaneously pursuing tactical and explosive ordnance disposal (EOD) operations with small man-portable robots. Technology thrusts include improving robotic intelligence and functionality, autonomous navigation and world modeling in urban environments, extended operational range of small teleoperated UGVs, enhanced human-robot interaction, and incorporation of remotely operated weapon systems. On the sea surface, we are pushing the envelope on dynamic obstacle avoidance while conforming to established nautical rules-of-the-road. In the air, we are addressing cooperative behaviors between UGVs and small vertical-takeoff-and-landing unmanned air vehicles (UAVs). Underwater applications involve very shallow water mine countermeasures, ship hull inspection, oceanographic data collection, and deep ocean access. Specific technology thrusts include fiber-optic communications, adaptive mission controllers, advanced navigation techniques, and concepts of operations (CONOPs) development. This paper provides a review of recent accomplishments and current status of a number of projects in these areas.

  6. Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missed critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. 
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.

  7. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †

    PubMed Central

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-01-01

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624

  8. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    PubMed

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
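
    The paper's "navigation via classification" idea, decomposing heading estimation into picking the most confident of K discrete heading bins, can be sketched without the CNN itself. Below, the network is stubbed out as a fixed logit vector (purely hypothetical values); only the decision logic over class scores is shown.

```python
import math

K = 8                                   # assumed heading bins over 360 degrees
BIN_WIDTH = 360.0 / K

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def heading_from_scores(logits, min_conf=0.3):
    """Return (heading_deg, confidence), or None when no bin is confident."""
    probs = softmax(logits)
    best = max(range(K), key=lambda i: probs[i])
    if probs[best] < min_conf:
        return None                     # ambiguous scene: refuse to commit
    # Bin centre: bins tile [0, 360) degrees.
    return best * BIN_WIDTH + BIN_WIDTH / 2.0, probs[best]

# Stand-in for one forward pass of the CNN on a spherical image
# (values chosen only to make bin 2 the clear winner).
fake_cnn_logits = [0.1, 0.2, 2.5, 0.3, 0.0, -1.0, 0.1, 0.2]
```

    A real deployment would replace `fake_cnn_logits` with the network's output for the current uncalibrated spherical image; the confidence threshold is one way to realize the "high confidence levels" the abstract mentions.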

  9. [Experimental study of angiography using vascular interventional robot-2(VIR-2)].

    PubMed

    Tian, Zeng-min; Lu, Wang-sheng; Liu, Da; Wang, Da-ming; Guo, Shu-xiang; Xu, Wu-yi; Jia, Bo; Zhao, De-peng; Liu, Bo; Gao, Bao-feng

    2012-06-01

    To verify the feasibility and safety of a new vascular interventional robot system used in vascular interventional procedures. The vascular interventional robot type-2 (VIR-2) comprises a master-slave body propulsion system, an image navigation system and a force feedback system; catheter movement can be achieved under automatic control and navigation, with force feedback integrated in real time. An in vitro pre-test in a vascular model was followed by cerebral angiography in a dog. The surgeon controlled the vascular interventional robot remotely, the catheter was inserted into the intended target, and the catheter positioning error and the operation time were evaluated. The in vitro pre-test and animal experiment went well; the catheter could enter any vascular branch. Catheter positioning error was less than 1 mm. The angiography operation in the animal was carried out smoothly without complication; the success rate of the operation was 100%, the experiments took 26 and 30 minutes, efficiency was slightly improved compared with the VIR-1, and the time staff were exposed to the DSA machine was 0 minutes. The resistance measured by the force sensor can be displayed to the operator to provide a security guarantee for the operation. There were no surgical complications. VIR-2 is safe and feasible and can achieve remote catheter operation and angiography; the master-slave system matches the characteristics of the traditional procedure. The three-dimensional image can guide the operation more smoothly, and the force feedback device provides remote real-time haptic information to ensure the safety of the operation.

  10. Design of an autonomous exterior security robot

    NASA Technical Reports Server (NTRS)

    Myers, Scott D.

    1994-01-01

    This paper discusses the requirements and preliminary design of a robotic vehicle designed for performing autonomous exterior perimeter security patrols around warehouse areas, ammunition supply depots, and industrial parks for the U.S. Department of Defense. The preliminary design allows for the operation of up to eight vehicles in a six-kilometer by six-kilometer zone with autonomous navigation and obstacle avoidance. In addition to detecting crawling intruders at 100 meters, the system must perform real-time inventory checking and database comparisons using a microwave tag system.

  11. Sign detection for autonomous navigation

    NASA Astrophysics Data System (ADS)

    Goodsell, Thomas G.; Snorrason, Magnus S.; Cartwright, Dustin; Stube, Brian; Stevens, Mark R.; Ablavsky, Vitaly X.

    2003-09-01

    Mobile robots currently cannot detect and read arbitrary signs. This is a major hindrance to mobile robot usability, since the robots cannot be tasked using directions that are intuitive to humans. It also limits their ability to report their position relative to intuitive landmarks. Other researchers have demonstrated some success in traffic sign recognition, but using template-based methods limits the set of recognizable signs. There is a clear need for a sign detection and recognition system that can process a much wider variety of signs: traffic signs, street signs, store-name signs, building directories, room signs, etc. We are developing a system for Sign Understanding in Support of Autonomous Navigation (SUSAN) that detects signs from various cues common to most signs: vivid colors, compact shape, and text. We have demonstrated the feasibility of our approach on a variety of signs in both indoor and outdoor locations.

  12. Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manges, W.W.; Hamel, W.R.; Weisbin, C.R.

    1988-01-01

    The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform, including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 VAX 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.

  13. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.

  14. Methods and Apparatus for Autonomous Robotic Control

    NASA Technical Reports Server (NTRS)

    Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)

    2017-01-01

    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.

  15. Novel robotic catheter manipulation system integrated with remote magnetic navigation for fully remote ablation of atrial tachyarrhythmias: a two-centre evaluation.

    PubMed

    Nölker, Georg; Gutleben, Klaus-Jürgen; Muntean, Bogdan; Vogt, Jürgen; Horstkotte, Dieter; Dabiri Abkenari, Lara; Akca, Ferdi; Szili-Torok, Tamas

    2012-12-01

    Studies have shown that remote magnetic navigation is safe and effective for ablation of atrial arrhythmias, although optimal outcomes often require frequent manual manipulation of a circular mapping catheter. The Vdrive robotic system ('Vdrive') was designed for remote navigation of circular mapping catheters to enable a fully remote procedure. This study details the first human clinical experience with remote circular catheter manipulation in the left atrium. This was a prospective, multi-centre, non-randomized consecutive case series that included patients presenting for catheter ablation of left atrial arrhythmias. Remote systems were used exclusively to manipulate both the circular mapping catheter and the ablation catheter. Patients were followed through hospital discharge. Ninety-four patients were included in the study, including 23 with paroxysmal atrial fibrillation (AF), 48 with persistent AF, and 15 suffering from atrial tachycardias. The population was predominantly male (77%) with a mean age of 60.5 ± 11.7 years. The Vdrive was used for remote navigation between veins, creation of chamber maps, and gap identification with segmental isolation. The intended acute clinical endpoints were achieved in 100% of patients. Mean case time was 225.9 ± 70.5 min. Three patients (3.2%) crossed over to manual circular mapping catheter navigation. There were no adverse events related to the use of the remote manipulation system. The results of this study demonstrate that remote manipulation of a circular mapping catheter in the ablation of atrial arrhythmias is feasible and safe. Prospective randomized studies are needed to prove efficiency improvements over manual techniques.

  16. Scene Segmentation For Autonomous Robotic Navigation Using Sequential Laser Projected Structured Light

    NASA Astrophysics Data System (ADS)

    Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.

    1987-01-01

    Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.

  17. Deep space telecommunications, navigation, and information management. Support of the space exploration initiative

    NASA Astrophysics Data System (ADS)

    Hall, Justin R.; Hastrup, Rolf C.

    The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart, fundamentally, from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.

  18. Semi autonomous mine detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with the sensors required for navigation and marking, countermine sensors, and a number of integrated software packages which provide for real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator, and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking, and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing, and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude, from an autonomous robotic perspective, the rapid development and deployment of fieldable systems.

  19. FLEXnav: a fuzzy logic expert dead-reckoning system for the Segway RMP

    NASA Astrophysics Data System (ADS)

    Ojeda, Lauro; Raju, Mukunda; Borenstein, Johann

    2004-09-01

    Most mobile robots use a combination of absolute and relative sensing techniques for position estimation. Relative positioning techniques are generally known as dead-reckoning. Many systems use odometry as their only dead-reckoning means. However, in recent years fiber optic gyroscopes have become more affordable and are being used on many platforms to supplement odometry, especially in indoor applications. Still, if the terrain is not level (i.e., rugged or rolling terrain), the tilt of the vehicle introduces errors into the conversion of gyro readings to vehicle heading. In order to overcome this problem vehicle tilt must be measured and factored into the heading computation. A unique new mobile robot is the Segway Robotics Mobility Platform (RMP). This functionally close relative of the innovative Segway Human Transporter (HT) stabilizes a statically unstable single-axle robot dynamically, based on the principle of the inverted pendulum. While this approach works very well for human transportation, it introduces a unique set of challenges to navigation equipment using an onboard gyro. This is due to the fact that in operation the Segway RMP constantly changes its forward tilt to keep from falling over. This paper introduces our new Fuzzy Logic Expert rule-based navigation (FLEXnav) method for fusing data from multiple gyroscopes and accelerometers in order to accurately estimate the attitude (i.e., heading and tilt) of a mobile robot. The attitude information is then further fused with wheel encoder data to estimate the three-dimensional position of the mobile robot. We have further extended this approach to include the special conditions of operation on the Segway RMP. The paper presents experimental results of a Segway RMP equipped with our system and running over moderately rugged terrain.
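The tilt problem the paper addresses can be illustrated with the standard Euler-angle kinematics that convert body-frame gyro rates to a heading rate. This is a generic textbook relation, not the FLEXnav fuzzy rule base itself:

```python
import math

def heading_rate(q, r, roll, pitch):
    """Heading (yaw) rate from body-frame angular rates via the
    standard Euler-angle kinematic equation (a generic sketch of
    the tilt compensation the paper motivates).
    q, r: body pitch and yaw rates (rad/s); roll, pitch in radians."""
    return (q * math.sin(roll) + r * math.cos(roll)) / math.cos(pitch)

# On level ground the body yaw rate passes through unchanged...
print(heading_rate(0.0, 0.1, 0.0, 0.0))
# ...but with 20 degrees of forward tilt (as on a Segway RMP, which
# constantly pitches to balance) the raw gyro reading must be rescaled.
print(heading_rate(0.0, 0.1, 0.0, math.radians(20.0)))
```

Ignoring the tilt term and integrating the raw yaw-gyro reading directly is exactly the error source that grows on rugged or rolling terrain.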

  20. Automated site characterization for robotic sample acquisition systems

    NASA Astrophysics Data System (ADS)

    Scholl, Marija S.; Eberlein, Susan J.

    1993-04-01

    A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.

  1. Neural Network Based Sensory Fusion for Landmark Detection

    NASA Technical Reports Server (NTRS)

    Kumbla, Kishan -K.; Akbarzadeh, Mohammad R.

    1997-01-01

    NASA is planning to send numerous unmanned planetary missions to explore space. This requires autonomous robotic vehicles which can navigate in an unstructured, unknown, and uncertain environment. Landmark-based navigation is a new area of research which differs from traditional goal-oriented navigation, where a mobile robot starts from an initial point and reaches a destination in accordance with a pre-planned path. Landmark-based navigation has the advantage of allowing the robot to find its way without communication with the mission control station and without exact knowledge of its coordinates. Current algorithms based on landmark navigation, however, pose several constraints. First, they require large memories to store the images. Second, the task of comparing the images using traditional methods is computationally intensive, and consequently real-time implementation is difficult. The method proposed here consists of three stages. The first stage utilizes a heuristic-based algorithm to identify significant objects. The second stage utilizes a neural network (NN) to efficiently classify images of the identified objects. The third stage combines distance information with the classification results of the neural networks for efficient and intelligent navigation.

  2. A lightweight, inexpensive robotic system for insect vision.

    PubMed

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress on understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
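As a rough illustration of the optic-flow computation the paper evaluates, a gradient-based (Lucas-Kanade style) estimate of a constant shift between two 1-D frames can be written as follows. The real system computes 2-D flow on camera images; this toy sinusoidal signal is an assumption for demonstration only:

```python
import numpy as np

def flow_1d(f0, f1):
    """Least-squares shift estimate from the brightness-constancy
    equation I_x * v + I_t = 0 (gradient-based, Lucas-Kanade style)."""
    Ix = np.gradient(f0)       # spatial gradient, per pixel
    It = f1 - f0               # temporal gradient between frames
    return -np.sum(Ix * It) / np.sum(Ix * Ix)

# Two 1-D "frames": the second is the first shifted by 2 pixels.
x = np.linspace(0.0, 4.0 * np.pi, 400)
dx = x[1] - x[0]
frame0 = np.sin(x)
frame1 = np.sin(x - 2.0 * dx)
est = flow_1d(frame0, frame1)
print("estimated shift (pixels):", est)   # close to 2
```

The same least-squares idea, applied per image patch with 2-D gradients, yields the flow fields that insect-vision models use for motion detection.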

  3. Assessment of Navigation Using a Hybrid Cognitive/Metric World Model

    DTIC Science & Technology

    2015-01-01

    The robot failed to avoid the stairs of the church (Table A-26, assessment of vignette 1, path 6b, by researcher TBS: "Navigate left of the ..."). One goal of the US Army Research Laboratory's Robotic Collaborative Technology Alliance is to develop a cognitive architecture that would allow a robot to operate on both the semantic and metric levels. As such, both symbolic and metric information would be interpreted within

  4. Nonuniform Deployment of Autonomous Agents in Harbor-Like Environments

    DTIC Science & Technology

    2014-11-12

    ... closer to the ith agent than to all other agents. Interested readers are referred to [55] for a comprehensive study on Voronoi partitioning and its applications. Reference fragments: "... robots: an RFID approach," PhD dissertation, School of Electrical Engineering and Computer Science, University of Ottawa, October 2012; [55] A. Okabe, B. ...; ... Gueaieb, "A stochastic approach of mobile robot navigation using customized RFID systems," International Conference on Signals, Circuits and Systems.

  5. Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    PubMed Central

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-01-01

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a strong need for algorithms that provide robots with these sorts of skills, from locating the objects needed to accomplish a task to treating those objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. It aims to use current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot to prove its accuracy and efficiency. PMID:24732101

  6. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    A robot from the Intrepid Systems team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  7. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Stellar Automation Systems team poses for a picture with their robot after attempting the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  8. Maneuverability and mobility in palm-sized legged robots

    NASA Astrophysics Data System (ADS)

    Kohut, Nicholas J.; Birkmeyer, Paul M.; Peterson, Kevin C.; Fearing, Ronald S.

    2012-06-01

    Palm-sized legged robots show promise for military and civilian applications, including exploration of hazardous or difficult to reach places, search and rescue, espionage, and battlefield reconnaissance. However, they also face many technical obstacles, including, but not limited to, actuator performance, weight constraints, processing power, and power density. This paper presents an overview of several robots from the Biomimetic Millisystems Laboratory at UC Berkeley, including the OctoRoACH, a steerable, running legged robot capable of basic navigation and equipped with a camera and active tail; CLASH, a dynamic climbing robot; and BOLT, a hybrid crawling and flying robot. The paper also discusses, and presents some preliminary solutions to, the technical obstacles listed above, plus issues such as robustness to unstructured environments, limited sensing and communication bandwidths, and system integration.

  9. On learning navigation behaviors for small mobile robots with reservoir computing architectures.

    PubMed

    Antonelo, Eric Aislan; Schrauwen, Benjamin

    2015-04-01

    This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.
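The RC recipe described above, a fixed random recurrent reservoir with only a linear readout trained, can be sketched on a toy delayed-recall task. Reservoir size, scaling constants, and the task itself are illustrative assumptions, not the paper's navigation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: only the linear readout is trained,
# which is the core idea of reservoir computing.
N_RES, N_IN, WASHOUT = 100, 1, 50
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0.0, 1.0, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return state matrix."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by 3 steps (needs reservoir memory).
u = rng.uniform(-1.0, 1.0, 500)
target = np.roll(u, 3)
X = run_reservoir(u)[WASHOUT:]      # discard initial transient
y = target[WASHOUT:]
# Ridge-regression readout: the only trained part of the network.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

In the paper's setting the reservoir input would be a sensory-motor stream and separate readouts would discriminate the navigation behaviors embedded in the reservoir's high-dimensional state.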

  10. A Behavior-Based Strategy for Single and Multi-Robot Autonomous Exploration

    PubMed Central

    Cepeda, Jesus S.; Chaimowicz, Luiz; Soto, Rogelio; Gordillo, José L.; Alanís-Reyes, Edén A.; Carrillo-Arce, Luis C.

    2012-01-01

    In this paper, we consider the problem of autonomous exploration of unknown environments with single and multiple robots. This is a challenging task, with several potential applications. We propose a simple yet effective approach that combines a behavior-based navigation with an efficient data structure to store previously visited regions. This allows robots to safely navigate, disperse and efficiently explore the environment. A series of experiments performed using a realistic robotic simulator and a real testbed scenario demonstrate that our technique effectively distributes the robots over the environment and allows them to quickly accomplish their mission in large open spaces, narrow cluttered environments, dead-end corridors, as well as rooms with minimum exits.
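A minimal version of the visited-region data structure that such a behavior-based explorer might combine with its navigation behaviors could look like the following. The cell size and API are assumptions, not the authors' implementation:

```python
class VisitedGrid:
    """Coarse grid marking previously visited regions: the kind of
    simple, efficient data structure a behavior-based explorer can
    use to bias its wander behavior toward unexplored space."""

    def __init__(self, cell=0.5):
        self.cell = cell            # cell edge length in meters (assumed)
        self.cells = set()

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def mark(self, x, y):
        """Record the cell containing pose (x, y) as visited."""
        self.cells.add(self._key(x, y))

    def visited(self, x, y):
        """True if the cell containing (x, y) was visited before."""
        return self._key(x, y) in self.cells

grid = VisitedGrid(cell=0.5)
grid.mark(1.2, 3.4)
print(grid.visited(1.3, 3.3))   # same 0.5 m cell -> True
print(grid.visited(5.0, 5.0))   # unexplored -> False
```

Because the store is a hash set of cell indices, marking and lookup are constant-time, which keeps the overhead negligible even for large environments or multiple robots sharing the map.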

  11. Alignment Jig for the Precise Measurement of THz Radiation

    NASA Technical Reports Server (NTRS)

    Javadi, Hamid H.

    2009-01-01

    A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.
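The GPS/IMU blending described above can be illustrated with a one-dimensional Kalman filter in which IMU acceleration drives the prediction and GPS fixes drive the update. The real package is three-dimensional with attitude states; the state model and all noise values here are assumptions:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])         # IMU accel enters as control input
H = np.array([[1.0, 0.0]])              # GPS measures position only
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[0.25]])                  # GPS noise variance (0.5 m std)

def kf_step(x, P, accel, gps_pos):
    """One predict (IMU) + update (GPS) cycle of the Kalman filter."""
    x = F @ x + B * accel                  # predict with IMU acceleration
    P = F @ P @ F.T + Q
    y = gps_pos - H @ x                    # GPS innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Truth: start at rest, constant 1 m/s^2 acceleration for 10 s.
rng = np.random.default_rng(1)
x_est, P = np.zeros(2), np.eye(2)
for k in range(100):
    t = (k + 1) * dt
    true_pos = 0.5 * t**2
    gps = true_pos + rng.normal(0.0, 0.5)  # noisy GPS fix
    x_est, P = kf_step(x_est, P, 1.0, np.array([gps]))
print("final position error (m):", abs(x_est[0] - 50.0))
```

The estimate tracks the true trajectory far more tightly than either sensor alone: raw GPS is 0.5 m noisy, while pure accelerometer integration drifts without bound.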

  12. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center’s Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include: rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of “dull, dirty or dangerous” tasks allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center’s newest development areas, can provide crew with low mass exercise capability and also augment an astronaut’s strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in-orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles, and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space: Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space, and on Mars. The Center’s Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force measuring shoes, and grasp assist gloves.

  13. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center's Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include: rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of "dull, dirty or dangerous" tasks allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center's newest development areas, can provide crew with low mass exercise capability and also augment an astronaut's strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in-orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles, and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space: Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space, and on Mars. The Center's Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force measuring shoes, and grasp assist gloves.

  14. Automation and robotics - Key to productivity. [in industry and space

    NASA Technical Reports Server (NTRS)

    Cohen, A.

    1985-01-01

    The automated and robotic systems requirements of the NASA Space Station are prompted by maintenance, repair, servicing and assembly requirements. Trend analyses, fault diagnoses, and subsystem status assessments for the Station's electrical power, guidance, navigation, control, data management and environmental control subsystems will be undertaken by cybernetic expert systems; this will reduce or eliminate on-board or ground facility activities that would otherwise be essential, enhancing system productivity. Additional capabilities may also be obtained through the incorporation of even a limited amount of artificial intelligence in the controllers of the various Space Station systems.

  15. Telemedicine and robotics: paving the way to the globalization of surgery.

    PubMed

    Senapati, S; Advincula, A P

    2005-12-01

    The concept of delivering health services at a distance, or telemedicine, is an emerging tool for the field of surgery. For the surgical services, telepresence surgery through robotics is gradually being incorporated into health care practices. This article will provide a brief overview of the principles surrounding telemedicine and telepresence surgery as they specifically relate to robotics. Where limitations have been reached in laparoscopy, robotics has allowed further steps forward. The development of robotics in medicine has been a progression from passive to immersive technology. In gynecology, the utilization of robotics has evolved from the use of Aesop, a robotic arm for camera manipulation, to full robotic systems such as Zeus, and the daVinci surgical system. These systems have not only been used directly for a variety of procedures but have also become a useful tool for conferencing and the mentoring of surgeons from afar. As this mode of technology becomes assimilated into the culture of surgery and medicine globally, caution must be taken to carefully navigate the economic, legal and ethical implications of telemedicine. Despite the challenges faced, telepresence surgery holds promise for more widespread applications.

  16. A neural network-based exploratory learning and motor planning system for co-robots

    PubMed Central

    Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640

  17. A neural network-based exploratory learning and motor planning system for co-robots.

    PubMed

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  18. Autonomous detection of indoor and outdoor signs

    NASA Astrophysics Data System (ADS)

    Holden, Steven; Snorrason, Magnus; Goodsell, Thomas; Stevens, Mark R.

    2005-05-01

    Most goal-oriented mobile robot tasks involve navigation to one or more known locations. This is generally done using GPS coordinates and landmarks outdoors, or wall-following and fiducial marks indoors. Such approaches ignore the rich source of navigation information that is already in place for human navigation in all man-made environments: signs. A mobile robot capable of detecting and reading arbitrary signs could be tasked using directions that are intuitive to humans, and it could report its location relative to intuitive landmarks (a street corner, a person's office, etc.). Such ability would not require active marking of the environment and would be functional in the absence of GPS. In this paper we present an updated version of a system we call Sign Understanding in Support of Autonomous Navigation (SUSAN). This system relies on cues common to most signs: the presence of text, vivid color, and compact shape. By not relying on templates, SUSAN can detect a wide variety of signs: traffic signs, street signs, store-name signs, building directories, room signs, etc. In this paper we focus on the text detection capability. We present results summarizing probability of detection and false alarm rate across many scenes containing signs of very different designs and in a variety of lighting conditions.

  19. Robotics in Arthroplasty: A Comprehensive Review.

    PubMed

    Jacofsky, David J; Allen, Mark

    2016-10-01

    Robotic-assisted orthopedic surgery has been available clinically in some form for over 2 decades, claiming to improve total joint arthroplasty by enhancing the surgeon's ability to reproduce alignment and therefore better restore normal kinematics. Current systems include robotic arms, robotic-guided cutting jigs, and robotic milling systems, with a diversity of navigation strategies using active, semiactive, or passive control systems. Semiactive systems have become dominant, providing a haptic window through which the surgeon is able to consistently prepare an arthroplasty based on preoperative planning. A review of previous designs and clinical studies demonstrates that these robotic systems decrease variability and increase precision, primarily in component positioning and alignment. Some early clinical results indicate decreased revision rates and improved patient satisfaction with robotic-assisted arthroplasty. Future design objectives include more precise planning and even more consistent intraoperative execution. Despite this cautious optimism, many still wonder whether robotics will ultimately increase cost and operative time without objectively improving outcomes. Over the long term, every industry into which robotic technology has been introduced has ultimately shown increased production capacity, improved accuracy and precision, and lower cost. A new generation of robotic systems is now being introduced into the arthroplasty arena, and early results with unicompartmental knee arthroplasty and total hip arthroplasty have demonstrated improved accuracy of placement, improved satisfaction, and reduced complications. Further studies are needed to confirm the cost effectiveness of these technologies. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Visual Navigation Constructing and Utilizing Simple Maps of an Indoor Environment

    DTIC Science & Technology

    1989-03-01

    places are connected to each other, so that the robot may plan routes. On a more advanced level, navigation may require an understanding of the meaning...two vertical lines, suitably separated from each other, through which it tries to lead the robot...the observer will have no trouble in determining where the wall is. A robot, with far less processing power than humans have, might be able to determine

  1. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
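
    The minimum-variance estimator for two independent estimates of the same quantity has a standard closed form (inverse-variance weighting). A sketch with illustrative numbers; in the interface, the two inputs would be the head-orientation estimates from the inertial sensor and the vision system:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance (inverse-variance weighted) fusion of two
    independent estimates of the same quantity."""
    w_a = var_b / (var_a + var_b)
    fused = w_a * est_a + (1 - w_a) * est_b
    var = (var_a * var_b) / (var_a + var_b)
    return fused, var

# e.g. head yaw in degrees: inertial estimate vs. vision estimate
yaw, var = fuse(12.0, 4.0, 10.0, 1.0)
print(round(yaw, 2), round(var, 2))  # 10.4 0.8
```

    The fused variance is always below the smaller input variance, which is why combining the two sensing techniques outperforms either one alone.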

  2. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  3. Endocavity Ultrasound Probe Manipulators

    PubMed Central

    Stoianovici, Dan; Kim, Chunwoo; Schäfer, Felix; Huang, Chien-Ming; Zuo, Yihe; Petrisor, Doru; Han, Misop

    2014-01-01

    We developed two similar structure manipulators for medical endocavity ultrasound probes with 3 and 4 degrees of freedom (DoF). These robots allow scanning with ultrasound for 3-D imaging and enable robot-assisted image-guided procedures. Both robots use remote center of motion kinematics, characteristic of medical robots. The 4-DoF robot provides unrestricted manipulation of the endocavity probe. With the 3-DoF robot the insertion motion of the probe must be adjusted manually, but the device is simpler and may also be used to manipulate external-body probes. The robots enabled a novel surgical approach of using intraoperative image-based navigation during robot-assisted laparoscopic prostatectomy (RALP), performed with concurrent use of two robotic systems (Tandem, T-RALP). Thus far, a clinical trial for evaluation of safety and feasibility has been performed successfully on 46 patients. This paper describes the architecture and design of the robots, the two prototypes, control features related to safety, preclinical experiments, and the T-RALP procedure. PMID:24795525

  4. Autonomous assistance navigation for robotic wheelchairs in confined spaces.

    PubMed

    Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F

    2010-01-01

    In this work, a visual interface for assisting the navigation of a robotic wheelchair is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface offers two navigation modes: non-autonomous and autonomous. In the non-autonomous mode, the robotic wheelchair is driven by means of a hand joystick, which directs the motion of the vehicle within the environment. The autonomous mode is engaged when the user of the wheelchair has to turn (90, -90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results for the interface are also shown in this work.
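
    Any maneuverability algorithm must respect the wheelchair's kinematics; a minimal sketch of the standard unicycle (differential-drive) model executing one of the in-place turns mentioned above, with illustrative velocities and time step:

```python
import math

def step(x, y, th, v, w, dt):
    """One unicycle/differential-drive kinematic update:
    v = forward speed, w = angular speed, th = heading."""
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

# In-place 90-degree turn: zero forward speed, 0.5 rad/s for pi seconds
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = step(x, y, th, 0.0, 0.5, math.pi / 100)
print(round(math.degrees(th)))  # 90
```

    The same update, with nonzero v, propagates the pose that the SLAM algorithm corrects against the map.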

  5. Application of the HeartLander Crawling Robot for Injection of a Thermally Sensitive Anti-Remodeling Agent for Myocardial Infarction Therapy

    PubMed Central

    Chapman, Michael P.; López González, Jose L.; Goyette, Brina E.; Fujimoto, Kazuro L.; Ma, Zuwei; Wagner, William R.; Zenati, Marco A.; Riviere, Cameron N.

    2011-01-01

    The injection of a mechanical bulking agent into the left ventricular (LV) wall of the heart has shown promise as a therapy for maladaptive remodeling of the myocardium after myocardial infarct (MI). The HeartLander robotic crawler presented itself as an ideal vehicle for minimally-invasive, highly accurate epicardial injection of such an agent. Use of the optimal bulking agent, a thermosetting hydrogel developed by our group, presents a number of engineering obstacles, including cooling of the miniaturized injection system while the robot is navigating in the warm environment of a living patient. We present herein a demonstration of an integrated miniature cooling and injection system in the HeartLander crawling robot, that is fully biocompatible and capable of multiple injections of a thermosetting hydrogel into dense animal tissue while the entire system is immersed in a 37°C water bath. PMID:21096276

  6. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for the autonomous navigation system of a quadruped robot walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. From the matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
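
    Once edge pixel pairs are matched, depth follows from the standard binocular pinhole model, Z = f·B/d (focal length in pixels, baseline in meters, disparity in pixels). A sketch with illustrative camera parameters:

```python
def triangulate_depth(f_px, baseline_m, disparity_px):
    """Depth of a matched pixel pair under the binocular pinhole
    model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("matched pair must have positive disparity")
    return f_px * baseline_m / disparity_px

# 700 px focal length, 10 cm baseline, 35 px disparity -> 2 m depth
print(round(triangulate_depth(700, 0.10, 35), 3))  # 2.0
```

    The full imaging model also recovers X and Y from the pixel coordinates, but depth from disparity is the core of the reconstruction.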

  7. Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationships among the variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system and contribute to the closure of currently identified HRP gaps: (1) overlays -- use of overlays for teleoperation to augment the information available on the video feed; (2) camera views -- type and arrangement of camera views for better task performance and awareness of surroundings; and (3) command modalities -- development of gesture and voice command vocabularies.

  8. Learned navigation in unknown terrains: A retraction method

    NASA Technical Reports Server (NTRS)

    Rao, Nageswara S. V.; Stoltzfus, N.; Iyengar, S. Sitharama

    1989-01-01

    The problem of learned navigation of a circular robot R, of radius delta (delta >= 0), through a terrain whose model is not a priori known is considered. Two-dimensional, finite-sized terrains populated by an unknown (but finite) number of simple polygonal obstacles are considered; the number and locations of the vertices of each obstacle are unknown to R. R is equipped with a sensor system that detects all vertices and edges visible from its present location. In this context two problems are addressed. In the visit problem, the robot is required to visit a sequence of destination points; in the terrain model acquisition problem, the robot is required to acquire a complete model of the terrain. An algorithmic framework is presented for solving these two problems using a retraction of the free space onto the Voronoi diagram of the terrain. Algorithms are then presented to solve the visit problem and the terrain model acquisition problem.
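
    The essence of the retraction is mapping each free-space point onto the locus of points equidistant from two or more obstacles (the Voronoi diagram), where clearance is locally maximal. A toy numpy sketch with two point obstacles; the paper's incremental, sensor-based construction over polygonal obstacles is considerably more involved:

```python
import numpy as np

def retract(p, obstacles, step=1e-3, max_iter=20000):
    """Retract a free-space point onto the Voronoi diagram of point
    obstacles: move directly away from the nearest obstacle until a
    second obstacle becomes (nearly) equidistant."""
    p = np.asarray(p, float)
    for _ in range(max_iter):
        d = np.linalg.norm(obstacles - p, axis=1)
        order = np.argsort(d)
        if d[order[1]] - d[order[0]] < step:
            return p  # (nearly) equidistant: on the diagram
        p = p + step * (p - obstacles[order[0]]) / d[order[0]]
    return p

obstacles = np.array([[0.0, 0.0], [4.0, 0.0]])
q = retract([1.0, 1.0], obstacles)
print(np.round(q, 2))  # ~[2. 2.], on the equidistant bisector
```

    Navigation then proceeds along the one-dimensional diagram rather than through the full free space, which is what makes the visit and model-acquisition algorithms tractable.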

  9. Soft computing-based terrain visual sensing and data fusion for unmanned ground robotic systems

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir

    2006-05-01

    In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
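
    In the scalar case, Kalman-filter aggregation of independent sub-terrain assessments reduces to a recursive minimum-variance update. A sketch with illustrative traversability scores and variances (the paper's actual state and measurement models are not given in the abstract):

```python
def kalman_fuse(estimates):
    """Recursively fuse independent (value, variance) assessments,
    as in scalar Kalman filtering with a static state."""
    x, p = estimates[0]
    for z, r in estimates[1:]:
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # updated traversability estimate
        p = (1 - k) * p        # reduced uncertainty
    return x, p

# Three sub-terrain traversability scores with differing confidence
x, p = kalman_fuse([(0.8, 0.04), (0.6, 0.02), (0.7, 0.04)])
print(round(x, 3), round(p, 3))  # ~0.675 0.01
```

    Each additional sub-terrain assessment shrinks the variance, so the aggregate score is more reliable than any single classifier output.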

  10. Electromagnetic tracking of flexible robotic catheters enables "assisted navigation" and brings automation to endovascular navigation in an in vitro study.

    PubMed

    Schwein, Adeline; Kramer, Benjamin; Chinnadurai, Ponraj; Virmani, Neha; Walker, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean

    2018-04-01

    Combining three-dimensional (3D) catheter control with electromagnetic (EM) tracking-based navigation significantly reduced fluoroscopy time and improved robotic catheter movement quality in a previous in vitro pilot study. The aim of this study was to expand on those results and to extend the value of EM tracking with a novel feature, assisted navigation, allowing automatic catheter orientation and semiautomatic vessel cannulation. Eighteen users navigated a robotic catheter in an aortic aneurysm phantom using an EM guidewire and a modified 9F robotic catheter with EM sensors at the tips of both leader and sheath. All users cannulated two targets, the left renal artery and the posterior gate, using four visualization modes: (1) standard fluoroscopy (control). (2) 2D biplane fluoroscopy showing real-time virtual catheter localization and orientation from EM tracking. (3) 2D biplane fluoroscopy with the novel EM assisted navigation, which allows the user to define the target vessel. The robotic catheter orients itself automatically toward the target; the user then only needs to advance the guidewire along this predefined optimized path to catheterize the vessel. While the catheter is advanced over the wire, assisted navigation automatically adjusts catheter bending and rotation to ensure smooth progression, avoiding loss of wire access. (4) A virtual 3D representation of the phantom showing real-time virtual catheter localization and orientation. Standard fluoroscopy was always available; cannulation and fluoroscopy times were noted for every mode and target cannulation. Quality of catheter movement was assessed by measuring the number of submovements of the catheter using the 3D coordinates of the EM sensors. A t-test was used to compare the standard fluoroscopy mode against the EM tracking modes. EM tracking significantly reduced the mean fluoroscopy time (P < .001) and the number of submovements (P < .02) for both cannulation tasks.
For the posterior gate, mean cannulation time was also significantly reduced when using EM tracking (P < .001). The use of novel EM assisted navigation feature (mode 3) showed further reduced cannulation time for the posterior gate (P = .002) and improved quality of catheter movement for the left renal artery cannulation (P = .021). These results confirmed the findings of a prior study that highlighted the value of combining 3D robotic catheter control and 3D navigation to improve safety and efficiency of endovascular procedures. The novel EM assisted navigation feature augments the robotic master/slave concept with automated catheter orientation toward the target and shows promising results in reducing procedure time and improving catheter motion quality. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  11. Flexible robotics with electromagnetic tracking improves safety and efficiency during in vitro endovascular navigation.

    PubMed

    Schwein, Adeline; Kramer, Ben; Chinnadurai, Ponraj; Walker, Sean; O'Malley, Marcia; Lumsden, Alan; Bismuth, Jean

    2017-02-01

    One limitation of the use of robotic catheters is the lack of real-time three-dimensional (3D) localization and position updating: they are still navigated based on two-dimensional (2D) X-ray fluoroscopic projection images. Our goal was to evaluate whether incorporating an electromagnetic (EM) sensor on a robotic catheter tip could improve endovascular navigation. Six users were tasked to navigate using a robotic catheter with incorporated EM sensors in an aortic aneurysm phantom. All users cannulated two anatomic targets (left renal artery and posterior "gate") using four visualization modes: (1) standard fluoroscopy mode (control), (2) 2D fluoroscopy mode showing real-time virtual catheter orientation from EM tracking, (3) 3D model of the phantom with anteroposterior and endoluminal view, and (4) 3D model with anteroposterior and lateral view. Standard X-ray fluoroscopy was always available. Cannulation and fluoroscopy times were noted for every mode. 3D positions of the EM tip sensor were recorded at 4 Hz to establish kinematic metrics. The EM sensor-incorporated catheter navigated as expected according to all users. The success rate for cannulation was 100%. For the posterior gate target, mean cannulation times in minutes:seconds were 8:12, 4:19, 4:29, and 3:09, respectively, for modes 1, 2, 3 and 4 (P = .013), and mean fluoroscopy times were 274, 20, 29, and 2 seconds, respectively (P = .001). 3D path lengths, spectral arc length, root mean dimensionless jerk, and number of submovements were significantly improved when EM tracking was used (P < .05), showing higher quality of catheter movement with EM navigation. The EM tracked robotic catheter allowed better real-time 3D orientation, facilitating navigation, with a reduction in cannulation and fluoroscopy times and improvement of motion consistency and efficiency. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
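
    Kinematic metrics such as 3D path length and the number of submovements can be computed directly from the logged EM-sensor positions. A sketch using one simple submovement definition (upward threshold crossings of the speed profile); the authors' spectral arc length and root mean dimensionless jerk metrics are more elaborate:

```python
import numpy as np

def path_length(pos):
    """Total 3D path length from logged sensor positions (N x 3)."""
    return float(np.linalg.norm(np.diff(pos, axis=0), axis=1).sum())

def count_submovements(speed, thresh=0.05):
    """One simple submovement definition: count upward crossings of
    a speed threshold (each accelerate-decelerate burst counts once)."""
    above = np.asarray(speed) > thresh
    return int(np.sum(~above[:-1] & above[1:]))

pos = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0]])
print(path_length(pos))            # 2.0

t = np.linspace(0, 2, 201)
# A jerky profile: two distinct speed bursts separated by a near-stop
speed = np.where(t < 1, np.sin(np.pi * t), 0.8 * np.sin(np.pi * (t - 1)))
speed = np.clip(speed, 0.0, None)
print(count_submovements(speed))   # 2
```

    Fewer submovements and shorter path length for the same cannulation indicate smoother, more efficient catheter motion.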

  12. Evaluation of the ROSA™ Spine robot for minimally invasive surgical procedures.

    PubMed

    Lefranc, M; Peltier, J

    2016-10-01

    The ROSA® robot (Medtech, Montpellier, France) is a new medical device designed to assist the surgeon during minimally invasive spine procedures. The device comprises a patient-side cart (bearing the robotic arm and a workstation) and an optical navigation camera. The ROSA® Spine robot enables accurate pedicle screw placement. Thanks to its robotic arm and navigation abilities, the robot monitors movements of the spine throughout the entire surgical procedure and thus enables accurate, safe arthrodesis for the treatment of degenerative lumbar disc diseases, exactly as planned by the surgeon. Development perspectives include (i) assistance at all levels of the spine, (ii) improved planning abilities (virtualization of the entire surgical procedure), and (iii) use in almost any percutaneous spinal procedure not limited to screw positioning, such as percutaneous endoscopic lumbar discectomy, intracorporeal implant positioning, over-the-top laminectomy, or radiofrequency ablation.

  13. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    NASA Astrophysics Data System (ADS)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, strong soft tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed to assist surgical diagnosis and therapy for MRI-guided procedures over the last decade, a unified method for developing high-fidelity robotic systems, in terms of accuracy, dynamic performance, size, robustness, and modularity, that work inside a closed-bore MRI scanner remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors; 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller, exchanging robot commands and coordinates using the OpenIGTLink open network communication protocol; and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex vivo phantom validates the system workflow and MRI compatibility, and shows that the robotic system has better than 0.01 mm positioning accuracy.

  14. Synopsis of Precision Landing and Hazard Avoidance (PL&HA) Capabilities for Space Exploration

    NASA Technical Reports Server (NTRS)

    Robertson, Edward A.

    2017-01-01

    Until recently, robotic exploration missions to the Moon, Mars, and other solar system bodies relied upon controlled blind landings. Because terrestrial techniques for terrain relative navigation (TRN) had not yet evolved to support space exploration, landing dispersions were driven by the capabilities of inertial navigation systems combined with surface relative altimetry and velocimetry. Lacking tight control over the actual landing location, mission success depended on the statistical vetting of candidate landing areas within the predicted landing dispersion ellipse based on orbital reconnaissance data, combined with the ability of the spacecraft to execute a controlled landing in terms of touchdown attitude, attitude rates, and velocity. In addition, the sensors, algorithms, and processing technologies required to perform autonomous hazard detection and avoidance in real time during the landing sequence were not yet available. Over the past decade, NASA has invested substantial resources in the development, integration, and testing of autonomous precision landing and hazard avoidance (PL&HA) capabilities. In addition to substantially improving landing accuracy and safety, these autonomous PL&HA functions also offer access to targets of interest located within more rugged and hazardous terrain. Optical TRN systems are baselined on upcoming robotic landing missions to the Moon and Mars, and NASA JPL is investigating the development of a comprehensive PL&HA system for a Europa lander. These robotic missions will demonstrate and mature PL&HA technologies that are considered essential for future human exploration missions. PL&HA technologies also have applications to rendezvous and docking/berthing with other spacecraft, as well as proximity navigation, contact, and retrieval missions to smaller bodies with microgravity environments, such as asteroids.

  15. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  16. Navigation through unknown and dynamic open spaces using topological notions

    NASA Astrophysics Data System (ADS)

    Miguel-Tomé, Sergio

    2018-04-01

    Until now, most navigation algorithms have had the purpose of directing a system toward one point in space. However, humans communicate tasks by specifying spatial relations among elements or places. In addition, the environments in which humans develop their activities are extremely dynamic. The only option that allows for successful navigation in dynamic and unknown environments is making real-time decisions. Therefore, robots capable of collaborating closely with human beings must be able to make decisions based on the local information registered by their sensors, and to interpret and express spatial relations. Furthermore, when a person is asked to perform a task in an environment, the task is communicated in terms of a category of goals, so the person does not need to be supervised. Thus, two problems appear when one wants to create multifunctional robots: how to navigate in dynamic and unknown environments using spatial relations, and how to accomplish this without supervision. In this article, a new architecture addressing these two problems is presented, called the topological qualitative navigation architecture. In previous work, a qualitative heuristic called the heuristic of topological qualitative semantics (HTQS) was developed to establish and identify spatial relations. However, that heuristic only allows for establishing one spatial relation with a specific object, whereas navigation requires a temporal sequence of goals involving different objects. The new architecture attains continuous generation of goals and resolves them using HTQS, thereby achieving autonomous navigation in dynamic or unknown open environments.

  17. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) video overlays, 2) camera views, and 3) command modalities. The first study focused on video overlays and investigated how augmented reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both combined (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provided operators with both explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects that symbology type (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm.
The second study expanded on the first study by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground-operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.

  18. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

    ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial-landmark navigation system that uses three optical mouse sensors, enabling the construction of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm, which estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassifications of the landmarks so that only reliable information is fused. The orientation given by the OSR algorithm is used to significantly improve the odometry, and the recognized landmarks are used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm, with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
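
    The odometry from two displacement-measuring sensors can be sketched with the standard differential-odometry update; the track width, units, and sensor geometry below are illustrative assumptions, not the ArmAssist's actual layout:

```python
import math

def update_pose(pose, dl, dr, track):
    """Dead-reckoning update from the displacements (dl, dr) measured
    by two sensors mounted a known track width apart (mm)."""
    x, y, th = pose
    d = (dl + dr) / 2.0           # mean forward displacement
    dth = (dr - dl) / track       # heading change
    x += d * math.cos(th + dth / 2.0)
    y += d * math.sin(th + dth / 2.0)
    return (x, y, th + dth)

pose = (0.0, 0.0, 0.0)
for _ in range(100):              # equal displacements: straight line
    pose = update_pose(pose, 1.0, 1.0, 200.0)
print([round(v, 2) for v in pose])  # [100.0, 0.0, 0.0]
```

    Pure dead reckoning like this drifts without bound, which is exactly why the OSR orientation estimate and the absolute landmark fixes are fused in.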

  19. Meeting the challenges of installing a mobile robotic system

    NASA Technical Reports Server (NTRS)

    Decorte, Celeste

    1994-01-01

    The challenges of integrating a mobile robotic system into an application environment are many. Most problems inherent to installing the mobile robotic system fall into one of three categories: (1) the physical environment - location(s) where, and conditions under which, the mobile robotic system will work; (2) the technological environment - external equipment with which the mobile robotic system will interact; and (3) the human environment - personnel who will operate and interact with the mobile robotic system. The successful integration of a mobile robotic system into these three types of application environment requires more than a good pair of pliers. The tools for this job include: careful planning, accurate measurement data (as-built drawings), complete technical data of systems to be interfaced, sufficient time and attention of key personnel for training on how to operate and program the robot, on-site access during installation, and a thorough understanding and appreciation - by all concerned - of the mobile robotic system's role in the security mission at the site, as well as the machine's capabilities and limitations. Patience, luck, and a sense of humor are also useful tools to keep handy during a mobile robotic system installation. This paper will discuss some specific examples of problems in each of the three categories, and explore approaches to solving these problems. The discussion will draw from the author's experience with on-site installations of mobile robotic systems in various applications. Most of the information discussed in this paper has come directly from knowledge learned during installations of Cybermotion's SR2 security robots. A large part of the discussion will apply to any vehicle with a drive system, collision avoidance, and navigation sensors, which is, of course, what makes a vehicle autonomous. And it is with these sensors and the drive system that the installer must become familiar in order to foresee potential trouble areas in the physical, technological, and human environments.

  20. Determining the feasibility of robotic courier medication delivery in a hospital setting.

    PubMed

    Kirschling, Thomas E; Rough, Steve S; Ludwig, Brad C

    2009-10-01

    The feasibility of a robotic courier medication delivery system in a hospital setting was evaluated. Robotic couriers are self-guiding, self-propelled robots that navigate hallways and elevators to pull an attached or integrated cart to a desired destination. A robotic courier medication delivery system was pilot tested in two patient care units at a 471-bed tertiary care academic medical center. The average transit time for the existing manual medication delivery system's hourly hospitalwide deliveries was 32.6 minutes. Of this, 32.3% was spent at the patient care unit and 67.7% was spent pushing the cart or waiting at an elevator. The robotic courier traveled as fast as 1.65 ft/sec (52% of the manual system's speed) in the absence of barriers but moved at an average rate of 0.84 ft/sec (26% of the manual system's speed) during the study, primarily due to hallway obstacles. The robotic courier was used for 50% of the possible 1750 runs during the 125-day pilot due to technical or situational difficulties. Of the runs that were sent, 79 failed, yielding an overall 91% success rate. During the final month of the pilot, the success rate reached 95.6%. Customer satisfaction with the traditional manual delivery system was high; satisfaction with deliveries declined after implementation of the robotic courier medication distribution system. A robotic courier medication delivery system was implemented but was not expanded beyond the two pilot units. Implementation challenges included ongoing education on how to properly move the robotic courier and keeping the hallways clear of obstacles.

  1. Integrated system for single leg walking

    NASA Astrophysics Data System (ADS)

    Simmons, Reid; Krotkov, Eric; Roston, Gerry

    1990-07-01

    The Carnegie Mellon University Planetary Rover project is developing a six-legged walking robot capable of autonomously navigating, exploring, and acquiring samples in rugged, unknown environments. This report describes an integrated software system capable of navigating a single leg of the robot over rugged terrain. The leg, based on an early design of the Ambler Planetary Rover, is suspended below a carriage that slides along rails. To walk, the system creates an elevation map of the terrain from laser scanner images, plans an appropriate foothold based on terrain and geometric constraints, weaves the leg through the terrain to position it above the foothold, contacts the terrain with the foot, and applies enough force to advance the carriage along the rails. Walking both forward and backward, the system has traversed hundreds of meters of rugged terrain including obstacles too tall to step over, trenches too deep to step in, closely spaced obstacles, and sand hills. The implemented system consists of a number of task-specific processes (two for planning, two for perception, one for real-time control) and a central control process that directs the flow of communication between processes.
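
    The foothold-planning step described above can be sketched with a deliberately simple criterion: among reachable cells of the elevation map, prefer the locally flattest one. This is an illustrative toy, not the Ambler planner's actual cost function; `select_foothold` and the flatness measure are assumptions for the sketch.

```python
import numpy as np

def select_foothold(elevation, reach_mask):
    """Pick the flattest reachable cell in an elevation map.

    elevation: 2D array of terrain heights (meters).
    reach_mask: boolean array, True where the leg can be placed.
    Returns (row, col) of the chosen foothold.
    """
    # Local slope: magnitude of the elevation gradient at each cell.
    gy, gx = np.gradient(elevation)
    slope = np.hypot(gx, gy)
    # Disallow unreachable cells by assigning infinite cost.
    cost = np.where(reach_mask, slope, np.inf)
    return np.unravel_index(np.argmin(cost), cost.shape)

# Toy terrain: a uniform ramp with one cell flattened slightly.
terrain = np.add.outer(np.arange(5.0), np.arange(5.0))
terrain[2, 2] = terrain[2, 1]
reach = np.zeros((5, 5), dtype=bool)
reach[1:4, 1:4] = True          # leg workspace: the interior cells
r, c = select_foothold(terrain, reach)
print(int(r), int(c))
```

    A real planner would add constraints from the abstract (foot clearance along the leg's trajectory, carriage advance) on top of such a terrain cost.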

  2. Recent CESAR (Center for Engineering Systems Advanced Research) research activities in sensor based reasoning for autonomous machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; de Saussure, G.; Spelt, P.F.

    1988-01-01

    This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a priori unknown and dynamic environments, goal recognition, vision-guided manipulation, and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at a process control panel, read and understand the status of the panel's meters and dials, learn the functioning of the panel, and successfully manipulate its control devices to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.

  3. Robotic Inspection System for Non-Destructive Evaluation (nde) of Pipes

    NASA Astrophysics Data System (ADS)

    Mackenzie, L. D.; Pierce, S. G.; Hayward, G.

    2009-03-01

    The demand for remote inspection of pipework in the processing cells of nuclear plant provides significant challenges of access, navigation, inspection technique and data communication. Such processing cells typically contain several kilometres of densely packed pipework whose actual physical layout may be poorly documented. Access to these pipes is typically afforded through the radiation shield via a small removable concrete plug which may be several meters from the actual inspection site, thus considerably complicating practical inspection. The current research focuses on the robotic deployment of multiple NDE payloads for weld inspection along non-ferritic steel pipework (thus precluding use of magnetic traction options). A fully wireless robotic inspection platform has been developed that is capable of travelling along the outside of a pipe at any orientation, while avoiding obstacles such as pipe hangers and delivering a variety of NDE payloads. An eddy current array system provides rapid imaging capabilities for surface breaking defects while an on-board camera, in addition to assisting with navigation tasks, also allows real time image processing to identify potential defects. All sensor data can be processed by the embedded microcontroller or transmitted wirelessly back to the point of access for post-processing analysis.

  4. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  5. Three-dimensional sensor system using multistripe laser and stereo camera for environment recognition of mobile robots

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.

    2002-10-01

    In recent years, intelligent autonomous mobile robots have drawn tremendous interest, whether as service robots serving humans or as industrial robots replacing them. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we address a 3D sensing system for the environment recognition of mobile robots. Structured lighting is used as the basis of the 3D visual sensor system because of its robustness to the nature of the navigation environment and the ease of extracting the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is optical triangulation. By modeling the projector as another camera and using the epipolar constraints that the cameras impose, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of the sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of the sensor system for 3D environment sensing and recognition.
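
    For a rectified two-camera setup, the optical-triangulation principle the sensor relies on reduces to the familiar depth-from-disparity relation; a minimal sketch follows (the function name and the numeric parameters are illustrative, not those of the paper's hardware).

```python
import numpy as np

def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Optical triangulation: depth is inversely proportional to disparity.

    disparity_px: horizontal offset (pixels) of a laser-stripe point
                  between the two camera images.
    focal_px: focal length expressed in pixels.
    baseline_m: distance between the two camera centers (meters).
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, baseline = 0.12 m.
# A stripe point with 48 px disparity lies 2.0 m away.
print(triangulate_depth(48.0, 800.0, 0.12))  # -> 2.0
```

    The epipolar constraints mentioned in the abstract are what reduce the correspondence search for each stripe point to a single image line, after which this relation yields the depth.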

  6. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    NASA Astrophysics Data System (ADS)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment, even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo-vision-equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth from a monocular image, using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment performed on board the International Space Station (ISS) on October 8th, 2015, with the MIT/NASA SPHERES VERTIGO satellite. The main goals were (1) data gathering and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
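
    The self-supervised structure of the experiment (trusted stereo depths as labels, monocular prediction when a camera fails) can be illustrated with a deliberately simplified stand-in: a scalar monocular "feature" regressed against stereo depths under an assumed 1/x relationship. The real experiment learns from image descriptors; everything below is a sketch under that assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1: stereo works. Log (monocular feature, stereo depth) pairs.
# Here the "feature" is a stand-in scalar; in the experiment it would
# be a descriptor computed from the single-camera image.
features = rng.uniform(0.1, 1.0, size=200)
stereo_depth = 2.0 / features + rng.normal(0.0, 0.01, size=200)  # trusted labels

# Fit a simple model depth ~ a / feature + b by least squares.
A = np.column_stack([1.0 / features, np.ones_like(features)])
coeffs, *_ = np.linalg.lstsq(A, stereo_depth, rcond=None)

# Phase 2: one camera fails. Estimate depth from the mono feature alone.
def mono_depth(feature):
    return coeffs[0] / feature + coeffs[1]

print(mono_depth(0.5))  # close to the stereo answer 2.0 / 0.5 = 4.0
```

    The reliability argument in the article comes from this structure: the labels are generated on board by a sensor the system already trusts, so the learning outcome can be validated against it continuously.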

  7. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    PubMed Central

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

    This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of piezoelectric motors and enables simultaneous robot motion and MR imaging. The developed modular robot control interface software performs image-based registration and kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller with a new implementation of the open network communication protocol OpenIGTLink. Compatibility of the robot is comprehensively evaluated inside a 3-Tesla MRI scanner using standard imaging sequences, and the signal-to-noise ratio (SNR) loss is limited to 15%. Image deterioration due to the presence and motion of the robot was not observable. Twenty-five targeted needle placements inside gelatin phantoms utilizing an 18-gauge ceramic needle demonstrated 0.87 mm root mean square (RMS) error in 3D Euclidean distance based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962

  8. Visual environment recognition for robot path planning using template matched filters

    NASA Astrophysics Data System (ADS)

    Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto

    2017-08-01

    A visual approach to environment recognition for robot navigation is proposed. This work includes a template-matching filtering technique to detect obstacles and feasible paths using a single camera sensing a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path among multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
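
    The pseudo-bacterial algorithm in the paper evolves the potential field functions themselves; what it optimizes is the classical potential-field step, which can be sketched as an attractive pull toward the goal plus short-range repulsion from obstacles. The gains, ranges, and scene below are illustrative assumptions, not the paper's evolved parameters.

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    """One gradient-descent step on a classical attractive/repulsive potential field."""
    force = k_att * (goal - pos)                      # attraction toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if d < d0:                                    # repulsion only within range d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return pos + step * force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.0])]                    # lies near the straight-line path
for _ in range(600):
    pos = potential_step(pos, goal, obstacles)
print(np.linalg.norm(pos - goal))                     # small residual: goal reached
```

    Plain gradient descent on such fields can stall in local minima; deriving better-shaped potential functions is precisely what the evolutionary layer in the paper is for.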

  9. Embodied cognition for autonomous interactive robots.

    PubMed

    Hoffman, Guy

    2012-10-01

    In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.

  10. Future View: Web Navigation based on Learning User's Browsing Strategy

    NASA Astrophysics Data System (ADS)

    Nagino, Norikatsu; Yamada, Seiji

    In this paper, we propose a Future View system that assists a user's everyday Web browsing. The Future View prefetches Web pages based on the user's browsing strategies and presents them to the user in order to assist Web browsing. To learn the user's browsing strategy, the Future View uses two types of learning classifier systems: a content-based classifier system for content change patterns and an action-based classifier system for user action patterns. The results of learning are applied to crawling by Web robots, and the gathered Web pages are presented to the user through a Web browser interface. We experimentally show the effectiveness of navigation using the Future View.

  11. Toward perception-based navigation using EgoSphere

    NASA Astrophysics Data System (ADS)

    Kawamura, Kazuhiko; Peters, R. Alan; Wilkes, Don M.; Koku, Ahmet B.; Sekman, Ali

    2002-02-01

    A method for perception-based egocentric navigation of mobile robots is described. Each robot has a local short-term memory structure called the Sensory EgoSphere (SES), which is indexed by azimuth, elevation, and time. Directional sensory processing modules write information on the SES at the location corresponding to the source direction. Each robot has a partial map of its operational area that it has received a priori. The map is populated with landmarks and is not necessarily metrically accurate. Each robot is given a goal location and a route plan. The route plan is a set of via-points that are not used directly. Instead, a robot uses each point to construct a Landmark EgoSphere (LES), a circular projection of the landmarks from the map onto an EgoSphere centered at the via-point. Under normal circumstances, the LES will be mostly unaffected by slight variations in the via-point location. Thus, the route plan is transformed into a set of via-regions, each described by an LES. A robot navigates by comparing the next LES in its route plan to the current contents of its SES. It heads toward the indicated landmarks until its SES matches the LES sufficiently to indicate that the robot is near the suggested via-point. The proposed method is particularly useful for enabling the exchange of robust route information between robots under low-data-rate communications constraints. An example of such an exchange is given.
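
    The SES/LES comparison can be sketched as matching egocentric landmark azimuths: project the map landmarks onto a circle around the via-point (the LES), do the same around the robot's current position (standing in for sensed SES entries), and score the circular mismatch. This assumes landmark associations are already known; the function names are illustrative, and the paper's matching is richer.

```python
import numpy as np

def landmark_azimuths(landmarks, position):
    """Project map landmarks onto an egocentric circle of azimuths (radians)."""
    d = np.asarray(landmarks, dtype=float) - np.asarray(position, dtype=float)
    return np.arctan2(d[:, 1], d[:, 0])

def egosphere_match(ses_az, les_az):
    """Mean circular error between sensed (SES) and expected (LES) azimuths,
    assuming landmarks are associated one-to-one."""
    diff = np.angle(np.exp(1j * (np.asarray(ses_az) - np.asarray(les_az))))
    return float(np.mean(np.abs(diff)))

landmarks = [(5.0, 0.0), (0.0, 5.0), (-4.0, -3.0)]
via_point = (1.0, 1.0)
les = landmark_azimuths(landmarks, via_point)

# Far from the via-point the match is poor; near it, the error is small.
far = egosphere_match(landmark_azimuths(landmarks, (4.0, -2.0)), les)
near = egosphere_match(landmark_azimuths(landmarks, (1.1, 0.9)), les)
print(far > near)  # -> True
```

    Declaring "arrived" when the error drops below a threshold is what turns each via-point into the via-region described in the abstract.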

  12. Harnessing bistability for directional propulsion of soft, untethered robots.

    PubMed

    Chen, Tian; Bilal, Osama R; Shea, Kristina; Daraio, Chiara

    2018-05-29

    In most macroscale robotic systems, propulsion and controls are enabled through a physical tether or complex onboard electronics and batteries. A tether simplifies the design process but limits the range of motion of the robot, while onboard controls and power supplies are heavy and complicate the design process. Here, we present a simple design principle for an untethered, soft swimming robot with preprogrammed, directional propulsion without a battery or onboard electronics. Locomotion is achieved by using actuators that harness the large displacements of bistable elements triggered by surrounding temperature changes. Powered by shape memory polymer (SMP) muscles, the bistable elements in turn actuate the robot's fins. Our robots are fabricated using a commercially available 3D printer in a single print. As a proof of concept, we show the ability to program a vessel, which can autonomously deliver a cargo and navigate back to the deployment point.

  13. New methods of measuring and calibrating robots

    NASA Astrophysics Data System (ADS)

    Janocha, Hartmut; Diewald, Bernd

    1995-10-01

    ISO 9283 and RIA R15.05 define industrial robot parameters which are applied to compare the efficiency of different robots. Hitherto, however, no suitable measurement systems have been available. ICAROS is a system which combines photogrammetric procedures with an inertial navigation system. For the first time, this combination allows the high-precision static and dynamic measurement of the position as well as the orientation of the robot end-effector. Thus, not only can the measurement data for the determination of all industrial robot parameters be acquired; by integrating a new overall calibration procedure, ICAROS also allows the reduction of the absolute robot pose errors to the range of the robot's repeatability. The integration of both system components as well as measurement and calibration results are presented in this paper, using a six-axis robot as an example. A further approach, also presented here, takes into consideration not only the individual robot's errors but also the tolerances of workpieces. This allows the adjustment of off-line robot programs based on inexact or idealized CAD data in any pose. Thus the robot position, which is defined relative to the workpiece to be processed, is achieved as required. This includes the possibility of transferring taught robot programs to other devices without additional expenditure. The adjustment is based on the measurement of the robot position using two miniaturized CCD cameras mounted near the end-effector, which are carried along by the robot during the correction phase. In the area viewed by both cameras, the robot position is determined in relation to prominent geometry elements, e.g. lines or holes. The scheduled data to be compared therewith can either be calculated in modern off-line programming systems during robot programming, or determined at the so-called master robot if a transfer of the robot program is desired.

  14. A positional estimation technique for an autonomous land vehicle in an unstructured environment

    NASA Technical Reports Server (NTRS)

    Talluri, Raj; Aggarwal, J. K.

    1990-01-01

    This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in an unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information, and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for the worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
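
    The core idea can be sketched as a brute-force variant of the search: predict the horizon elevation angles a candidate DEM position would see and keep the candidate whose prediction best matches the observed horizon. The paper prunes this search with geometric heuristics; the sketch below omits that pruning, uses a synthetic random DEM, and all names and parameters are illustrative.

```python
import numpy as np

def horizon_angle(dem, cell_m, pos, height, direction, max_range=20):
    """Max elevation angle of terrain seen from `pos` along integer `direction`."""
    r, c = pos
    dr, dc = direction
    best = -np.inf
    for k in range(1, max_range):
        rr, cc = r + k * dr, c + k * dc
        if not (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]):
            break
        dist = k * cell_m * np.hypot(dr, dc)
        best = max(best, np.arctan2(dem[rr, cc] - (dem[r, c] + height), dist))
    return best

def locate(dem, cell_m, height, observed, directions):
    """Grid search: the candidate whose predicted horizon best matches `observed`."""
    best_pos, best_err = None, np.inf
    for r in range(1, dem.shape[0] - 1):
        for c in range(1, dem.shape[1] - 1):
            pred = [horizon_angle(dem, cell_m, (r, c), height, d) for d in directions]
            err = float(np.sum((np.array(pred) - observed) ** 2))
            if err < best_err:
                best_pos, best_err = (r, c), err
    return best_pos

rng = np.random.default_rng(1)
dem = rng.uniform(0.0, 30.0, size=(12, 12))       # synthetic "mountainous" DEM
dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]
true_pos = (4, 7)
obs = np.array([horizon_angle(dem, 10.0, true_pos, 1.5, d) for d in dirs])
print(locate(dem, 10.0, 1.5, obs, dirs))  # -> (4, 7)
```

    The paper's heuristics and worst-case error bounds serve to discard most candidate cells before this kind of comparison is ever evaluated.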

  15. Single-site access robot-assisted epicardial mapping with a snake robot: preparation and first clinical experience.

    PubMed

    Neuzil, Petr; Cerny, Stepan; Kralovec, Stepan; Svanidze, Oleg; Bohuslavek, Jan; Plasil, Petr; Jehlicka, Pavel; Holy, Frantisek; Petru, Jan; Kuenzler, Richard; Sediva, Lucie

    2013-06-01

    CardioARM, a highly flexible "snakelike" medical robotic system (Medrobotics, Raynham, MA), has been developed to allow physicians to view, access, and perform complex procedures intrapericardially on the beating heart through a single-access port. Transthoracic epicardial catheter mapping and ablation has emerged as a strategy to treat arrhythmias, particularly ventricular arrhythmias, originating from the epicardial surface. The aim of our investigation was to determine whether the CardioARM could be used to diagnose and treat ventricular tachycardia (VT) of epicardial origin. Animal and clinical studies of the CardioARM flexible robot were performed in hybrid surgical-electrophysiology settings. In a porcine model study, single-port pericardial access, navigation, mapping, and ablation were performed in nine animals. The device was then used in a small, single-center feasibility clinical study. Three patients, all with drug-refractory VT and multiple failed endocardial ablation attempts, underwent epicardial mapping with the flexible robot. In all nine animals, navigation, mapping, and ablation were successful without hemodynamic compromise. In the human study, all three patients demonstrated a favorable safety profile, with no major adverse events through a 30-day follow-up. Two cases achieved technical success, in which an electroanatomic map of the epicardial ventricle surface was created; in the third case, blood obscured visualization. These results, although based on a limited number of experimental animals and patients, show promise and suggest that further clinical investigation on the use of the flexible robot in patients requiring epicardial mapping of VT is warranted.

  16. Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits

    NASA Astrophysics Data System (ADS)

    Snider, Ross K.; Arathorn, David W.

    2006-05-01

    A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.

  17. Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.

    PubMed

    Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico

    2014-01-01

    Needle insertion in soft tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool to achieve high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. Therefore, it is necessary to focus also on techniques enabling clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes the needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out an experiment of teleoperated needle insertion into a soft-tissue phantom, considering four different experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing the ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points, and of 1.10 mm without set-points.

  18. Coordinating teams of autonomous vehicles: an architectural perspective

    NASA Astrophysics Data System (ADS)

    Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo

    2005-05-01

    In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).

  19. Vision based object pose estimation for mobile robots

    NASA Technical Reports Server (NTRS)

    Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry

    1994-01-01

    Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The selection of geometric constraints comes from the typical pose of most man-made signs, such as the sign standing vertical and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments, and under certain marker orientations, estimation of the orientation has proven accurate to within 2 degrees, and distance estimation to within 0.3 meters.
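
    The known-size geometric constraint amounts to pinhole similar triangles: the marker's apparent width in pixels gives its range, and its image-column offset gives its bearing. The sketch below illustrates that relation; the function name and camera parameters are assumptions, not the paper's calibration.

```python
import numpy as np

def marker_range_bearing(f_px, cx, marker_width_m, px_left, px_right):
    """Range and bearing to a vertical marker of known width.

    f_px: camera focal length in pixels; cx: principal-point column.
    px_left / px_right: detected image columns of the marker's edges.
    """
    width_px = abs(px_right - px_left)
    distance = f_px * marker_width_m / width_px          # pinhole similar triangles
    center_px = 0.5 * (px_left + px_right)
    bearing = np.arctan2(center_px - cx, f_px)           # radians, positive to the right
    return distance, bearing

# f = 600 px, image center at column 320, a 0.30 m wide sign
# imaged from columns 380 to 440 (60 px wide, centered 90 px right of cx).
d, b = marker_range_bearing(600.0, 320.0, 0.30, 380.0, 440.0)
print(round(d, 2), round(np.degrees(b), 1))  # -> 3.0 8.5
```

    The reported accuracies (2 degrees in orientation, 0.3 m in distance) bound how well such a relation can be exploited once the marker edges are localized.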

  20. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-06-28

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.
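
    The survey/resurvey/alarm behavior described in the patent can be illustrated as a small state machine: full-speed survey, slow resurvey on a detection, resume if not confirmed, stop and alarm if confirmed. This is a toy sketch of the described logic with illustrative thresholds, not the patent's implementation.

```python
def survey_controller(readings, threshold=2, confirm_threshold=2):
    """Toy state machine mirroring the described survey logic.

    readings: iterable of radiation counts along the preprogrammed path.
    Yields (state, reading) pairs and stops once an alarm is raised.
    """
    state = "SURVEY"                     # preprogrammed path, full speed
    for r in readings:
        if state == "SURVEY":
            if r >= threshold:
                state = "RESURVEY"       # revisit the area at reduced speed
        elif state == "RESURVEY":
            if r >= confirm_threshold:
                state = "ALARM"          # contamination confirmed: stop, sound alarm
            else:
                state = "SURVEY"         # not confirmed: resume preprogrammed path
        yield state, r
        if state == "ALARM":
            break

trace = list(survey_controller([0, 1, 3, 0, 1, 5, 4]))
print([s for s, _ in trace])
```

    In the trace above, the first detection is not confirmed on the slow pass (the robot resumes its path), while the second one is, stopping the robot.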

  2. Telecommunications, navigation and information management concept overview for the Space Exploration Initiative program

    NASA Technical Reports Server (NTRS)

    Bell, Jerome A.; Stephens, Elaine; Barton, Gregg

    1991-01-01

    An overview is provided of the Space Exploration Initiative (SEI) concepts for telecommunications, information systems, and navigation (TISN), and engineering and architecture issues are discussed. The SEI program data system is reviewed to identify mission TISN interfaces, and reference TISN concepts are described for nominal, degraded, and mission-critical data services. The infrastructures reviewed include telecommunications for robotics support, autonomous navigation without earth-based support, and information networks for tracking and data acquisition. Four options for TISN support architectures are examined which relate to unique SEI exploration strategies. Detailed support estimates are given for: (1) a manned stay on Mars; (2) permanent lunar and Martian settlements; (3) short-duration missions; and (4) systematic exploration of the moon and Mars.

  3. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study, on video overlays, investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and their combination (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provided operators with both explicit and implicit cues, allowing them to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick, and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects that symbology type (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. 
The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operator's workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. This HRI research contributes to the closure of HRP gaps by providing information on how display and control characteristics - those related to guidance, feedback, and command modalities - affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines for designing effective human-robot interfaces.

  4. Selected topics in robotics for space exploration

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C. (Editor); Kaufman, Howard (Editor)

    1993-01-01

    Papers and abstracts included represent both formal presentations and experimental demonstrations at the Workshop on Selected Topics in Robotics for Space Exploration which took place at NASA Langley Research Center, 17-18 March 1993. The workshop was cosponsored by the Guidance, Navigation, and Control Technical Committee of the NASA Langley Research Center and the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) at RPI, Troy, NY. Participation was from industry, government, and other universities with close ties to either Langley Research Center or to CIRSSE. The presentations were very broad in scope with attention given to space assembly, space exploration, flexible structure control, and telerobotics.

  5. Experiments with a small behaviour controlled planetary rover

    NASA Technical Reports Server (NTRS)

    Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John

    1993-01-01

    A series of experiments that were performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly less computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an 8-bit processor and uses about ten kilobytes of memory.

  6. Flexible robotic catheters in the visceral segment of the aorta: advantages and limitations.

    PubMed

    Li, Mimi M; Hamady, Mohamad S; Bicknell, Colin D; Riga, Celia V

    2018-06-01

    Flexible robotic catheters are an emerging technology which provide an elegant solution to the challenges of conventional endovascular intervention. Originally developed for interventional cardiology and electrophysiology procedures, remotely steerable robotic catheters such as the Magellan system enable greater precision and enhanced stability during target vessel navigation. These technical advantages facilitate improved treatment of disease in the arterial tree, as well as allowing execution of otherwise unfeasible procedures. Occupational radiation exposure is an emerging concern with the use of increasingly complex endovascular interventions. The robotic systems offer an added benefit of radiation reduction, as the operator is seated away from the radiation source during manipulation of the catheter. Pre-clinical studies have demonstrated reduction in force and frequency of vessel wall contact, resulting in reduced tissue trauma, as well as improved procedural times. Both safety and feasibility have been demonstrated in early clinical reports, with the first robot-assisted fenestrated endovascular aortic repair in 2013. Following from this, the Magellan system has been used to successfully undertake a variety of complex aortic procedures, including fenestrated/branched endovascular aortic repair, embolization, and angioplasty.

  7. Novel Selective Detection Method of Tumor Angiogenesis Factors Using Living Nano-Robots.

    PubMed

    Al-Fandi, Mohamed; Alshraiedeh, Nida; Owies, Rami; Alshdaifat, Hala; Al-Mahaseneh, Omamah; Al-Tall, Khadijah; Alawneh, Rawan

    2017-07-14

    This paper reports a novel self-detection method for tumor cells using living nano-robots. These living robots are a nonpathogenic strain of E. coli bacteria equipped with naturally synthesized bio-nano-sensory systems that have an affinity to VEGF, an angiogenic factor overly-expressed by cancer cells. The VEGF-affinity/chemotaxis was assessed using several assays including the capillary chemotaxis assay, chemotaxis assay on soft agar, and chemotaxis assay on solid agar. In addition, a microfluidic device was developed to possibly discover tumor cells through the overexpressed vascular endothelial growth factor (VEGF). Various experiments to study the sensing characteristic of the nano-robots presented a strong response toward the VEGF. Thus, a new paradigm of selective targeting therapies for cancer can be advanced using swimming E. coli as self-navigator miniaturized robots as well as drug-delivery vehicles.

  8. Robotics and neurosurgery.

    PubMed

    Nathoo, Narendra; Pesek, Todd; Barnett, Gene H

    2003-12-01

    Ultimately, neurosurgery performed via a robotic interface will serve to improve the standard of a neurosurgeon's skills, thus making a good surgeon a better surgeon. In fact, computer and robotic instrumentation will become allies to the neurosurgeon through the use of these technologies in training, diagnostic, and surgical events. Nonetheless, these technologies are still in an early stage of development, and each device developed will entail its own set of challenges and limitations for use in clinical settings. The future operating room should be regarded as an integrated information system incorporating robotic surgical navigators and telecontrolled micromanipulators, with the capabilities of all principal neurosurgical concepts, sharing information, and under the control of a single person, the neurosurgeon. The eventual integration of robotic technology into mainstream clinical neurosurgery offers the promise of a future of safer, more accurate, and less invasive surgery that will result in improved patient outcome.

  9. Event detection and localization for small mobile robots using reservoir computing.

    PubMed

    Antonelo, E A; Schrauwen, B; Stroobandt, D

    2008-08-01

    Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system, operating at the edge of stability, in which only a linear static readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks based solely on a small amount of low-range, high-noise sensory data. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated both in a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
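The RC scheme the abstract describes (a fixed random recurrent reservoir with only a linear readout trained by linear regression) can be sketched in a few lines of NumPy. This is a generic echo-state-network sketch under illustrative sizes and a stand-in task, not the paper's actual network or sensor data.

```python
import numpy as np

# Generic sketch of the RC idea described above: a fixed random recurrent
# reservoir; only the linear readout is trained, by ridge regression.
# Sizes, spectral radius, and the stand-in "event" task are illustrative.

rng = np.random.default_rng(0)
n_in, n_res = 8, 100                     # e.g. 8 distance sensors (assumed)

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
# Rescale so the spectral radius is below 1: "edge of stability"
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)    # fixed, untrained recurrent update
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Linear static readout fit by regularized linear regression."""
    A = states.T @ states + ridge * np.eye(n_res)
    return np.linalg.solve(A, states.T @ targets)

# Usage with stand-in data (a real run would use recorded sensor streams)
U = rng.normal(size=(200, n_in))          # stand-in sensor stream
y = (U.sum(axis=1) > 0).astype(float)     # stand-in "event" label
X = run_reservoir(U)
w_out = train_readout(X, y)
pred = X @ w_out                          # readout output per time step
```

Only `w_out` is learned; the reservoir weights stay fixed, which is what keeps the training cost to a single linear regression.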

  10. Research into Kinect/Inertial Measurement Units Based on Indoor Robots.

    PubMed

    Li, Huixia; Wen, Xi; Guo, Hang; Yu, Min

    2018-03-12

    As indoor mobile navigation suffers from low positioning accuracy and accumulation error, we carried out research into an integrated location system for a robot based on Kinect and an Inertial Measurement Unit (IMU). In this paper, the close-range stereo images are used to calculate the attitude information and the translation amount of the adjacent positions of the robot by means of the absolute orientation algorithm, for improving the calculation accuracy of the robot's movement. Relying on the Kinect visual measurement and the strap-down IMU devices, we also use Kalman filtering to obtain the errors of the position and attitude outputs, in order to seek the optimal estimation and correct the errors. Experimental results show that the proposed method is able to improve the positioning accuracy and stability of the indoor mobile robot.
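The fusion idea in this record, where IMU dead reckoning accumulates error and periodic visual (Kinect) fixes correct it through a Kalman filter, can be illustrated with a minimal linear filter. This is a textbook 1-D position/velocity sketch under assumed noise values, not the paper's filter or error model.

```python
import numpy as np

# Illustrative linear Kalman filter for the fusion idea described above:
# a constant-velocity model (stand-in for IMU dead reckoning) corrected
# by position fixes (stand-in for Kinect measurements).
# All matrices and noise values are assumptions for the sketch.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])               # measurement: position only
Q = 0.01 * np.eye(2)                     # process noise (models IMU drift)
R = np.array([[0.05]])                   # measurement noise (visual fix)

def predict(x, P):
    """Propagate state and covariance one step (dead reckoning)."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a position measurement z."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Usage with stand-in position fixes
x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.22, 0.31]:
    x, P = predict(x, P)
    x, P = update(x, P, np.array([z]))
```

Each update shrinks the position covariance `P[0, 0]`, which is the sense in which the visual fixes bound the dead-reckoning drift.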

  11. Wireless Cortical Brain-Machine Interface for Whole-Body Navigation in Primates

    NASA Astrophysics Data System (ADS)

    Rajangam, Sankaranarayani; Tseng, Po-He; Yin, Allen; Lehew, Gary; Schwarz, David; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.

    2016-03-01

    Several groups have developed brain-machine-interfaces (BMIs) that allow primates to use cortical activity to control artificial limbs. Yet, it remains unknown whether cortical ensembles could represent the kinematics of whole-body navigation and be used to operate a BMI that moves a wheelchair continuously in space. Here we show that rhesus monkeys can learn to navigate a robotic wheelchair, using their cortical activity as the main control signal. Two monkeys were chronically implanted with multichannel microelectrode arrays that allowed wireless recordings from ensembles of premotor and sensorimotor cortical neurons. Initially, while monkeys remained seated in the robotic wheelchair, passive navigation was employed to train a linear decoder to extract 2D wheelchair kinematics from cortical activity. Next, monkeys employed the wireless BMI to translate their cortical activity into the robotic wheelchair’s translational and rotational velocities. Over time, monkeys improved their ability to navigate the wheelchair toward the location of a grape reward. The navigation was enacted by populations of cortical neurons tuned to whole-body displacement. During practice with the apparatus, we also noticed the presence of a cortical representation of the distance to reward location. These results demonstrate that intracranial BMIs could restore whole-body mobility to severely paralyzed patients in the future.

  12. Proposal for continued research in intelligent machines at the Center for Engineering Systems Advanced Research (CESAR) for FY 1988 to FY 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.

    1987-03-01

    This document reviews research accomplishments achieved by the staff of the Center for Engineering Systems Advanced Research (CESAR) during the fiscal years 1984 through 1987. The manuscript also describes future CESAR objectives for the 1988-1991 planning horizon, and beyond. As much as possible, the basic research goals are derived from perceived Department of Energy (DOE) needs for increased safety, productivity, and competitiveness in the United States energy producing and consuming facilities. Research areas covered include the HERMIES-II Robot, autonomous robot navigation, hypercube computers, machine vision, and manipulators.

  13. Robotics and Virtual Reality for Cultural Heritage Digitization and Fruition

    NASA Astrophysics Data System (ADS)

    Calisi, D.; Cottefoglie, F.; D'Agostini, L.; Giannone, F.; Nenci, F.; Salonia, P.; Zaratti, M.; Ziparo, V. A.

    2017-05-01

    In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we demonstrate two technologies: our robotic system for the digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists, and roboticists; and our cloud-based archaeological information system (ARIS). Finally, we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for Virtual Reality (ROVINA VR Viewer).

  14. Underwater Multi-Vehicle Trajectory Alignment and Mapping Using Acoustic and Optical Constraints

    PubMed Central

    Campos, Ricard; Gracias, Nuno; Ridao, Pere

    2016-01-01

    Multi-robot formations are an important advance in recent robotic developments, as they allow a group of robots to merge their capacities and perform surveys in a more convenient way. With the aim of keeping the costs and acoustic communications to a minimum, cooperative navigation of multiple underwater vehicles is usually performed at the control level. In order to maintain the desired formation, individual robots just react to simple control directives extracted from range measurements or ultra-short baseline (USBL) systems. Thus, the robots are unaware of their global positioning, which presents a problem for the further processing of the collected data. The aim of this paper is two-fold. First, we present a global alignment method to correct the dead reckoning trajectories of multiple vehicles to resemble the paths followed during the mission using the acoustic messages passed between vehicles. Second, we focus on the optical mapping application of these types of formations and extend the optimization framework to allow for multi-vehicle geo-referenced optical 3D mapping using monocular cameras. The inclusion of optical constraints is not performed using the common bundle adjustment techniques, but in a form improving the computational efficiency of the resulting optimization problem and presenting a generic process to fuse optical reconstructions with navigation data. We show the performance of the proposed method on real datasets collected within the Morph EU-FP7 project. PMID:26999144

  15. Vision-based mapping with cooperative robots

    NASA Astrophysics Data System (ADS)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), and centralized versus decentralized maps.
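An occupancy grid of the kind this record describes is commonly maintained as per-cell log-odds, so that repeated views fuse naturally and the map stays conservative for path planning. The sketch below shows that standard update; the grid size and sensor-model increments are illustrative, not from the paper.

```python
import numpy as np

# Standard log-odds occupancy-grid update, as a sketch of the map
# representation described above. Repeated observations of the same
# cell accumulate, fusing multiple (possibly repeated) views.
# Grid size and sensor-model values are illustrative assumptions.

L_OCC, L_FREE = 0.9, -0.7     # log-odds increments for hit / pass-through
L_MIN, L_MAX = -4.0, 4.0      # clamp so cells remain revisable (conservative)

grid = np.zeros((50, 50))     # log-odds 0 == unknown (probability 0.5)

def update_cell(grid, i, j, hit):
    """Fuse one observation of cell (i, j) into the grid."""
    delta = L_OCC if hit else L_FREE
    grid[i, j] = np.clip(grid[i, j] + delta, L_MIN, L_MAX)

def occupancy_prob(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-grid))

# Usage: three hits on one cell, one free observation on another
for _ in range(3):
    update_cell(grid, 10, 10, hit=True)
update_cell(grid, 20, 20, hit=False)
p = occupancy_prob(grid)
```

Because the update is additive in log-odds, two robots can post their observations to a shared grid in any order and obtain the same fused map.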

  16. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and their sensing modalities that will allow the perception and interpretation of the environment. Then, based on reflectance and emittance theoretical models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.

  17. Development Of Autonomous Systems

    NASA Astrophysics Data System (ADS)

    Kanade, Takeo

    1989-03-01

    In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: Navlab for the Autonomous Land Vehicle and Ambler for the Mars Rover. These two systems are for different purposes: the Navlab is a four-wheeled vehicle (van) for road and open-terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.

  18. An overview on real-time control schemes for wheeled mobile robot

    NASA Astrophysics Data System (ADS)

    Radzak, M. S. A.; Ali, M. A. H.; Sha’amri, S.; Azwan, A. R.

    2018-04-01

    The purpose of this paper is to review real-time motion control algorithms for wheeled mobile robots (WMRs) navigating in environments such as roads. A good controller is needed to avoid collisions with any disturbance and to keep the tracking error at zero. The controllers are used together with aiding sensors that measure the WMR's velocities, posture, and disturbances in order to estimate the torque required at the wheels of the mobile robot. Four main categories of wheeled mobile robot control systems are found in the literature: kinematic-based controllers, dynamic-based controllers, artificial-intelligence-based control systems, and active force control. MATLAB/Simulink is the main software used to simulate and implement these control systems; the real-time toolbox in MATLAB/Simulink is used to receive data from sensors and send data to actuators in the presence of disturbances, while other software such as C, C++, and Visual Basic is rarely used.
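The "kinematic-based controller" category the review mentions is commonly illustrated with a unicycle-model point-to-goal law: forward velocity proportional to the distance to the goal, turn rate proportional to the heading error. The sketch below is that textbook law, not any specific controller from the paper; the gains are illustrative.

```python
import math

# Textbook unicycle-model kinematic controller, as a sketch of the
# "kinematic based controller" category mentioned above.
# Gains and function name are illustrative assumptions.

K_RHO, K_ALPHA = 0.5, 1.5   # hypothetical proportional gains

def kinematic_control(x, y, theta, gx, gy):
    """Return (v, omega) driving pose (x, y, theta) toward goal (gx, gy)."""
    rho = math.hypot(gx - x, gy - y)              # distance to goal
    alpha = math.atan2(gy - y, gx - x) - theta    # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    v = K_RHO * rho             # forward velocity ~ remaining distance
    omega = K_ALPHA * alpha     # turn rate ~ heading error
    return v, omega
```

A dynamic-based controller would additionally model wheel torques and inertia; this kinematic law only commands velocities, which is why it is usually paired with a low-level velocity loop on the real robot.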

  19. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    PubMed Central

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  20. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering.

    PubMed

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-05-23

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.

  1. Free-Flight Terrestrial Rocket Lander Demonstration for NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) System

    NASA Technical Reports Server (NTRS)

    Rutishauser, David K.; Epp, Chirold; Robertson, Ed

    2012-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) Project is chartered to develop, and mature to a Technology Readiness Level (TRL) of six, an autonomous system combining guidance, navigation, and control with terrain sensing and recognition functions for crewed, cargo, and robotic planetary landing vehicles. The ALHAT System must be capable of identifying and avoiding surface hazards to enable a safe and accurate landing to within tens of meters of designated and certified landing sites anywhere on a planetary surface under any lighting conditions. Since its inception in 2006, the ALHAT Project has executed four field test campaigns to characterize and mature sensors and algorithms that support real-time hazard detection and global/local precision navigation for planetary landings. The driving objective for Government Fiscal Year 2012 (GFY2012) is to successfully demonstrate autonomous, real-time, closed-loop operation of the ALHAT system in a realistic free-flight scenario on Earth using the Morpheus lander developed at the Johnson Space Center (JSC). This goal represents an aggressive target consistent with a lean engineering culture of rapid prototyping and development. This culture is characterized by prioritizing early implementation to gain practical lessons learned and then building on this knowledge with subsequent prototyping design cycles of increasing complexity, culminating in the implementation of the baseline design. This paper provides an overview of the ALHAT/Morpheus flight demonstration activities in GFY2012, including accomplishments, current status, results, and lessons learned. The ALHAT/Morpheus effort is also described in the context of a technology path in support of future crewed and robotic planetary exploration missions based upon the core sensing functions of the ALHAT system: Terrain Relative Navigation (TRN), Hazard Detection and Avoidance (HDA), and Hazard Relative Navigation (HRN).

  2. The Evolution of Deep Space Navigation: 1989-1999

    NASA Technical Reports Server (NTRS)

    Wood, Lincoln J.

    2008-01-01

    The exploration of the planets of the solar system using robotic vehicles has been underway since the early 1960s. During this time the navigational capabilities employed have increased greatly in accuracy, as required by the scientific objectives of the missions and as enabled by improvements in technology. This paper is the second in a chronological sequence dealing with the evolution of deep space navigation. The time interval covered extends from the 1989 launch of the Magellan spacecraft to Venus through a multiplicity of planetary exploration activities in 1999. The paper focuses on the observational techniques that have been used to obtain navigational information, propellant-efficient means for modifying spacecraft trajectories, and the computational methods that have been employed, tracing their evolution through a dozen planetary missions.

  3. Embedded Relative Navigation Sensor Fusion Algorithms for Autonomous Rendezvous and Docking Missions

    NASA Technical Reports Server (NTRS)

    DeKock, Brandon K.; Betts, Kevin M.; McDuffie, James H.; Dreas, Christine B.

    2008-01-01

    bd Systems (a subsidiary of SAIC) has developed a suite of embedded relative navigation sensor fusion algorithms to enable NASA autonomous rendezvous and docking (AR&D) missions. Translational and rotational Extended Kalman Filters (EKFs) were developed to integrate measurements based on the vehicles' orbital mechanics and high-fidelity sensor error models, providing a solution with increased accuracy and robustness relative to any single relative navigation sensor. The filters were tested through stand-alone covariance analysis, closed-loop testing with a high-fidelity multi-body orbital simulation, and hardware-in-the-loop (HWIL) testing in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL).

  4. Progress in Insect-Inspired Optical Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Chahl, Javaan; Zometzer, Steve

    2005-01-01

    Progress has been made in continuing efforts to develop optical flight-control and navigation sensors for miniature robotic aircraft. The designs of these sensors are inspired by the designs and functions of the vision systems and brains of insects. Two types of sensors of particular interest are polarization compasses and ocellar horizon sensors. The basic principle of polarization compasses was described (but without using the term "polarization compass") in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate: Bees use sky polarization patterns in ultraviolet (UV) light, caused by Rayleigh scattering of sunlight by atmospheric gas molecules, as direction references relative to the apparent position of the Sun. A robotic direction-finding technique based on this concept would be more robust in comparison with a technique based on the direction to the visible Sun because the UV polarization pattern is distributed across the entire sky and, hence, is redundant and can be extrapolated from a small region of clear sky in an elsewhere cloudy sky that hides the Sun.

  5. Research in free-flying robots and flexible manipulators at the Stanford Aerospace Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.; Wilson, E.

    1993-01-01

    Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modeling and control of extremely flexible space structures.

  6. Automation and robotics technology for intelligent mining systems

    NASA Technical Reports Server (NTRS)

    Welsh, Jeffrey H.

    1989-01-01

    The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.

  7. Guidance and control 1991; Proceedings of the Annual Rocky Mountain Guidance and Control Conference, Keystone, CO, Feb. 2-6, 1991

    NASA Astrophysics Data System (ADS)

    Culp, Robert D.; McQuerry, James P.

    1991-07-01

    The present conference on guidance and control encompasses advances in guidance, navigation, and control, storyboard displays, approaches to space-borne pointing control, international space programs, recent experiences with systems, and issues regarding navigation in the low-earth-orbit space environment. Specific issues addressed include a scalable architecture for an operational spaceborne autonavigation system, the mitigation of multipath error in GPS-based attitude determination, microgravity flight testing of a laboratory robot, and the application of neural networks. Other issues addressed include image navigation with second-generation Meteosat, Magellan star-scanner experiences, high-precision control systems for telescopes and interferometers, gravitational effects on low-earth orbiters, experimental verification of nanometer-level optical pathlengths, and a flight telerobotic servicer prototype simulator. (For individual items see A93-15577 to A93-15613)

  8. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver, left, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  9. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver, right, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  10. Towards Commanding Unmanned Ground Vehicle Movement in Unfamiliar Environments Using Unconstrained English: Initial Research Results

    DTIC Science & Technology

    2007-06-01

    constrained list of command words could be valuable in many systems, as would the ability of driverless vehicles to navigate through a route...Sensemaking in UGVs • Future Combat Systems UGV roles – Driverless trucks – Robotic mules (soldier, squad aid) – Intelligent munitions – And more! • Some

  11. A Case Study on a Capsule Robot in the Gastrointestinal Tract to Teach Robot Programming and Navigation

    ERIC Educational Resources Information Center

    Guo, Yi; Zhang, Shubo; Ritter, Arthur; Man, Hong

    2014-01-01

    Despite the increasing importance of robotics, there is a significant challenge involved in teaching this to undergraduate students in biomedical engineering (BME) and other related disciplines in which robotics techniques could be readily applied. This paper addresses this challenge through the development and pilot testing of a bio-microrobotics…

  12. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground, where a ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  13. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

    Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.

  14. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    PubMed

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
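    The entropy criterion described above lends itself to a compact sketch. The following illustrative Python fragment (not the authors' implementation; the threshold value is an arbitrary assumption) computes the Shannon entropy of an image's intensity histogram and uses a low/high split to label a region as a landmark candidate or probable clutter:

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def classify_region(gray, threshold=4.0):
    """Low entropy suggests one dominant object (candidate landmark); high
    entropy suggests several objects (treat as obstacle). The threshold is
    illustrative, not taken from the paper."""
    return "landmark-candidate" if image_entropy(gray) < threshold else "obstacle"
```

A uniform patch scores zero entropy, while a cluttered, texture-rich patch approaches 8 bits, which is the intuition the paper exploits for both landmark detection and obstacle avoidance.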

  15. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    PubMed Central

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P.

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394

  16. Usability testing of a mobile robotic system for in-home telerehabilitation.

    PubMed

    Boissy, Patrick; Brière, Simon; Corriveau, Hélène; Grant, Andrew; Lauria, Michel; Michaud, François

    2011-01-01

    Mobile robots designed to enhance telepresence in support of telehealth services are being considered for numerous applications. TELEROBOT is a teleoperated mobile robotic platform equipped with videoconferencing capabilities and designed to be used in a home environment. In this study, the learnability of the system's teleoperation interface and controls was evaluated with ten rehabilitation professionals during four training sessions in a laboratory environment and in an unknown home environment while executing a standardized evaluation protocol typically used in home care. Results show that the novice teleoperators' performance on two of the four metrics used (number of commands and total time) improved significantly across training sessions (ANOVAs, p<0.05) and that performance on these metrics in the last training session reflected the teleoperation abilities seen in the unknown home environment during navigation tasks (r=0.77 and 0.60). With only 4 hours of training, rehabilitation professionals were able to learn to teleoperate TELEROBOT successfully. However, teleoperation performance remained significantly less efficient than that of an expert. Under the home task condition (navigating the home environment from one point to another as fast as possible), this translated to completion times between 350 seconds (best performance) and 850 seconds (worst performance). Improvements in other usability aspects of the system will be needed to meet the requirements of in-home telerehabilitation.

  17. Small Body Exploration Technologies as Precursors for Interstellar Robotics

    NASA Astrophysics Data System (ADS)

    Noble, R. J.; Sykes, M. V.

    The scientific activities undertaken to explore our Solar System will be very similar to those required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution, as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.

  18. Small Body Exploration Technologies as Precursors for Interstellar Robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Robert (SLAC); Sykes, Mark V.

    The scientific activities undertaken to explore our Solar System will be the same as required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.

  19. Constrained navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Vasseur, Laurent; Gosset, Philippe; Carpentier, Luc; Marion, Vincent; Morillon, Joel G.; Ropars, Patrice

    2005-05-01

    The French Military Robotic Study Program (introduced at Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales as the prime contractor, focuses on about 15 robotic themes that can provide an immediate "operational add-on value". The paper details the "constrained navigation" study (named TEL2), whose main goal is to identify and test a well-balanced task sharing between man and machine to accomplish a robotic task that cannot yet be performed autonomously because of technological limitations. The chosen function is "obstacle avoidance" on rough ground at fairly high speed (40 km/h). State-of-the-art algorithms have been implemented to perform autonomous obstacle avoidance and following of forest borders, using a scanning laser sensor and standard localization functions. Such an "obstacle avoidance" function works well most of the time but sometimes fails. The study analyzed how the remote operator can manage such failures so that the system remains fully operationally reliable; the operator can act in two ways: a) finely adjust the vehicle's current heading; b) take control of the vehicle "on the fly" (without stopping) and return it to autonomous behavior when motion is secured again. The paper also presents the results obtained from the military acceptance tests performed on the French 4x4 DARDS ATD.
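    The two operator interventions described, fine heading adjustment and on-the-fly takeover, amount to a simple command arbiter between the autonomous planner and the human. The sketch below is illustrative only; the mode names and command fields are invented for the example, not taken from TEL2:

```python
def command_arbiter(auto_cmd, operator):
    """Blend operator input with the autonomous command.
    `operator` is None (fully autonomous), ("trim", delta) to nudge the
    autonomous heading, or ("takeover", cmd) to replace the command entirely."""
    if operator is None:
        return auto_cmd
    mode, value = operator
    if mode == "trim":
        return {"speed": auto_cmd["speed"], "heading": auto_cmd["heading"] + value}
    if mode == "takeover":
        return value  # full manual command dict, applied without stopping
    return auto_cmd  # unknown mode: fall back to autonomy
```

The point of such an arbiter is that autonomy resumes seamlessly when the operator input drops back to `None`, matching the "bring it back to autonomous behavior" requirement.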

  20. Primate-inspired vehicle navigation using optic flow and mental rotations

    NASA Astrophysics Data System (ADS)

    Arkin, Ronald C.; Dellaert, Frank; Srinivasan, Natesh; Kerwin, Ryan

    2013-05-01

    Robot navigation already has many relatively efficient solutions: reactive control, simultaneous localization and mapping (SLAM), Rapidly-Exploring Random Trees (RRTs), etc. But many primates possess an additional inherent spatial reasoning capability: mental rotation. Our research addresses the question of what role, if any, mental rotations can play in enhancing existing robot navigational capabilities. To answer this question we explore the use of optical flow as a basis for extracting abstract representations of the world, comparing these representations with a goal state of similar format and then iteratively providing a control signal to a robot to allow it to move in a direction consistent with achieving that goal state. We study a range of transformation methods to implement the mental rotation component of the architecture, including correlation and matching based on cognitive studies. We also include a discussion of how mental rotations may play a key role in understanding spatial advice giving, particularly from other members of the species, whether in map-based format, gestures, or other means of communication. Results to date are presented on our robotic platform.
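    As a toy illustration of the correlation-based matching idea, the sketch below (hypothetical; limited to 90-degree rotations, whereas the paper's representations are derived from optic flow) selects the rotation of a current abstract view that best correlates with a goal view:

```python
import numpy as np

def best_rotation(current, goal):
    """Return the angle (0, 90, 180, or 270 degrees) whose rotation of
    `current` best matches `goal` under a raw correlation score."""
    scores = []
    for k in range(4):
        rotated = np.rot90(current, k)  # k quarter-turns counter-clockwise
        scores.append(float((rotated * goal).sum()))
    return int(np.argmax(scores)) * 90
```

A control loop could then steer toward reducing the winning angle to zero, which is the iterative goal-alignment behavior the architecture describes.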

  1. Novel Selective Detection Method of Tumor Angiogenesis Factors Using Living Nano-Robots

    PubMed Central

    Alshraiedeh, Nida; Owies, Rami; Alshdaifat, Hala; Al-Mahaseneh, Omamah; Al-Tall, Khadijah; Alawneh, Rawan

    2017-01-01

    This paper reports a novel self-detection method for tumor cells using living nano-robots. These living robots are a nonpathogenic strain of E. coli bacteria equipped with naturally synthesized bio-nano-sensory systems that have an affinity to VEGF, an angiogenic factor overexpressed by cancer cells. The VEGF affinity/chemotaxis was assessed using several assays, including the capillary chemotaxis assay, the chemotaxis assay on soft agar, and the chemotaxis assay on solid agar. In addition, a microfluidic device was developed to potentially detect tumor cells through the overexpressed vascular endothelial growth factor (VEGF). Various experiments studying the sensing characteristics of the nano-robots showed a strong response toward VEGF. Thus, a new paradigm of selective targeting therapies for cancer can be advanced using swimming E. coli as self-navigating miniaturized robots as well as drug-delivery vehicles. PMID:28708066

  2. Wheelchair-mounted robotic arm to hold and move a communication device - final design.

    PubMed

    Barrett, Graham; Kurley, Kyle; Brauchie, Casey; Morton, Scott; Barrett, Steven

    2015-01-01

    At the 51st Rocky Mountain Bioengineering Symposium we presented a preliminary design for a robotic arm to assist an individual living within an assistive technology smart home. The individual controls much of the environment with a Dynavox Maestro communication device. However, the device obstructs the individual's line of sight when navigating about the smart home. A robotic arm was developed to move the communication device in and out of the user's field of view as desired. The robotic arm is controlled by a conveniently mounted jelly switch. The jelly switch sends control signals to a four-state (up, off, down, off) single-axis robotic arm interfaced to a DC motor by high-power electronic relays. This paper describes the system, control circuitry, and multiple safety features. The arm will be delivered for use later in 2015.
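    The four-state control cycle can be sketched as a small state machine. This is a hypothetical model, not the delivered firmware; in particular, treating each jelly-switch actuation as one step through the (up, off, down, off) sequence is an assumption made for illustration:

```python
# Assumed state cycle: up -> off -> down -> off -> up ... on each switch press.
SEQUENCE = ["up", "off-after-up", "down", "off-after-down"]

class ArmController:
    """Single-axis arm with a four-state (up, off, down, off) cycle; the relay
    outputs are modeled as a signed motor command (+1 raise, -1 lower, 0 hold)."""

    def __init__(self):
        self.index = 3  # start parked in an 'off' state

    def press(self):
        """Advance one state on a jelly-switch actuation; return the new state."""
        self.index = (self.index + 1) % 4
        return SEQUENCE[self.index]

    def motor_command(self):
        state = SEQUENCE[self.index]
        return {"up": +1, "down": -1}.get(state, 0)
```

Interleaving "off" between each motion state means a single repeated input alternately moves and parks the arm, a simple safety property for a one-switch interface.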

  3. Improving robot arm control for safe and robust haptic cooperation in orthopaedic procedures.

    PubMed

    Cruces, R A Castillo; Wahrburg, J

    2007-12-01

    This paper presents the ongoing results of an effort to achieve the integration of a navigated cooperative robotic arm into computer-assisted orthopaedic surgery. A seamless integration requires the system acting in direct cooperation with the surgeon instead of replacing him. Two technical issues are discussed to improve the haptic operating modes for interactive robot guidance. The concept of virtual fixtures is used to restrict the range of motion of the robot according to pre-operatively defined constraints, and methodologies to assure a robust and accurate motion through singular arm configurations are investigated. A new method for handling singularities is proposed, which is superior to the commonly used damped-least-squares method. It produces no deviations of the end-effector in relation to the virtually constrained path. A solution to assure a good performance of a hands-on robotic arm at singularity configurations is proposed. (c) 2007 John Wiley & Sons, Ltd.

  4. Soldier-Robot Team Communication: An Investigation of Exogenous Orienting Visual Display Cues and Robot Reporting Preferences

    DTIC Science & Technology

    2018-02-12

    usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the...assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference...in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4

  5. Augmented reality and image overlay navigation with OsiriX in laparoscopic and robotic surgery: not only a matter of fashion.

    PubMed

    Volonté, Francesco; Pugin, François; Bucher, Pascal; Sugimoto, Maki; Ratib, Osman; Morel, Philippe

    2011-07-01

    New technologies can considerably improve preoperative planning, enhance the surgeon's skill and simplify the approach to complex procedures. Augmented reality techniques, robot-assisted operations and computer-assisted navigation tools will become increasingly important in surgery and in residents' education. We obtained 3D reconstructions from simple spiral computed tomography (CT) slices using OsiriX, an open-source processing software package dedicated to DICOM images. These images were then projected onto the patient's body with a projector fixed to the operating table to enhance spatial perception during surgical intervention (augmented reality). Changing a window's depth level allowed the surgeon to navigate through the patient's anatomy, highlighting regions of interest and marked pathologies. We used image-overlay navigation for laparoscopic operations such as cholecystectomy, abdominal exploration and distal pancreas resection, and for robotic liver resection. Augmented reality techniques will transform the behaviour of surgeons, making surgical interventions easier, faster and probably safer. These new techniques will also renew methods of surgical teaching, facilitating transmission of knowledge and skill to young surgeons.

  6. Virtual modeling of robot-assisted manipulations in abdominal surgery.

    PubMed

    Berelavichus, Stanislav V; Karmazanovsky, Grigory G; Shirokov, Vadim S; Kubyshkin, Valeriy A; Kriger, Andrey G; Kondratyev, Evgeny V; Zakharova, Olga P

    2012-06-27

    To determine the effectiveness of using multidetector computed tomography (MDCT) data in preoperative planning of robot-assisted surgery, fourteen patients indicated for surgery underwent MDCT on 64- and 256-slice scanners. Before the examination, a specially constructed navigation net was placed on the patient's anterior abdominal wall. Processing of the MDCT data was performed on a Brilliance Workspace 4 (Philips). Virtual vectors imitating the robotic and assistant ports were placed on the anterior abdominal wall of the patient's 3D model, considering the individual anatomy of the patient and the technical capabilities of the robotic arms. Sites for the ports were located by projection onto the roentgen-positive tags of the navigation net. No complications were observed during surgery or in the post-operative period. We were able to reduce robotic arm interference during surgery. The surgical area was optimal for the robotic and assistant manipulators without any need for reinstallation of the trocars. This method allows modeling of the main steps of a robot-assisted intervention, optimizing operation of the manipulator and lowering the risk of injury to internal organs.

  7. Obstacle Avoidance On Roadways Using Range Data

    NASA Astrophysics Data System (ADS)

    Dunlay, R. Terry; Morgenthaler, David G.

    1987-02-01

    This report describes range data based obstacle avoidance techniques developed for use on an autonomous road-following robot vehicle. The purpose of these techniques is to detect and locate obstacles present in a road environment for navigation of a robot vehicle equipped with an active laser-based range sensor. Techniques are presented for obstacle detection, obstacle location, and coordinate transformations needed in the construction of Scene Models (symbolic structures representing the 3-D obstacle boundaries used by the vehicle's Navigator for path planning). These techniques have been successfully tested on an outdoor robotic vehicle, the Autonomous Land Vehicle (ALV), at speeds up to 3.5 km/hour.
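    One common range-based detection idea, flagging beams whose return is much shorter than the flat-ground prediction, can be sketched as follows. This is a simplified illustration, not the ALV's published algorithm; the sensor geometry and tolerance are assumptions:

```python
import math

def ground_range(height, elevation_deg):
    """Expected range to flat ground for a sensor mounted at `height` (m)
    looking down at `elevation_deg` below horizontal."""
    return height / math.sin(math.radians(elevation_deg))

def detect_obstacles(ranges, height, elevations_deg, tol=0.3):
    """Return indices of beams whose measured range is more than `tol`
    (fractionally) shorter than the flat-ground prediction, i.e. something
    is sticking up out of the road surface ahead."""
    hits = []
    for i, (r, el) in enumerate(zip(ranges, elevations_deg)):
        if r < ground_range(height, el) * (1 - tol):
            hits.append(i)
    return hits
```

Detected beam indices would then be grouped and transformed into vehicle coordinates to build the 3-D obstacle boundaries that the Scene Models represent.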

  8. Collision-free motion planning for fiber positioner robots: discretization of velocity profiles

    NASA Astrophysics Data System (ADS)

    Makarem, Laleh; Kneib, Jean-Paul; Gillet, Denis; Bleuler, Hannes; Bouri, Mohamed; Hörler, Philippe; Jenni, Laurent; Prada, Francisco; Sánchez, Justo

    2014-07-01

    The next generation of large-scale spectroscopic survey experiments, such as DESI, will use thousands of fiber positioner robots packed on a focal plate. To maximize the observing time with this robotic system, the fiber-ends of all positioners must be moved in parallel from the previous to the next target coordinates. Direct trajectories are not feasible due to collision risks that could damage the robots and impact the survey operation and performance. We have previously developed a motion planning method based on a novel decentralized navigation function for collision-free coordination of fiber positioners. The navigation function takes into account the configuration of positioners as well as their envelope constraints. The motion planning scheme has linear complexity and short motion duration (2.5 seconds at the positioners' maximum speed of 30 rpm), independent of the number of positioners. These two key advantages of decentralization make the method a promising solution for the collision-free motion-planning problem in the next generation of fiber-fed spectrographs. In a framework where a centralized computer communicates with the positioner robots, communication overhead can be reduced significantly by using velocity profiles encoded in only a few bits. We present here the discretization of velocity profiles to ensure the feasibility of real-time coordination for a large number of positioners. The modified motion planning method, which generates piecewise-linearized position profiles, guarantees collision-free trajectories for all the robots. The velocity profiles fit in a few bits at the expense of higher computational cost.
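    The bit-budget idea can be illustrated with a minimal uniform quantizer. This sketch is not the DESI implementation; the profile values and bit width are assumptions chosen only to show how a per-timestep velocity collapses to a few bits on the wire:

```python
def quantize_profile(profile, v_max, bits=3):
    """Quantize a velocity profile (e.g. rpm) into (2**bits - 1) uniform
    levels so each timestep fits in `bits` bits; returns the transmitted
    codes and the velocities the robot would decode from them."""
    levels = 2**bits - 1
    codes = [round(v / v_max * levels) for v in profile]
    decoded = [c * v_max / levels for c in codes]
    return codes, decoded
```

The quantization error introduced here is why the planner must re-linearize the position profiles: collision-freedom has to be re-verified against the decoded, not the ideal, trajectories.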

  9. Progress in the development of shallow-water mapping systems

    USGS Publications Warehouse

    Bergeron, E.; Worley, C.R.; O'Brien, T.

    2007-01-01

    The USGS (US Geological Survey) Coastal and Marine Geology program has deployed an advanced autonomous shallow-draft robotic vehicle, Iris, for shallow-water mapping in Apalachicola Bay, Florida. The vehicle incorporates a side-scan sonar system, a seismic-reflection profiler, a single-beam echosounder, and global positioning system (GPS) navigation. It is equipped with an onboard microprocessor-based motor controller that delivers speed and steering signals to hull-mounted brushless direct-current thrusters. An onboard motion sensor in the SeaRobotics vehicle control system enclosure has been integrated into the vehicle to measure heave, pitch, roll, and heading. Three watertight enclosures are mounted along the vehicle axis for the Edgetech computer and electronics system, including the SeaRobotics computer, a control and wireless communications system, and a Thales ZXW real-time kinematic (RTK) GPS receiver. The vehicle has produced high-quality seismic-reflection and side-scan sonar data, which will help in developing baseline oyster-habitat maps.

  10. Towards a new modality-independent interface for a robotic wheelchair.

    PubMed

    Bastos-Filho, Teodiano Freire; Cheein, Fernando Auat; Müller, Sandra Mara Torres; Celeste, Wanderley Cardoso; de la Cruz, Celso; Cavalieri, Daniel Cruz; Sarcinelli-Filho, Mário; Amaral, Paulo Faria Santos; Perez, Elisa; Soria, Carlos Miguel; Carelli, Ricardo

    2014-05-01

    This work presents the development of a robotic wheelchair that can be commanded by users in a supervised way or by a fully automatic unsupervised navigation system. It provides the flexibility to choose different modalities to command the wheelchair, in addition to being suitable for people with different levels of disability. Users can command the wheelchair with eye blinks, eye movements, head movements, by sip-and-puff, and through brain signals. The wheelchair can also operate like an auto-guided vehicle, following metallic tapes, or in a fully autonomous way. The system is provided with an easy-to-use and flexible graphical user interface onboard a personal digital assistant, which allows users to choose commands to be sent to the robotic wheelchair. Several experiments were carried out with people with disabilities, and the results validate the developed system as an assistive tool for people with distinct levels of disability.

  11. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool-handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TM) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. The system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human-assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human through a cluttered environment with the tool for future use.

  12. Robust Planning for Autonomous Navigation of Mobile Robots in Unstructured, Dynamic Environments: An LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    EISLER, G. RICHARD

    This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled ''Robust Planning for Autonomous Navigation of Mobile Robots In Unstructured, Dynamic Environments (AutoNav)''. The project goal was to develop an algorithmic-driven, multi-spectral approach to point-to-point navigation characterized by: segmented on-board trajectory planning, self-contained operation without human support for mission duration, and the development of appropriate sensors and algorithms to navigate unattended. The project was partially successful in achieving gains in sensing, path planning, navigation, and guidance. One of three experimental platforms, the Minimalist Autonomous Testbed, used a repetitive sense-and-re-plan combination to demonstrate the majority of elements necessary for autonomous navigation. However, a critical goal for overall success in arbitrary terrain, that of developing a sensor that is able to distinguish true obstacles that need to be avoided as a function of vehicle scale, still needs substantial research to bring to fruition.
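    The repetitive sense-and-re-plan cycle can be sketched generically. The fragment below is illustrative only; `sense` and `plan` are caller-supplied placeholders standing in for the testbed's sensing and planning components, which the report does not specify at this level:

```python
def sense_and_replan(start, goal, sense, plan, max_cycles=100):
    """Repeat: sense local obstacles at the current pose, re-plan a path to
    the goal, execute only the first step, then sense again. Returns the
    final pose (the goal if it was reached within `max_cycles`)."""
    pos = start
    for _ in range(max_cycles):
        if pos == goal:
            return pos
        obstacles = sense(pos)          # fresh local world model each cycle
        path = plan(pos, goal, obstacles)
        if not path:
            break  # no feasible step from here; give up rather than guess
        pos = path[0]                   # commit only one step before re-sensing
    return pos
```

Executing just one step per cycle is what makes the loop robust to a world model that is only locally valid, at the cost of frequent re-planning.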

  13. Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2001-01-01

    The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam builds on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro-electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium-ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio-frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an air-bearing table. A pilot-in-the-loop, hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.

  14. Evaluation of a novel flexible snake robot for endoluminal surgery.

    PubMed

    Patel, Nisha; Seneci, Carlo A; Shang, Jianzhong; Leibrandt, Konrad; Yang, Guang-Zhong; Darzi, Ara; Teare, Julian

    2015-11-01

    Endoluminal therapeutic procedures such as endoscopic submucosal dissection are increasingly attractive given the shift in surgical paradigm towards minimally invasive surgery. This novel three-channel articulated robot was developed to overcome the limitations of the flexible endoscope, which poses a number of challenges for endoluminal surgery. The device enables enhanced movement in a restricted workspace, with an improved range of motion and the accuracy required for endoluminal surgery. The aim was to evaluate this novel flexible robot for therapeutic endoluminal surgery in bench-top studies in a research laboratory. Targeting and navigation tasks were performed to explore the robot's range of motion and retroflexion capabilities, and complex endoluminal tasks such as endoscopic mucosal resection were also simulated. Successful completion, accuracy, and time to perform the bench-top tasks were the main outcome measures. The robot's range of movement, retroflexion, and navigation capabilities were demonstrated, and the device showed significantly greater targeting accuracy in a retroflexed position than a conventional endoscope. Limitations are the bench-top setting and the small study sample. We were able to demonstrate a number of simulated endoscopy tasks such as navigation, targeting, snaring, and retroflexion. The improved targeting accuracy in a difficult configuration is extremely promising and may facilitate endoluminal surgery, which has been notoriously challenging with a conventional endoscope.

  15. The impact of goal-oriented task design on neurofeedback learning for brain-computer interface control.

    PubMed

    McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T

    2018-02-01

    Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.

  16. Human leader and robot follower team: correcting leader's position from follower's heading

    NASA Astrophysics Data System (ADS)

    Borenstein, Johann; Thomas, David; Sights, Brandon; Ojeda, Lauro; Bankole, Peter; Fellars, Donald

    2010-04-01

    In multi-agent scenarios, there can be a disparity in the quality of position estimation amongst the various agents. Here, we consider the case of two agents, a leader and a follower, following the same path, in which the follower has a significantly better estimate of position and heading. This may be applicable to many situations, such as a robotic "mule" following a soldier. Another example is that of a convoy, in which only one vehicle (not necessarily the leading one) is instrumented with precision navigation instruments while all other vehicles use lower-precision instruments. We present an algorithm, called Follower-derived Heading Correction (FDHC), which substantially improves estimates of the leader's heading and, subsequently, position. Specifically, FDHC produces a very accurate estimate of heading errors caused by slow-changing errors (e.g., those caused by drift in gyros) of the leader's navigation system and corrects those errors.
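    The core idea can be sketched in a few lines. This is a hypothetical illustration of the FDHC concept only, not the paper's algorithm: the follower, re-traversing the leader's path, re-measures heading at points the leader already passed, and the low-pass-filtered discrepancy estimates the leader's slow gyro drift. The function name and filter constant are invented for the example.

```python
# Hypothetical sketch of the FDHC idea (names and gains are illustrative).
# Paired heading samples are taken at the same path points; their
# difference, low-pass filtered, estimates the leader's slow gyro drift.

def fdhc_correction(leader_headings, follower_headings, alpha=0.1):
    """Return drift-corrected leader headings (degrees)."""
    drift = 0.0
    corrected = []
    for lead_h, foll_h in zip(leader_headings, follower_headings):
        # Innovation: leader's estimate vs. the follower's (better) heading.
        drift += alpha * ((lead_h - foll_h) - drift)  # low-pass filter
        corrected.append(lead_h - drift)
    return corrected

# A leader whose gyro drifts 0.5 degrees per step on a straight path:
leader = [0.5 * k for k in range(10)]   # erroneous heading estimates
follower = [0.0] * 10                   # true heading is constant 0
out = fdhc_correction(leader, follower)
```

    Only the slow-changing component is removed, matching the paper's claim that FDHC targets drift-like errors rather than fast noise.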

  17. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  18. An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.

    DTIC Science & Technology

    1997-09-01

    The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles. The approach is to... autonomous vehicles. Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at

  19. Gyro and Accelerometer Based Navigation System for a Mobile Autonomous Robot.

    DTIC Science & Technology

    1985-12-02

    special thanks goes to our thesis advisor Dr. Matthew Kabrisky for having the confidence to turn us loose on this project. Additionally, we would...Wordmaster Word Processor 1 Wordstar Word Processor 1 Virtual Devices Robo A 6802 Cross Assembler 1 Modem 720 Communication Program 1 CP/M Operating

  20. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations: they provide the robot operator with a wealth of information supporting robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. To provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor, since an infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
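    One simple way to realize "enhance, don't obfuscate" is a per-pixel weighted blend in which the IR image contributes only where it is locally bright (hot). This is a minimal sketch of that idea, not the fusion method of the cited work; the gain value is invented.

```python
# Minimal per-pixel fusion sketch (illustrative, not the paper's method):
# the visual image dominates; IR contributes in proportion to its own
# intensity, so hot objects show through smoke without washing out the scene.

def fuse(visual, infrared, ir_gain=0.3):
    """Fuse two grayscale images given as lists of rows of 0-255 values."""
    fused = []
    for vrow, irow in zip(visual, infrared):
        out = []
        for v, ir in zip(vrow, irow):
            w = ir_gain * (ir / 255.0)          # IR weight grows with IR heat
            val = (1 - w) * v + w * ir
            out.append(min(255, int(val + 0.5)))  # round half up, clamp
        fused.append(out)
    return fused

# A dark (smoke-obscured) visual pixel over a hot IR pixel is brightened:
smoky = fuse([[10]], [[255]])
# A bright visual pixel over a cold IR pixel is left unchanged:
clear = fuse([[200]], [[0]])
```

    In a real system the weighting would be computed over local neighborhoods rather than single pixels, but the balancing principle is the same.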

  1. Architecting the Communication and Navigation Networks for NASA's Space Exploration Systems

    NASA Technical Reports Server (NTRS)

    Bhassin, Kul B.; Putt, Chuck; Hayden, Jeffrey; Tseng, Shirley; Biswas, Abi; Kennedy, Brian; Jennings, Esther H.; Miller, Ron A.; Hudiburg, John; Miller, Dave

    2007-01-01

    NASA is planning a series of short- and long-duration human and robotic missions to explore the Moon and then Mars. A key objective of the missions is to grow, through a series of launches, a system-of-systems communication, navigation, and timing infrastructure at minimum cost while providing a network-centric infrastructure that maximizes exploration capabilities and science return. There is a strong need to use architecting processes in the mission pre-formulation stage to describe the systems, interfaces, and interoperability needed to implement multiple space communication systems that are deployed over time, yet support interoperability with each deployment phase and with 20 years of legacy systems. In this paper we present a process for defining the architecture of the communications, navigation, and networks needed to support future space explorers with the most adaptable and evolvable network-centric space exploration infrastructure. The process steps presented are: 1) architecture decomposition, 2) defining mission systems and their interfaces, 3) developing the communication, navigation, and networking architecture, and 4) integrating systems, operational and technical views and viewpoints. We demonstrate the process through the architecture development of the communication network for upcoming NASA space exploration missions.

  2. Astrobee Guest Science

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Benavides, Jose; Provencher, Chris; Bualat, Maria; Smith, Marion F.; Mora Vargas, Andres

    2017-01-01

    At the end of 2017, Astrobee will launch three free-flying robots that will navigate the entire US segment of the ISS (International Space Station) and serve as a payload facility. These robots will provide guest science payloads with processor resources, space within the robot for physical attachment, power, communication, propulsion, and human interfaces.

  3. Augmented reality user interface for mobile ground robots with manipulator arms

    NASA Astrophysics Data System (ADS)

    Vozar, Steven; Tilbury, Dawn M.

    2011-01-01

    Augmented Reality (AR) is a technology in which real-world visual data is combined with an overlay of computer graphics, enhancing the original feed. AR is an attractive tool for teleoperated UGV UIs as it can improve communication between robots and users via an intuitive spatial and visual dialogue, thereby increasing operator situational awareness. The successful operation of UGVs often relies upon both chassis navigation and manipulator arm control, and since existing literature usually focuses on one task or the other, there is a gap in mobile robot UIs that take advantage of AR for both applications. This work describes the development and analysis of an AR UI system for a UGV with an attached manipulator arm. The system supplements a video feed shown to an operator with information about geometric relationships within the robot task space to improve the operator's situational awareness. Previous studies on AR systems and preliminary analyses indicate that such an implementation of AR for a mobile robot with a manipulator arm is anticipated to improve operator performance. A full user-study can determine if this hypothesis is supported by performing an analysis of variance on common test metrics associated with UGV teleoperation.

  4. A multitasking behavioral control system for the Robotic All Terrain Lunar Exploration Rover (RATLER)

    NASA Technical Reports Server (NTRS)

    Klarer, P.

    1994-01-01

    An alternative methodology for designing an autonomous navigation and control system is discussed. This generalized hybrid system is based on a less sequential and less anthropomorphic approach than that used in the more traditional artificial intelligence (AI) technique. The architecture is designed to allow both synchronous and asynchronous operations between various behavior modules. This is accomplished by implementing each behavior module and each interconnection node as a stand-alone task linked by intertask communication channels. The proposed architecture allows construction of hybrid systems that employ both subsumption and traditional AI techniques, and it also provides a teleoperator's interface. Implementation of the architecture is planned for the prototype Robotic All Terrain Lunar Explorer Rover (RATLER), which is described briefly.
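    The tasks-plus-channels pattern described above can be sketched with threads and queues. This is an illustrative toy, not RATLER code: each behavior module is a stand-alone task, and modules interact only through channels, so a producer can run asynchronously (never blocking) while a consumer runs synchronously (blocking on the channel). Task names and the obstacle threshold are invented.

```python
import queue
import threading

# Toy sketch of behavior modules as stand-alone tasks joined by channels
# (all names and values are illustrative, not from the RATLER design).

def sensor_task(out_ch, readings):
    for r in readings:
        out_ch.put(r)        # asynchronous: the sensor never waits
    out_ch.put(None)         # sentinel marks end of data

def avoid_obstacle_task(in_ch, cmd_ch, threshold=1.0):
    while True:
        rng = in_ch.get()    # synchronous: block until the next reading
        if rng is None:
            cmd_ch.put("stop")
            break
        cmd_ch.put("turn" if rng < threshold else "forward")

sensor_ch, cmd_ch = queue.Queue(), queue.Queue()
t1 = threading.Thread(target=sensor_task, args=(sensor_ch, [2.5, 0.6, 3.0]))
t2 = threading.Thread(target=avoid_obstacle_task, args=(sensor_ch, cmd_ch))
t1.start(); t2.start(); t1.join(); t2.join()
commands = [cmd_ch.get() for _ in range(4)]
```

    Because modules share no state, a subsumption-style arbiter or a teleoperator interface can be added as just another task reading the same channels.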

  5. Designing speech-based interfaces for telepresence robots for people with disabilities.

    PubMed

    Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David

    2013-06-01

    People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.

  6. [Force-based local navigation in robot-assisted implantation bed anlage in the lateral skull base. An experimental study].

    PubMed

    Plinkert, P K; Federspil, P A; Plinkert, B; Henrich, D

    2002-03-01

    Excellent precision, freedom from fatigue, and reproducibility are key characteristics of robots in the operating theatre, making their use for surgery at the lateral skull base of great interest. In earlier experiments we determined process parameters for robot-assisted reaming of a cochlear implant bed and for a mastoidectomy. Those results suggested that the drilling parameters for the robot needed to be optimized. We therefore implemented a reaming path derived from the geometrical data of the implant, together with force-based process control, for robot-assisted reaming at the lateral skull base. Experiments were performed with an industrial robot on animal and human skull-base specimens. Online force measurement and feedback of the sensor data controlled the reaming process: when force values rose above a defined limit, the feed rate was automatically reduced. Furthermore, contact between the drill and the dura mater could be detected by analyzing the force values. With the new computer program the desired implant bed was prepared exactly. Our experiments demonstrated successful reaming of an implant bed at the lateral skull base with a robot: force-based process control provides local navigation and enables careful drilling.
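    The force-based control rule described above can be sketched as a simple per-sample step: back off the feed rate when the measured force exceeds a limit, and flag a sudden force drop as possible breakthrough near the dura. All numeric values here are invented for illustration; the study's actual limits and detection criteria are not given in the abstract.

```python
# Illustrative force-feedback drilling step (parameters invented, not
# from the study): excess force slows the feed; a sharp force drop is
# flagged as possible breakthrough toward the dura mater.

def drill_step(force, feed, prev_force,
               force_limit=5.0, slow_factor=0.5, drop_ratio=0.5):
    if force > force_limit:
        feed *= slow_factor                 # back off when pressing too hard
    breakthrough = prev_force > 1.0 and force < drop_ratio * prev_force
    return feed, breakthrough

feed, prev, events = 1.0, 0.0, []
for f in [2.0, 6.0, 6.5, 2.0]:              # force samples in N; drop at end
    feed, bt = drill_step(f, feed, prev)
    events.append((round(feed, 3), bt))
    prev = f
```

    The feed rate halves twice while the force limit is exceeded, and the final sample's sharp drop raises the breakthrough flag.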

  7. Bio-inspired Computing for Robots

    NASA Technical Reports Server (NTRS)

    Laufenberg, Larry

    2003-01-01

    Living creatures may provide algorithms to enable active sensing/control systems in robots. Active sensing could enable planetary rovers to feel their way in unknown environments. The surface of Jupiter's moon Europa consists of fractured ice over a liquid sea that may contain microbes similar to those on Earth. To explore such extreme environments, NASA needs robots that autonomously survive, navigate, and gather scientific data. They will be too far away for guidance from Earth. They must sense their environment and control their own movements to avoid obstacles or investigate a science opportunity. To meet this challenge, CICT's Information Technology Strategic Research (ITSR) Project is funding neurobiologists at NASA's Jet Propulsion Laboratory (JPL) and selected universities to search for biologically inspired algorithms that enable robust active sensing and control for exploratory robots. Sources for these algorithms are living creatures, including rats and electric fish.

  8. Localization from Visual Landmarks on a Free-Flying Robot

    NASA Technical Reports Server (NTRS)

    Coltin, Brian; Fusco, Jesse; Moratto, Zack; Alexandrov, Oleg; Nakamura, Robert

    2016-01-01

    We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on board the International Space Station (ISS). Astrobee will conduct experiments in microgravity, as well as assist astronauts and ground controllers. Astrobee replaces the SPHERES robots currently operating on the ISS, which were limited to a small cube because their localization system relied on triangulation from ultrasonic transmitters. Astrobee localizes with only monocular vision and an IMU, enabling it to traverse the entire US segment of the station. Features detected on a previously built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise. Finally, we extensively evaluate the behavior of the filter on a two-dimensional testing surface.
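    The predict/update structure of such a filter can be shown in one dimension. This is a minimal textbook Kalman sketch of the fusion idea only, not Astrobee's EKF (which estimates full 6-DOF pose); the noise values are invented.

```python
# 1-D Kalman filter sketch of the fusion idea (illustrative values):
# IMU readings propagate the state and grow its uncertainty; a map-feature
# position fix then corrects the state and shrinks the uncertainty.

def predict(x, p, imu_displacement, q=0.1):
    """Propagate position estimate x and variance p with an IMU increment."""
    return x + imu_displacement, p + q

def update(x, p, z, r=0.5):
    """Correct with a feature-match position measurement z of variance r."""
    k = p / (p + r)                 # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0
x, p = predict(x, p, 1.0)           # IMU says we moved 1.0 m
x, p = update(x, p, 0.8)            # feature match says we are at 0.8 m
```

    After the update the estimate lies between the IMU prediction and the visual fix, and the variance is smaller than before the measurement, which is the behavior the full EKF exhibits per landmark observation.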

  9. Autonomous navigation method for substation inspection robot based on travelling deviation

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting

    2017-06-01

    A new edge-detection method for the substation environment is proposed to realize autonomous navigation of the substation inspection robot. First, the road image and information are obtained using an image acquisition device. Second, noise in a region of interest selected from the road image is removed with digital image processing, the road edges are extracted by the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travel deviation is obtained. The robot's walking route is controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.
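    The final control step reduces to a small rule once the boundary distances are known. The sketch below assumes the deviation is the signed offset from the lane center and uses an invented threshold; the paper's actual threshold and steering commands are not given in the abstract.

```python
# Steering rule sketch (threshold and command names invented): from the
# detected left/right boundary distances, compute the lateral deviation
# from the road center and steer only when it exceeds a preset threshold.

def steering_command(d_left, d_right, threshold=0.2):
    """d_left/d_right: distances (m) to the Hough-extracted boundaries."""
    deviation = (d_left - d_right) / 2.0   # > 0 means right of center
    if deviation > threshold:
        return "steer_left"
    if deviation < -threshold:
        return "steer_right"
    return "straight"

cmd = steering_command(2.0, 1.0)           # robot drifted right of center
```

    The dead band around zero keeps the robot from oscillating on small, noisy deviations, which matches the paper's use of a preset threshold.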

  10. Accurate multi-robot targeting for keyhole neurosurgery based on external sensor monitoring.

    PubMed

    Comparetti, Mirko Daniele; Vaccarella, Alberto; Dyagilev, Ilya; Shoham, Moshe; Ferrigno, Giancarlo; De Momi, Elena

    2012-05-01

    Robotics has recently been introduced in surgery to improve intervention accuracy, to reduce invasiveness, and to allow new surgical procedures. In this framework, the ROBOCAST system is an optically surveyed multi-robot chain aimed at enhancing the accuracy of surgical probe insertion during keyhole neurosurgery. The system encompasses three robots, connected as a multiple kinematic chain (serial and parallel) totalling 13 degrees of freedom, and it is used to automatically align the probe onto a planned trajectory. The probe is then inserted in the brain, towards the planned target, by means of a haptic interface. This paper presents a new iterative targeting approach for surgical robotic navigation, in which the multi-robot chain aligns the surgical probe to the planned pose and an external sensor is used to decrease the alignment errors. The iterative targeting was tested in an operating-room environment using a skull phantom, with targets selected on magnetic resonance images. The proposed targeting procedure achieves a residual median Euclidean distance of about 0.3 mm between the planned and the actual targets, thus satisfying the surgical accuracy requirement (1 mm) set by the resolution of the medical images used. Performance proved to be independent of the calibration accuracy of the robot's optical sensor.
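    The iterative loop can be sketched abstractly: the external sensor measures the residual alignment error after each robot move, and a correction is re-applied until the error falls below the surgical tolerance. The gain and the assumption that each move removes a fixed fraction of the error are invented for illustration; the paper's actual control law is not given in the abstract.

```python
# Sketch of sensor-in-the-loop iterative targeting (gain and convergence
# model are illustrative, not from the ROBOCAST paper): repeat
# measure-then-correct until the residual is within tolerance.

def iterative_targeting(error_mm, gain=0.8, tolerance=1.0, max_iter=20):
    """Return (residual error in mm, number of correction iterations)."""
    iterations = 0
    while abs(error_mm) > tolerance and iterations < max_iter:
        correction = gain * error_mm    # move commanded from the sensed error
        error_mm -= correction          # residual after the imperfect move
        iterations += 1
    return error_mm, iterations

residual, iters = iterative_targeting(10.0)   # start 10 mm off target
```

    Because the loop closes on the external sensor rather than on the robots' own kinematics, the final accuracy is set by the sensor, which is why the paper finds it independent of the robots' calibration.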

  11. ATON (Autonomous Terrain-based Optical Navigation) for exploration missions: recent flight test results

    NASA Astrophysics Data System (ADS)

    Theil, S.; Ammann, N.; Andert, F.; Franz, T.; Krüger, H.; Lehner, H.; Lingenauber, M.; Lüdtke, D.; Maass, B.; Paproth, C.; Wohlfeil, J.

    2018-03-01

    Since 2010 the German Aerospace Center (DLR) has been working on the project Autonomous Terrain-based Optical Navigation (ATON). Its objective is the development of technologies that allow autonomous navigation of spacecraft in orbit around, and during landing on, celestial bodies such as the Moon, planets, asteroids, and comets. The project developed different image processing techniques and optical navigation methods as well as sensor data fusion. The setup, which is applicable to many exploration missions, consists of an inertial measurement unit, a laser altimeter, a star tracker, and one or multiple navigation cameras. In the past years, several milestones have been achieved. It started with the setup of a simulation environment including the detailed simulation of camera images. This was continued by hardware-in-the-loop tests in the Testbed for Robotic Optical Navigation (TRON), where images were generated by real cameras in a simulated, downscaled lunar landing scene. Data were recorded in helicopter flight tests and post-processed in real time to increase the maturity of the algorithms and to optimize the software. Recently, two more milestones have been achieved. In late 2016, the whole navigation system was flown on an unmanned helicopter while processing all sensor information onboard in real time. For the latest milestone the navigation system was tested in closed loop on the unmanned helicopter: the ATON navigation system provided the navigation state for the guidance and control of the helicopter, replacing the GPS-based standard navigation system. The paper gives an introduction to the ATON project and its concept, briefly describes the methods and algorithms of ATON, and presents and discusses the flight test results of the latest two milestones.

  12. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhaus, Maarten; Alena, Richard L.; Berrios, Daniel; Dowding, John; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail

    2005-01-01

    We have developed and tested an advanced EVA communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. This system, called Mobile Agents (MA), is voice controlled and provides information verbally to the astronauts through programs called personal agents. The system partly automates the role of CapCom in Apollo, including monitoring and managing EVA navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. EVA data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in the context of use, including six years of ethnographic observation of field geology. Our approach is to develop automation that supports human work practices, allowing people to do what they do well and to work in the ways they are most familiar with. Field experiments in Utah have enabled empirically discovering requirements and testing alternative technologies and protocols. This paper reports on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to move and wait in various locations to serve as a relay on the wireless network. The MA system is applicable to many space work situations that involve creating and navigating from maps (including configuring equipment for local topology), interacting with piloted and unpiloted rovers, adapting to environmental conditions, and remote team collaboration involving people and robots.

  13. HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.

    PubMed

    Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael

    2017-07-01

    Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head-mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on the observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the three factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters, following a 2 × 2 × 2 repeated-measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected, and participants completed questionnaires on their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent measures, whereas the effect of the screen output type was only significant on the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented.

  14. Closing the Loop: Control and Robot Navigation in Wireless Sensor Networks

    DTIC Science & Technology

    2006-09-05

    University of California at Berkeley Technical Report No. UCB/EECS-2006-112, http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-112.html, September 5, 2006.

  15. Image Mapping and Visual Attention on the Sensory Ego-Sphere

    NASA Technical Reports Server (NTRS)

    Fleming, Katherine Achim; Peters, Richard Alan, II

    2012-01-01

    The Sensory Ego-Sphere (SES) is a short-term memory for a robot in the form of an egocentric, tessellated, spherical, sensory-motor map of the robot's locale. Visual attention enables fast alignment of overlapping images without warping or position optimization, since an attentional point (AP) on the composite typically corresponds to one on each of the collocated regions in the images. Such alignment speeds analysis of the multiple images of the area. Compositing and attention were performed in two ways and compared: (1) APs were computed directly on the composite and not on the full-resolution images until the time of retrieval; and (2) the attentional operator was applied to all incoming imagery. It was found that although the second method was slower, it produced consistent and, thereby, more useful APs. The SES is an integral part of a control system that will enable a robot to learn new behaviors based on its previous experiences, and to recombine its known behaviors in such a way as to solve related, but novel, task problems with apparent creativity. The approach is to combine sensory-motor data association and dimensionality reduction to learn navigation and manipulation tasks as sequences of basic behaviors that can be implemented with a small set of closed-loop controllers. Over time, the aggregate of behaviors and their transition probabilities forms a stochastic network. Given a task, the robot then finds a path in the network that leads from its current state to the goal. The SES provides short-term memory for the cognitive functions of the robot; association of sensory and motor data via spatio-temporal coincidence; direction of the robot's attention; navigation through spatial localization with respect to known or discovered landmarks; and structured data sharing between the robot and human team members, among the individuals in multi-robot teams, or with a C3 center.
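    The path search over the stochastic behavior network can be sketched concretely: maximizing the product of transition probabilities along a behavior sequence is equivalent to a shortest-path search under -log(p) edge weights. The tiny network below is invented for the example; the SES work does not specify a particular search algorithm.

```python
import heapq
import math

# Sketch of finding the most likely behavior sequence in a stochastic
# behavior network (the network below is invented for illustration).
# Dijkstra over -log(p) weights maximizes the product of probabilities.

def most_likely_path(edges, start, goal):
    graph = {}
    for a, b, p in edges:
        graph.setdefault(a, []).append((b, -math.log(p)))
    dist, frontier = {start: 0.0}, [(0.0, start, [start])]
    while frontier:
        d, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, math.exp(-d)   # behavior sequence, its probability
        for nxt, w in graph.get(node, []):
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(frontier, (d + w, nxt, path + [nxt]))
    return None, 0.0

edges = [("idle", "approach", 0.9), ("approach", "grasp", 0.7),
         ("idle", "grasp", 0.5)]
path, prob = most_likely_path(edges, "idle", "grasp")
```

    Here the two-step sequence (0.9 × 0.7 = 0.63) beats the direct transition (0.5), so the planner composes known behaviors rather than jumping straight to the goal.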

  16. Control of an automated mobile manipulator using artificial immune system

    NASA Astrophysics Data System (ADS)

    Deepak, B. B. V. L.; Parhi, Dayal R.

    2016-03-01

    This paper addresses the coordination and control of a wheeled mobile manipulator (WMM) using an artificial immune system. The aim of the developed methodology is to navigate the system autonomously and to transport jobs and tools in manufacturing environments. This study integrates the kinematic structures of a four-axis manipulator and a differential-drive wheeled mobile platform. The motion of the developed WMM is controlled by a complete system of parametric equations in terms of joint velocities, making the manipulator and platform follow desired trajectories within the workspace. The developed robot system acts intelligently according to the sensed environmental criteria within its search space. To verify the effectiveness of the proposed immune-based motion planner for the WMM, simulation as well as experimental results are presented in various unknown environments.

  17. Odometry and Laser Scanner Fusion Based on a Discrete Extended Kalman Filter for Robotic Platooning Guidance

    PubMed Central

    Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier

    2011-01-01

This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system is carried out by fusing two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate for the cumulative error inherent in dead reckoning, whereas the odometry source provides less pose uncertainty over short trajectories. A discrete Extended Kalman Filter, customized for this application, is used to accomplish this aim under real-time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and robot built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications. PMID:22164079

  18. Odometry and laser scanner fusion based on a discrete extended Kalman Filter for robotic platooning guidance.

    PubMed

    Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier

    2011-01-01

This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system is carried out by fusing two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate for the cumulative error inherent in dead reckoning, whereas the odometry source provides less pose uncertainty over short trajectories. A discrete Extended Kalman Filter, customized for this application, is used to accomplish this aim under real-time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and robot built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications.
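The predict/update cycle of such a discrete EKF can be sketched generically; the code below uses a 2D unicycle pose and hypothetical noise covariances, not the authors' actual models:

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate pose (x, y, theta) with odometry (dead reckoning)."""
    theta = x[2]
    x_new = x + np.array([v * dt * np.cos(theta),
                          v * dt * np.sin(theta),
                          w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1, 0, -v * dt * np.sin(theta)],
                  [0, 1,  v * dt * np.cos(theta)],
                  [0, 0,  1]])
    return x_new, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the pose with an absolute fix (e.g. laser plus landmarks)."""
    H = np.eye(3)                     # landmark system observes the pose directly
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 0.01, np.eye(3) * 0.05
for _ in range(50):                   # odometry-only: uncertainty grows
    x, P = ekf_predict(x, P, v=0.2, w=0.05, dt=0.1, Q=Q)
x, P = ekf_update(x, P, z=np.array([1.0, 0.1, 0.25]), R=R)
```

Between laser fixes the filter runs only the prediction step, so pose uncertainty grows exactly as dead reckoning drifts; each landmark observation shrinks it again.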

  19. Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward

    2000-01-01

This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy behaviors is emphasized, which is potentially useful for tuning navigation performance in real-world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided, based on computer simulations.
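A minimal sketch of the threshold-activation idea, assuming a standard weighted-average defuzzification and invented applicability degrees (not the architecture's actual rule bases):

```python
def weighted_blend(behaviors, activation_threshold=0.3):
    """Blend behavior recommendations, but let a behavior vote only when
    its applicability degree exceeds the activation threshold."""
    num = den = 0.0
    for degree, steering in behaviors:
        if degree >= activation_threshold:   # threshold gate
            num += degree * steering
            den += degree
    return num / den if den else 0.0

# (applicability degree, recommended steering in degrees) -- hypothetical values
behaviors = [(0.9, 0.0),     # goal-seeking: go straight
             (0.5, 25.0),    # obstacle-avoid: bear right
             (0.1, -40.0)]   # weakly applicable behavior, gated out
cmd = weighted_blend(behaviors)
```

Raising the threshold suppresses weakly applicable behaviors entirely instead of letting them dilute the command, which is one way such a gate can tune navigation performance.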

  20. Cerebellum Augmented Rover Development

    NASA Technical Reports Server (NTRS)

    King, Matthew

    2005-01-01

Bio-Inspired Technologies and Systems (BITS) are a very natural result of thinking about Nature's way of solving problems. Knowledge of animal behaviors can be used in developing robotic behaviors intended for planetary exploration. This is the expertise of the JPL BITS Group and has served as a philosophical model for NMSU RioRobolab. Navigation is a vital function for any autonomous system: systems must be able to determine a safe path between their current location and some target location. The MER mission, as well as other JPL rover missions, uses a method known as dead reckoning to determine position information. Dead reckoning uses wheel encoders to sense each wheel's rotation. In a sandy environment such as Mars, this method is highly inaccurate because the wheels slip in the sand. Reducing positioning error would allow the speed of an autonomously navigating rover to be greatly increased. Therefore, local navigation based upon landmark tracking is desirable in planetary exploration, and the BITS Group is developing navigation technology based upon landmark tracking. Integration of the current rover architecture with a cerebellar neural network tracking algorithm will demonstrate that this approach to navigation is feasible and should be implemented in future rover and spacecraft missions.
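Dead reckoning from wheel encoders can be sketched as standard differential-drive odometry (the encoder resolution and wheel track below are invented, and this is not the MER flight code). Slip simply inflates the tick counts relative to true travel, which is why the estimate drifts on sand:

```python
import math

def dead_reckon(pose, left_ticks, right_ticks, ticks_per_m=2000.0, track=0.4):
    """Update (x, y, theta) from wheel-encoder counts (differential drive)."""
    dl = left_ticks / ticks_per_m            # left wheel travel (m)
    dr = right_ticks / ticks_per_m           # right wheel travel (m)
    d, dtheta = (dl + dr) / 2.0, (dr - dl) / track
    x, y, th = pose
    # midpoint integration of the arc
    return (x + d * math.cos(th + dtheta / 2),
            y + d * math.sin(th + dtheta / 2),
            th + dtheta)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                          # commanded straight line
    pose = dead_reckon(pose, 200, 200)
# On loose soil the wheels spin while the rover moves less than the
# encoders report, so the estimated pose runs ahead of the true one.
```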

  1. Flight Testing ALHAT Precision Landing Technologies Integrated Onboard the Morpheus Rocket Vehicle

    NASA Technical Reports Server (NTRS)

    Carson, John M. III; Robertson, Edward A.; Trawny, Nikolas; Amzajerdian, Farzin

    2015-01-01

    A suite of prototype sensors, software, and avionics developed within the NASA Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project were terrestrially demonstrated onboard the NASA Morpheus rocket-propelled Vertical Testbed (VTB) in 2014. The sensors included a LIDAR-based Hazard Detection System (HDS), a Navigation Doppler LIDAR (NDL) velocimeter, and a long-range Laser Altimeter (LAlt) that enable autonomous and safe precision landing of robotic or human vehicles on solid solar system bodies under varying terrain lighting conditions. The flight test campaign with the Morpheus vehicle involved a detailed integration and functional verification process, followed by tether testing and six successful free flights, including one night flight. The ALHAT sensor measurements were integrated into a common navigation solution through a specialized ALHAT Navigation filter that was employed in closed-loop flight testing within the Morpheus Guidance, Navigation and Control (GN&C) subsystem. Flight testing on Morpheus utilized ALHAT for safe landing site identification and ranking, followed by precise surface-relative navigation to the selected landing site. The successful autonomous, closed-loop flight demonstrations of the prototype ALHAT system have laid the foundation for the infusion of safe, precision landing capabilities into future planetary exploration missions.

  2. Nonparametric Online Learning Control for Soft Continuum Robot: An Enabling Technique for Effective Endoscopic Navigation

    PubMed Central

    Lee, Kit-Hang; Fu, Denny K.C.; Leong, Martin C.W.; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong

    2017-01-01

Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of their inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation can be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions about the soft manipulator, which can be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric, online, and local training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. An advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments. PMID:29251567

  3. Nonparametric Online Learning Control for Soft Continuum Robot: An Enabling Technique for Effective Endoscopic Navigation.

    PubMed

    Lee, Kit-Hang; Fu, Denny K C; Leong, Martin C W; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong; Kwok, Ka-Wai

    2017-12-01

Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of their inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation can be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions about the soft manipulator, which can be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric, online, and local training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. An advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments.
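The nonparametric, online, local flavor of such inverse-model learning can be illustrated with a toy nearest-neighbor scheme; the linear forward map A and every parameter here are invented for illustration and are far simpler than a soft-robot model:

```python
import numpy as np

class LocalInverseModel:
    """Nonparametric, online, local learning of an inverse model: store
    (task-space change, actuation change) pairs and answer queries with a
    locally weighted average over the nearest stored samples."""
    def __init__(self, k=5):
        self.X, self.U, self.k = [], [], k

    def add(self, dx, du):                    # online training sample
        self.X.append(np.asarray(dx))
        self.U.append(np.asarray(du))

    def query(self, dx_desired):
        dx_desired = np.asarray(dx_desired)
        d = np.array([np.linalg.norm(x - dx_desired) for x in self.X])
        idx = np.argsort(d)[:self.k]          # local neighborhood only
        w = 1.0 / (d[idx] + 1e-6)             # closer samples weigh more
        return sum(wi * self.U[i] for wi, i in zip(w, idx)) / w.sum()

rng = np.random.default_rng(0)
model = LocalInverseModel()
A = np.array([[2.0, 0.0], [0.0, 0.5]])        # unknown forward map: dx = A @ du
for _ in range(300):
    du = rng.uniform(-1, 1, 2)
    model.add(A @ du, du)                     # learn from observed motion only
du = model.query([0.4, 0.1])                  # actuation for a desired dx
```

Because only the local neighborhood answers each query, the model needs no global structural parameters and keeps adapting as new samples arrive.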

  4. Cognitive patterns: giving autonomy some context

    NASA Astrophysics Data System (ADS)

    Dumond, Danielle; Stacy, Webb; Geyer, Alexandra; Rousseau, Jeffrey; Therrien, Mike

    2013-05-01

    Today's robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low, detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience and, therefore, do not become better with time or apply previous knowledge to new situations. With multi-robot teams, human operators, in addition to managing the low-level details of navigation and sensor management while operating single robots, are also required to manage inter-robot interactions. To make the most use of robots in combat environments, it will be necessary to have the capability to assign them new missions (including providing them context information), and to have them report information about the environment they encounter as they proceed with their mission. The Cognitive Patterns Knowledge Generation system (CPKG) has the ability to connect to various knowledge-based models, multiple sensors, and to a human operator. The CPKG system comprises three major internal components: Pattern Generation, Perception/Action, and Adaptation, enabling it to create situationally-relevant abstract patterns, match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the mechanisms used for visual perception in the brain, and generate new abstract patterns. The CPKG allows the operator to focus on things other than the operation of the robot(s).

  5. Development of a compact continuum tubular robotic system for nasopharyngeal biopsy.

    PubMed

    Wu, Liao; Song, Shuang; Wu, Keyu; Lim, Chwee Ming; Ren, Hongliang

    2017-03-01

Traditional posterior nasopharyngeal biopsy using a flexible nasal endoscope carries the risk of abrasion of and injury to the nasal mucosa, and thus trauma to the patient. Recently, a new class of robots known as continuum tubular robots (CTRs) has provided a novel solution to this challenge, with miniaturized size, curvilinear maneuverability, and the capability of avoiding collisions within the nasal environment. This paper presents a compact CTR that is 35 cm in total length, 10 cm in diameter, 2.15 kg in weight, and easy to integrate with a robotic arm to perform more complicated operations. Structural design, end-effector design, and workspace analysis are described in detail. In addition, teleoperation of the CTR using a haptic input device is developed for position control in 3D space. Moreover, by integrating the robot with three electromagnetic tracking sensors, a navigation system together with a shape reconstruction algorithm is developed. Comprehensive experiments are conducted to test the functionality of the proposed prototype; the results show that under teleoperation the system has an accuracy of 2.20 mm in following a linear path, an accuracy of 2.01 mm in following a circular path, and a latency of 0.1 s. It is also found that the proposed shape reconstruction algorithm has a mean error of around 1 mm along the length of the tubes. Furthermore, the feasibility and effectiveness of the proposed robotic system applied to posterior nasopharyngeal biopsy are demonstrated in a cadaver experiment. The proposed robotic system holds promise to enhance clinical operation in transnasal procedures.

  6. Overview of Fiber-Optical Sensors

    NASA Technical Reports Server (NTRS)

    Depaula, Ramon P.; Moore, Emery L.

    1987-01-01

    Design, development, and sensitivity of sensors using fiber optics reviewed. State-of-the-art and probable future developments of sensors using fiber optics described in report including references to work in field. Serves to update previously published surveys. Systems incorporating fiber-optic sensors used in medical diagnosis, navigation, robotics, sonar, power industry, and industrial controls.

  7. Hyperspectral Imaging and Obstacle Detection for Robotics Navigation

    DTIC Science & Technology

    2005-09-01

(The indexed abstract for this record is extraction residue from a specifications table; the recoverable fragments describe a Brimrose AOTF video adapter with a TeO2 active element and hyperspectral pixel samples of colored surfaces, such as vehicle body panels, used for obstacle detection.)

  8. PRIMUS: autonomous navigation in open terrain with a tracked vehicle

    NASA Astrophysics Data System (ADS)

    Schaub, Guenter W.; Pfaendner, Alfred H.; Schaefer, Christoph

    2004-09-01

The German experimental robotics program PRIMUS (PRogram for Intelligent Mobile Unmanned Systems) has focused on solutions for autonomous driving in unknown open terrain across several project phases, each with specific realization goals, for more than 12 years. The main task of the program is to develop algorithms for a high degree of autonomous navigation skill with off-the-shelf hardware and sensor technology, and to integrate them into military vehicles. For obstacle detection, a Dornier 3D LADAR is integrated on a tracked vehicle, the "Digitized WIESEL 2". For road following, a digital video camera and a visual perception module from the Universität der Bundeswehr München (UBM) have been integrated. This paper gives an overview of the PRIMUS program with a focus on the last program phase, D (2001-2003). This includes the system architecture, the description of the modes of operation, and the technology development, with the focus on obstacle avoidance and obstacle classification using a 3D LADAR. A collection of experimental results and a short look at the next steps in the German robotics program conclude the paper.

  9. Mini AERCam: A Free-Flying Robot for Space Inspection

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven

    2001-01-01

The NASA Johnson Space Center Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a free-flying camera system for remote viewing and inspection of human spacecraft. The AERCam project team is currently developing a miniaturized version of AERCam known as Mini AERCam, a spherical nanosatellite 7.5 inches in diameter. Mini AERCam development builds on the success of AERCam Sprint, a 1997 Space Shuttle flight experiment, by integrating new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving these productivity-enhancing capabilities in a smaller package depends on aggressive component miniaturization. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion, rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for laboratory demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides on-orbit views of the Space Shuttle and International Space Station unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by space-walking crewmembers.

  10. Reactive navigation in extremely dense and highly intricate environments

    PubMed Central

    2017-01-01

Reactive navigation is a well-known paradigm for controlling an autonomous mobile robot, which suggests making all control decisions through some light processing of the current/recent sensor data. Among the many advantages of this paradigm are: 1) the possibility to apply it to robots with limited and low-priced hardware resources, and 2) the ability to safely navigate a robot in completely unknown environments containing unpredictable moving obstacles. As a major disadvantage, however, the reactive paradigm may occasionally cause robots to get trapped in certain areas of the environment; typically, these conflicting areas have a large concave shape and/or are full of closely spaced obstacles. In this last respect, an enormous effort has been devoted to overcoming such a serious drawback during the last two decades. As a result of this effort, a substantial number of new approaches for reactive navigation have been put forward. Some of these approaches have clearly improved the way a reactively controlled robot can move among densely cluttered obstacles; others have focused on increasing the variety of obstacle shapes and sizes that can be successfully circumnavigated. In this paper, as a starting point, we choose the best existing reactive approach for moving in densely cluttered environments, and we also choose the existing reactive approach with the greatest ability to circumvent large intricate-shaped obstacles. Then, we combine these two approaches in a way that makes the most of them. From the experimental point of view, we use both simulated and real scenarios of challenging complexity for testing purposes. In such scenarios, we demonstrate that the combined approach herein proposed clearly outperforms the two individual approaches on which it is built. PMID:29287078

  11. Clinical applicability of robot-guided contact-free laser osteotomy in cranio-maxillo-facial surgery: in-vitro simulation and in-vivo surgery in minipig mandibles.

    PubMed

    Baek, K-W; Deibel, W; Marinov, D; Griessen, M; Bruno, A; Zeilhofer, H-F; Cattin, Ph; Juergens, Ph

    2015-12-01

Lasers were used in medicine soon after their invention. However, it has become possible to excise hard tissue with lasers only recently, and the Er:YAG laser is now established in the treatment of damaged teeth. Recently, experimental studies have investigated its use in bone surgery, where its major advantages are freedom of cutting geometry and precision. However, these advantages become apparent only when the system is used with robotic guidance. The main challenge is ergonomic integration of the laser and the robot; otherwise the surgeon's space in the operating theatre is obstructed during the procedure. Here we present our first experiences with an integrated, miniaturised laser system guided by a surgical robot. An Er:YAG laser source and the corresponding optical system were integrated into a composite casing that was mounted on a surgical robotic arm. The robot-guided laser system was connected to a computer-assisted preoperative planning and intraoperative navigation system, and the laser osteotome was used in an operating theatre to create defects of different shapes in the mandibles of 6 minipigs. Similar defects were created on the opposite side with a piezoelectric (PZE) osteotome and a conventional drill guided by a surgeon. The performance was analysed from the points of view of workflow, ergonomics, ease of use, and safety features. The integrated robot-guided laser osteotome can be used ergonomically in the operating theatre, and the computer-assisted, robot-guided laser osteotome is likely to be suitable for clinical use in ostectomies that require considerable accuracy and individual shape. Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Bender, S.R.

Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.
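A toy version of the three memory behaviors applied to limit-cycle recognition (the grid size, window length, and revisit threshold are invented values, not the authors' design):

```python
import math
from collections import deque

class CycleMemory:
    """Memory layer for a reactive navigator: record the cells the robot
    enters and flag a limit cycle when it keeps returning to the same place."""
    def __init__(self, cell=0.5, window=200, revisit_limit=3):
        self.cell = cell
        self.visits = {}                          # memory creation
        self.history = deque(maxlen=window)       # memory management: bounded
        self.revisit_limit = revisit_limit

    def update(self, x, y):
        """Memory utilization: returns True when a cycle is suspected."""
        key = (round(x / self.cell), round(y / self.cell))
        if self.history and key == self.history[-1]:
            return False                          # still in the same cell
        if len(self.history) == self.history.maxlen:
            self.visits[self.history[0]] -= 1     # forget what leaves the window
        self.history.append(key)
        self.visits[key] = self.visits.get(key, 0) + 1
        return self.visits[key] > self.revisit_limit

mem = CycleMemory()
trapped = False
for step in range(400):                           # robot circling an obstacle
    t = step * 0.1
    trapped = mem.update(math.cos(t), math.sin(t))
    if trapped:
        break
```

When the flag fires, a supervisory behavior can override the reactive rules, e.g. by committing to a wall-following direction until the memory clears.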

  13. An assisted navigation training framework based on judgment theory using sparse and discrete human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano

    2009-01-01

This paper presents a new framework to train people with severe motor disabilities in steering an assisted mobile robot (AMR), such as a powered wheelchair. Users with a high level of motor disability are not able to use standard HMIs, which provide a continuous command signal (e.g. a standard joystick). For this reason, HMIs providing a small set of simple commands, sparse and discrete in time, must be used (e.g. a scanning interface or a brain-computer interface), making it very difficult to steer the AMR. In this sense, the assisted navigation training framework (ANTF) is designed to train users to drive the AMR in indoor structured environments using this type of HMI. Additionally, it characterizes the user's competence in steering the robot, which will later be used to adapt the AMR navigation system to that competence. A rule-based lens (RBL) model is used to characterize users driving the AMR. Individual judgment performance in choosing the best manoeuvres is modeled using a genetic-based policy capturing (GBPC) technique suited to inferring non-compensatory judgment strategies from human decision data. Three user models, at three different learning stages, using the RBL paradigm are presented.

  14. AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation

    PubMed Central

    Yuan, Xin; Martínez-Ortega, José-Fernán; Fernández, José Antonio Sánchez; Eckert, Martina

    2017-01-01

In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is the enhancement of the accuracy and robustness of SLAM-based navigation for underwater robotics at low computational cost. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Here, the prediction and update stages (which also exist in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, the AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. In the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, which avoids the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method reliably achieves map revisiting and consistent map updating on loop closure. PMID:28531135
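The augmentation bookkeeping described above can be sketched as follows; for brevity this toy version initializes the new landmark's cross-covariances to zero, which a full AEKF would instead derive from the current robot pose uncertainty:

```python
import numpy as np

def augment_state(x, P, landmark_xy, R_init):
    """Append a newly observed landmark to the single SLAM state vector
    (the 'augmentation stage' on top of the usual EKF predict/update)."""
    n = x.size
    x_aug = np.concatenate([x, landmark_xy])
    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P                     # existing pose/landmark covariance
    P_aug[n:, n:] = R_init                # initial landmark uncertainty
    return x_aug, P_aug

def remove_landmark(x, P, idx):
    """Delete landmark idx from the map (pose occupies the first 3 slots),
    the pruning half of landmark addition-and-removal map management."""
    drop = (3 + 2 * idx, 4 + 2 * idx)
    keep = [i for i in range(x.size) if i not in drop]
    return x[keep], P[np.ix_(keep, keep)]

x = np.array([0.0, 0.0, 0.0])             # robot pose (x, y, theta)
P = np.eye(3) * 0.01
x, P = augment_state(x, P, np.array([2.0, 1.0]), np.eye(2) * 0.5)
x, P = augment_state(x, P, np.array([-1.0, 3.0]), np.eye(2) * 0.5)
x, P = remove_landmark(x, P, 0)           # prune a spurious landmark
```

Keeping the map in one jointly estimated vector is what lets a loop-closure update correct every landmark at once.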

  15. Nonlinear adaptive formation control for a class of autonomous holonomic planetary exploration rovers

    NASA Astrophysics Data System (ADS)

    Ganji, Farid

This dissertation presents novel nonlinear adaptive formation controllers for a heterogeneous group of holonomic planetary exploration rovers navigating over flat terrains with unknown soil types and surface conditions. A leader-follower formation control architecture is employed. In the first part, using a point-mass model for robots and a Coulomb-viscous friction model for terrain resistance, direct adaptive control laws and a formation speed-adaptation strategy are developed for formation navigation over unknown and changing terrain in the presence of actuator saturation. On-line estimates of terrain frictional parameters compensate for unknown terrain resistance and its variations. In saturation events over difficult terrain, the formation speed is reduced based on the speed of the slowest saturated robot, using internal fleet communication and a speed-adaptation strategy, so that the formation error stays bounded and small. A formal proof for asymptotic stability of the formation system in non-saturated conditions is given. The performance of the robot controllers is verified using a modular 3-robot formation simulator. Simulations show that the formation errors reduce to zero asymptotically under non-saturated conditions, as is guaranteed by the theoretical proof. In the second part, the proposed adaptive control methodology is extended for formation control of a class of omnidirectional rovers with three independently-driven universal holonomic rigid wheels, where the rovers' rigid-body dynamics, drive-system electromechanical characteristics, and wheel-ground interaction mechanics are incorporated. Holonomic rovers have the ability to move simultaneously and independently in translation and rotation, rendering great maneuverability and agility, which makes them suitable for formation navigation. Novel nonlinear adaptive control laws are designed for the input voltages of the three wheel-drive motors.
The motion resistance, which is due to the sinkage of rover wheels in soft planetary terrain, is modeled using classical terramechanics theory. The unknown system parameters for adaptive estimation pertain to the rolling resistance forces and scrubbing resistance torques at the wheel-terrain interfaces. Novel terramechanical formulas for terrain resistance forces and torques are derived via considering the universal holonomic wheels as rigid toroidal wheels moving forward and/or sideways as well as turning on soft ground. The asymptotic stability of the formation control system is rigorously proved using Lyapunov's direct method.
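The speed-adaptation strategy reduces, in essence, to a minimum over the fleet; a one-function sketch with invented follower states (not the dissertation's control laws):

```python
def formation_speed(nominal, followers):
    """Speed-adaptation rule: if any follower's actuators saturate over
    difficult terrain, slow the whole formation to the slowest saturated
    robot so the leader never outruns it and the formation error stays
    bounded. Speeds are in m/s; follower states are hypothetical."""
    saturated = [f["achievable"] for f in followers if f["saturated"]]
    return min([nominal] + saturated)

followers = [{"saturated": False, "achievable": 0.9},
             {"saturated": True,  "achievable": 0.4},   # struggling in soft soil
             {"saturated": True,  "achievable": 0.6}]
v_formation = formation_speed(1.0, followers)           # slow to the weakest
```

With no saturated follower, the rule returns the nominal speed unchanged, so the adaptation only activates in saturation events.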

  16. SAVA 3: A testbed for integration and control of visual processes

    NASA Technical Reports Server (NTRS)

    Crowley, James L.; Christensen, Henrik

    1994-01-01

The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12 axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously; (2) integrate software contributions from geographically dispersed laboratories; (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects; (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance; and (5) be dynamically reconfigurable.

  17. The MITy micro-rover: Sensing, control, and operation

    NASA Technical Reports Server (NTRS)

    Malafeew, Eric; Kaliardos, William

    1994-01-01

    The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.

  18. Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
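A classical potential-field reading of the homing plus collision-avoidance combination; the gains, influence radius, and step cap below are invented stabilizers for the sketch, not values from the paper's analysis:

```python
import numpy as np

def navigation_step(pos, goal, others, dt=0.1, k_att=1.0, k_rep=0.5,
                    d0=1.5, max_step=0.2):
    """One step of homing with pairwise collision avoidance: attraction
    toward the goal plus inverse-distance repulsion from every other robot
    or obstacle inside the influence radius d0."""
    force = k_att * (goal - pos)                    # homing term
    for q in others:
        d = np.linalg.norm(pos - q)
        if 0 < d < d0:                              # repel only inside d0
            force += k_rep * (1/d - 1/d0) / d**2 * (pos - q) / d
    step = dt * force
    n = np.linalg.norm(step)
    if n > max_step:                                # cap step for stability
        step *= max_step / n
    return pos + step

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacle = np.array([2.5, 0.4])                     # robot parked near the path
for _ in range(200):
    pos = navigation_step(pos, goal, [obstacle])    # detours, then homes
```

Because the other robot sits off the direct line to the goal, the repulsion deflects the trajectory around it and the homing term then completes the approach.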

  19. Quantifying Traversability of Terrain for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Seraji, Homayoun; Werger, Barry

    2005-01-01

A document presents an updated discussion on a method of autonomous navigation for a robotic vehicle crossing rough terrain. The method involves, among other things, the use of a measure of traversability, denoted the fuzzy traversability index, which embodies information about the slope and roughness of terrain obtained from analysis of images acquired by cameras mounted on the robot. The improvements presented in the report focus on the use of the fuzzy traversability index to generate a traversability map and a grid map for planning the safest path for the robot. Once grid traversability values have been computed, they are utilized for rejecting unsafe path segments and for computing a traversal-cost function for ranking candidate paths, selected by a search algorithm, from a specified initial position to a specified final position. The output of the algorithm is a set of waypoints designating a path having a minimal traversal cost.
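A crisp stand-in for the fuzzy traversability index and the traversal-cost ranking it feeds; the membership shapes and thresholds are invented for illustration, not taken from the report:

```python
def clamp01(v):
    return max(0.0, min(1.0, v))

def traversability_index(slope_deg, roughness):
    """Stand-in for a fuzzy traversability index in [0, 1]:
    1 = easily traversable, 0 = untraversable."""
    slope_score = clamp01(1.0 - slope_deg / 30.0)   # flatter is better
    rough_score = clamp01(1.0 - roughness)          # smoother is better
    return min(slope_score, rough_score)            # fuzzy AND via min

def path_cost(cells):
    """Cost of a candidate path over (slope_deg, roughness) grid cells;
    any unsafe segment rejects the whole path."""
    tau = [traversability_index(s, r) for s, r in cells]
    if min(tau) < 0.2:
        return float("inf")                         # reject unsafe segments
    return sum(1.0 / t for t in tau)                # low index, high cost

safe  = [(5, 0.10), (8, 0.20), (3, 0.05)]           # gentle candidate path
risky = [(5, 0.10), (28, 0.90), (3, 0.05)]          # steep, rough middle cell
best = min([safe, risky], key=path_cost)
```

A search algorithm would evaluate many such candidates and return the waypoints of the minimal-cost one.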

  20. Learning for Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Angelova, Anelia; Howard, Andrew; Matthies, Larry; Tang, Benyang; Turmon, Michael; Mjolsness, Eric

    2005-01-01

    Robotic ground vehicles for outdoor applications have achieved some remarkable successes, notably in autonomous highway following (Dickmanns, 1987), planetary exploration (1), and off-road navigation on Earth (1). Nevertheless, major challenges remain to enable reliable, high-speed, autonomous navigation in a wide variety of complex, off-road terrain. 3-D perception of terrain geometry with imaging range sensors is the mainstay of off-road driving systems. However, the stopping distance at high speed exceeds the effective lookahead distance of existing range sensors. Prospects for extending the range of 3-D sensors are strongly limited by sensor physics, eye safety of lasers, and related issues. Range sensor limitations also allow vehicles to enter large cul-de-sacs even at low speed, leading to long detours. Moreover, sensing only terrain geometry fails to reveal mechanical properties of terrain that are critical to assessing its traversability, such as potential for slippage, sinkage, and the degree of compliance of potential obstacles. Rovers in the Mars Exploration Rover (MER) mission have become stuck in sand dunes and experienced significant downhill slippage in the vicinity of large rock hazards. Earth-based off-road robots today have very limited ability to discriminate traversable vegetation from non-traversable vegetation or rough ground. It is impossible today to preprogram a system with knowledge of these properties for all types of terrain and weather conditions that might be encountered.

  1. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    PubMed

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  2. Tele-Operated Lunar Rover Navigation Using Lidar

    NASA Technical Reports Server (NTRS)

    Pedersen, Liam; Allan, Mark B.; Utz, Hans Heinrich; Deans, Matthew C.; Bouyssounouse, Xavier; Choi, Yoonhyuk; Fluckiger, Lorenzo; Lee, Susan Y.; To, Vinh; Loh, Jonathan

    2012-01-01

    Near real-time tele-operated driving on the lunar surface remains constrained by bandwidth and signal latency despite the Moon's relative proximity. As part of our work within NASA's Human-Robotic Systems Project (HRS), we have developed a stand-alone modular LIDAR-based safeguarded tele-operation system of hardware, middleware, navigation software and user interface. The system has been installed on two distinct NASA rovers, JSC's Centaur2 lunar rover prototype and ARC's KRex research rover, and tested over several kilometers of tele-operated driving at average sustained speeds of 0.15 - 0.25 m/s around rocks, slopes and simulated lunar craters using a deliberately constrained telemetry link. The navigation system builds onboard terrain and hazard maps, returning highest priority sections to the off-board operator as permitted by bandwidth availability. It also analyzes hazard maps onboard and can stop the vehicle prior to contacting hazards. It is robust to severe pose errors and uses a novel scan alignment algorithm to compensate for attitude and elevation errors.

  3. A robot control architecture supported on contraction theory

    NASA Astrophysics Data System (ADS)

    Silva, Jorge; Sequeira, João; Santos, Cristina

    2017-01-01

    This paper establishes conditions for the stability and success of a global system composed of a mobile robot, a real environment, and a navigation architecture with time constraints. Contraction theory provides the framework, offering tools and properties to prove the stability and convergence of the global system to a unique fixed point that identifies mission success. A stability indicator based on the combination property of contraction is developed to quantify mission success as a stability measure. The architecture is fully designed from C1 nonlinear dynamical systems and feedthrough maps, which makes it amenable to contraction analysis. Experiments in a realistic, uncontrolled environment are carried out to verify whether inherent perturbations of the sensory information and of the environment affect the stability and success of the global system.

  4. Online Aerial Terrain Mapping for Ground Robot Navigation

    PubMed Central

    Peterson, John; Chaudhry, Haseeb; Abdelatty, Karim; Bird, John; Kochersberger, Kevin

    2018-01-01

    This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle's overhead view to inform the ground vehicle's path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS) and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles. PMID:29461496
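
    The class-to-cost pipeline can be sketched as follows. The terrain classes, cost values, and search method shown are hypothetical stand-ins (the abstract does not specify these internals); Dijkstra's algorithm serves as a generic lowest-cost planner over the classified grid:

```python
import heapq

# Hypothetical sketch: terrain classes from the aerial orthomosaic are mapped
# to relative traversal costs, and the cheapest path for the ground vehicle
# is found with Dijkstra's algorithm on the resulting cost grid.

COSTS = {"road": 1.0, "grass": 3.0, "brush": 8.0}   # illustrative class costs

def cheapest_path_cost(grid, start, goal):
    """Cost of the cheapest 4-connected path; entering a cell pays its cost."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                                 # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + COSTS[grid[nr][nc]]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

grid = [["road", "grass", "road"],
        ["road", "brush", "road"],
        ["road", "road",  "road"]]
```

On this toy grid the planner pays to cross the grass cell rather than take the longer detour through brush-free road when the detour is more expensive overall, which is the energy-efficiency trade the system automates.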

  5. Online Aerial Terrain Mapping for Ground Robot Navigation.

    PubMed

    Peterson, John; Chaudhry, Haseeb; Abdelatty, Karim; Bird, John; Kochersberger, Kevin

    2018-02-20

    This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle's overhead view to inform the ground vehicle's path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS) and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles.

  6. Terrain interaction with the quarter scale beam walker

    NASA Technical Reports Server (NTRS)

    Chun, Wendell H.; Price, S.; Spiessbach, A.

    1990-01-01

    Frame walkers are a class of mobile robots that are robust and capable mobility platforms. Variations of the frame walker robot are in commercial use today. Komatsu Ltd. of Japan developed the Remotely Controlled Underwater Surveyor (ReCUS) and Normed Shipyards of France developed the Marine Robot (RM3). Both applications of the frame walker concept satisfied robotic mobility requirements that could not be met by a wheeled or tracked design. One vehicle design concept that falls within this class of mobile robots is the walking beam. A one-quarter scale prototype of the walking beam was built by Martin Marietta to evaluate the potential merits of utilizing the vehicle as a planetary rover. The initial phase of prototype rover testing was structured to evaluate the mobility performance aspects of the vehicle. Performance parameters such as vehicle power, speed, and attitude control were evaluated as a function of the environment in which the prototype vehicle was tested. Subsequent testing phases will address the integrated performance of the vehicle and a local navigation system.

  7. Terrain Interaction With The Quarter Scale Beam Walker

    NASA Astrophysics Data System (ADS)

    Chun, Wendell H.; Price, R. S.; Spiessbach, Andrew J.

    1990-03-01

    Frame walkers are a class of mobile robots that are robust and capable mobility platforms. Variations of the frame walker robot are in commercial use today. Komatsu Ltd. of Japan developed the Remotely Controlled Underwater Surveyor (ReCUS) and Normed Shipyards of France developed the Marine Robot (RM3). Both applications of the frame walker concept satisfied robotic mobility requirements that could not be met by a wheeled or tracked design. One vehicle design concept that falls within this class of mobile robots is the walking beam. A one-quarter scale prototype of the walking beam was built by Martin Marietta to evaluate the potential merits of utilizing the vehicle as a planetary rover. The initial phase of prototype rover testing was structured to evaluate the mobility performance aspects of the vehicle. Performance parameters such as vehicle power, speed, and attitude control were evaluated as a function of the environment in which the prototype vehicle was tested. Subsequent testing phases will address the integrated performance of the vehicle and a local navigation system.

  8. Query Processing for Probabilistic State Diagrams Describing Multiple Robot Navigation in an Indoor Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czejdo, Bogdan; Bhattacharya, Sambit; Ferragut, Erik M

    2012-01-01

    This paper describes the syntax and semantics of multi-level state diagrams to support probabilistic behavior of cooperating robots. The techniques are presented to analyze these diagrams by querying combined robots behaviors. It is shown how to use state abstraction and transition abstraction to create, verify and process large probabilistic state diagrams.

  9. Pose Measurement Performance of the Argon Relative Navigation Sensor Suite in Simulated Flight Conditions

    NASA Technical Reports Server (NTRS)

    Galante, Joseph M.; Eepoel, John Van; Strube, Matt; Gill, Nat; Gonzalez, Marcelo; Hyslop, Andrew; Patrick, Bryan

    2012-01-01

    Argon is a flight-ready sensor suite with two visual cameras, a flash LIDAR, an on- board flight computer, and associated electronics. Argon was designed to provide sensing capabilities for relative navigation during proximity, rendezvous, and docking operations between spacecraft. A rigorous ground test campaign assessed the performance capability of the Argon navigation suite to measure the relative pose of high-fidelity satellite mock-ups during a variety of simulated rendezvous and proximity maneuvers facilitated by robot manipulators in a variety of lighting conditions representative of the orbital environment. A brief description of the Argon suite and test setup are given as well as an analysis of the performance of the system in simulated proximity and rendezvous operations.

  10. A new Concept for High Resolution Benthic Mapping and Data Acquisition: MANSIO-VIATOR

    NASA Astrophysics Data System (ADS)

    Flögel, S.

    2015-12-01

    Environmental conditions within sensitive seafloor ecosystems such as cold-seep provinces or cold-water coral reef communities vary temporally and spatially over a wide range of scales. Some of these are regularly monitored via short periods of intense shipborne activity or low resolution, fixed location studies by benthic lander systems. Long term measurements of larger areas and volumes are usually coupled to costly infrastructure investments such as cabled observatories. In space exploration, a combination of fixed and mobile systems working together is commonly used, e.g. lander systems coupled to rovers, to tackle observational needs that are very similar to deep-sea data acquisition. The analogies between space and deep-sea research motivated the German Helmholtz Association to set up the joint research program ROBEX (Robotic Exploration under extreme conditions). The program objectives are to identify, develop and verify technological synergies between the robotic exploration of e.g. the Moon and the deep sea. Within ROBEX, the mobility of robots is a vital element for research missions due to the valuable scientific return potential from different sites, as opposed to static landers. Within this context, we developed a new mobile crawler system (VIATOR, Latin for traveller) and a fixed lander component for energy and data transfer (MANSIO, Latin for housing/shelter). This innovative MANSIO-VIATOR system has been developed during the past 2.5 years. The caterpillar-driven component is designed to conduct high resolution optical mapping and repeated monitoring of physical and biogeochemical parameters along transects. The system operates fully autonomously, including navigational components such as cameras and laser scanners, as well as marker-based near-field navigation used in space technology. This new concept of data acquisition by a submarine crawler in combination with a fixed lander further opens up marine exploration possibilities.

  11. Biobotic insect swarm based sensor networks for search and rescue

    NASA Astrophysics Data System (ADS)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach surviving victims; robotic swarms or other distributed robotic systems might play a great role in achieving this faster. However, current technology falls short of offering centimeter-scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefited from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  12. Robotically assisted velocity-sensitive triggered focused ultrasound surgery

    NASA Astrophysics Data System (ADS)

    Maier, Florian; Brunner, Alexander; Jenne, Jürgen W.; Krafft, Axel J.; Semmler, Wolfhard; Bock, Michael

    2012-11-01

    Magnetic Resonance (MR) guided Focused Ultrasound Surgery (FUS) of abdominal organs is challenging due to breathing motion and limited patient access in the MR environment. In this work, an experimental robotically assisted FUS setup was combined with an MR-based navigator technique to realize motion-compensated sonications and online temperature imaging. Experiments were carried out in a static phantom, during periodic manual motion of the phantom without triggering, and with triggering to evaluate the triggering method. In contrast to the non-triggered sonication, the results of the triggered sonication show a confined symmetric temperature distribution. In conclusion, the velocity-sensitive navigator can be employed for triggered FUS to compensate for periodic motion. Combined with the robotic FUS setup, flexible treatment of abdominal targets might be realized.
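
    The triggering idea reduces to gating the sonication on the navigator's velocity estimate: fire only near the quiescent phase of the periodic motion. A minimal sketch, with a made-up threshold and sample representation rather than the authors' implementation:

```python
# Hedged sketch: enable the FUS pulse only when the navigator-measured
# tissue velocity magnitude falls below a threshold, i.e., near the
# quiescent phase of periodic breathing motion. Threshold is illustrative.

def should_trigger(velocity_mm_s, threshold_mm_s=2.0):
    """True when motion is slow enough for a motion-compensated sonication."""
    return abs(velocity_mm_s) < threshold_mm_s

def triggered_samples(velocities, threshold_mm_s=2.0):
    """Indices of navigator samples where sonication would be enabled."""
    return [i for i, v in enumerate(velocities)
            if should_trigger(v, threshold_mm_s)]
```

Gating on velocity rather than position means no absolute motion model is needed; the navigator signal alone decides when the target is momentarily stationary.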

  13. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    NASA Astrophysics Data System (ADS)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

    In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.
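
    A minimal stand-in for the pattern-matching step is shown below. Note the paper uses an ART-2 network to establish similarity; this sketch substitutes plain cosine similarity purely to illustrate localizing against reference landmark images:

```python
import math

# Illustration only (cosine similarity replaces the paper's ART-2 network):
# the robot compares the current view's feature vector against reference
# landmark images and localizes at the most similar one.

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_landmark(view, references):
    """Name of the reference landmark whose pattern best matches the view."""
    return max(references,
               key=lambda name: cosine_similarity(view, references[name]))
```

Replacing `cosine_similarity` with a trained network's match score recovers the structure of the paper's approach: localize at the best-matching landmark, then steer toward the next reference view.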

  14. Image-Based Navigation of a Mobile Robot Using an Omnidirectional and Pivotable Camera

    NASA Astrophysics Data System (ADS)

    Nierobisch, Thomas; Hoffmann, Frank; Krettek, Johannes; Bertram, Torsten

    This contribution presents a novel approach to the decoupled control of camera gaze direction and the motion of a mobile robot in the context of image-based navigation. A pivotable monocular camera keeps the features relevant to navigation in the field of view, independently of the robot's motion. The decoupling of the camera gaze direction from the actual robot motion is realized by projecting the features onto a virtual image plane. In the virtual image plane, the appearance of the visual features used for image-based control depends only on the robot's position and is invariant to the camera's actual gaze direction. Because the monocular camera can pivot, the workspace over which a single reference image is suitable for image-based control is significantly enlarged compared to a static camera. This also enables navigation in low-texture environments that offer few usable texture and structure features.

  15. Mobile Robot Navigation and Obstacle Avoidance in Unstructured Outdoor Environments

    DTIC Science & Technology

    2017-12-01

    To pull information from the network, a node subscribes to a specific topic and is able to receive the messages that are published to that topic. The total artificial potential field is characterized "as the sum of an attractive potential pulling the robot toward the goal ... and a repulsive potential" pushing it away from obstacles. Simulation parameters include laser_max = 20 (robot laser view horizon), goaldist = 0.5 (distance metric for reaching the goal), and goali = 1.
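
    The attractive-plus-repulsive construction mentioned in the excerpt is the classical artificial potential field. A generic sketch follows; the gains and influence ranges are illustrative, not the thesis's values:

```python
import math

# Classical artificial potential field: the commanded velocity is the
# negative gradient of an attractive potential toward the goal plus a
# repulsive potential away from nearby obstacles. Gains are illustrative.

def attractive_grad(p, goal, k_att=1.0):
    """Attractive term: pulls the robot straight toward the goal."""
    return (k_att * (goal[0] - p[0]), k_att * (goal[1] - p[1]))

def repulsive_grad(p, obstacle, k_rep=1.0, influence=2.0):
    """Repulsive term: active only within the obstacle's influence range."""
    dx, dy = p[0] - obstacle[0], p[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= influence or d == 0.0:
        return (0.0, 0.0)
    scale = k_rep * (1.0 / d - 1.0 / influence) / (d ** 3)
    return (scale * dx, scale * dy)

def total_field(p, goal, obstacles):
    """Sum of the attractive term and all repulsive terms at position p."""
    vx, vy = attractive_grad(p, goal)
    for ob in obstacles:
        rx, ry = repulsive_grad(p, ob)
        vx, vy = vx + rx, vy + ry
    return (vx, vy)
```

The well-known weakness of this formulation, local minima where the attractive and repulsive terms cancel, is one motivation for the laser-horizon and goal-distance parameters quoted above.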

  16. A Concept of the Differentially Driven Three Wheeled Robot

    NASA Astrophysics Data System (ADS)

    Kelemen, M.; Colville, D. J.; Kelemenová, T.; Virgala, I.; Miková, L.

    2013-08-01

    The paper deals with the concept of a differentially driven three wheeled robot. The main task for the robot is to follow a black navigation line on white ground. The robot also contains anti-collision sensors for avoiding obstacles on the track. Students learn how to process the signals from the sensors and how to control DC motors. Students work with the controller, develop the locomotion algorithm, and can enter a competition.
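
    The line-following task lends itself to a compact control sketch. The two-sensor layout, duty cycles, and turn offsets below are hypothetical teaching values, not the paper's hardware:

```python
# Educational sketch: two reflectance sensors straddle the black line, and
# the differential drive steers toward whichever sensor still sees the line;
# an anti-collision sensor overrides everything and stops the robot.

def motor_speeds(left_on_line, right_on_line, obstacle,
                 base=0.5, turn=0.25):
    """Return (left_motor, right_motor) duty cycles in [0, 1]."""
    if obstacle:
        return (0.0, 0.0)                    # stop for an obstacle on track
    if left_on_line and not right_on_line:
        return (base - turn, base + turn)    # line drifting left: steer left
    if right_on_line and not left_on_line:
        return (base + turn, base - turn)    # line drifting right: steer right
    return (base, base)                      # centered: drive straight
```

Feeding the sensor readings through this function each control cycle gives the bang-bang steering behavior students typically implement first, before refining it into proportional control.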

  17. A self-paced motor imagery based brain-computer interface for robotic wheelchair control.

    PubMed

    Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng

    2011-10-01

    This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair, except for automatic obstacle avoidance based on a laser range finder when necessary. In order for users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed the users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that controlling the simulated robot. Our emphasis is on allowing more potential users to use the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario, and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.

  18. Toward image guided robotic surgery: system validation.

    PubMed

    Herrell, Stanley D; Kwartowitz, David Morgan; Milhoua, Paul M; Galloway, Robert L

    2009-02-01

    Navigation for current robotic assisted surgical techniques is primarily accomplished through a stereo pair of laparoscopic camera images. These images provide standard optical visualization of the surface but provide no subsurface information. Image guidance methods allow the visualization of subsurface information to determine the current position in relationship to that of tracked tools. A robotic image guided surgical system was designed and implemented based on our previous laboratory studies. A series of experiments using tissue mimicking phantoms with injected target lesions was performed. The surgeon was asked to resect "tumor" tissue with and without the augmentation of image guidance using the da Vinci robotic surgical system. Resections were performed and compared to an ideal resection based on the radius of the tumor measured from preoperative computerized tomography. A quantity called the resection ratio, that is, the ratio of resected tissue to the ideal resection, was calculated for each of 13 trials and compared. The mean +/- SD resection ratio of procedures augmented with image guidance was smaller than that of procedures without image guidance (3.26 +/- 1.38 vs 9.01 +/- 1.81, p <0.01). Additionally, procedures using image guidance were shorter (average 8 vs 13 minutes). It was demonstrated that there is a benefit from the augmentation of laparoscopic video with updated preoperative images. Incorporating our image guided system into the da Vinci robotic system improved overall tissue resection, as measured by our metric. Adding image guidance to the da Vinci robotic surgery system may result in the potential for improvements such as the decreased removal of benign tissue while maintaining an appropriate surgical margin.
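
    As a worked example of the metric, assuming for illustration only a spherical ideal resection computed from the CT-measured tumor radius:

```python
import math

# Illustrative sketch of the resection-ratio metric: the ratio of the
# resected tissue volume to an ideal resection whose radius comes from the
# preoperative CT tumor measurement. The spherical model and any margin
# term are assumptions of this sketch, not the paper's exact definition.

def ideal_resection_volume(tumor_radius, margin=0.0):
    """Volume of a spherical ideal resection of radius r + margin."""
    return (4.0 / 3.0) * math.pi * (tumor_radius + margin) ** 3

def resection_ratio(resected_volume, tumor_radius, margin=0.0):
    """Resected volume relative to the ideal; 1.0 is a perfect resection."""
    return resected_volume / ideal_resection_volume(tumor_radius, margin)
```

Under this reading, the reported drop from roughly 9 to 3 means image-guided resections removed about a third as much excess tissue relative to the ideal volume.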

  19. Multi-agent robotic systems and applications for satellite missions

    NASA Astrophysics Data System (ADS)

    Nunes, Miguel A.

    A revolution in the space sector is happening. It is expected that in the next decade there will be more satellites launched than in the previous sixty years of space exploration. Major challenges are associated with this growth of space assets such as the autonomy and management of large groups of satellites, in particular with small satellites. There are two main objectives for this work. First, a flexible and distributed software architecture is presented to expand the possibilities of spacecraft autonomy and in particular autonomous motion in attitude and position. The approach taken is based on the concept of distributed software agents, also referred to as multi-agent robotic system. Agents are defined as software programs that are social, reactive and proactive to autonomously maximize the chances of achieving the set goals. Part of the work is to demonstrate that a multi-agent robotic system is a feasible approach for different problems of autonomy such as satellite attitude determination and control and autonomous rendezvous and docking. The second main objective is to develop a method to optimize multi-satellite configurations in space, also known as satellite constellations. This automated method generates new optimal mega-constellations designs for Earth observations and fast revisit times on large ground areas. The optimal satellite constellation can be used by researchers as the baseline for new missions. The first contribution of this work is the development of a new multi-agent robotic system for distributing the attitude determination and control subsystem for HiakaSat. The multi-agent robotic system is implemented and tested on the satellite hardware-in-the-loop testbed that simulates a representative space environment. The results show that the newly proposed system for this particular case achieves an equivalent control performance when compared to the monolithic implementation. 
In terms of computational efficiency, it is found that the multi-agent robotic system has a consistently lower CPU load of 0.29 +/- 0.03 compared to 0.35 +/- 0.04 for the monolithic implementation, a 17.1 % reduction. The second contribution of this work is the development of a multi-agent robotic system for the autonomous rendezvous and docking of multiple spacecraft. To compute the maneuvers, guidance, navigation and control algorithms are implemented as part of the multi-agent robotic system. The navigation and control functions are implemented using existing algorithms, but one important contribution of this section is the introduction of a new six-degrees-of-freedom guidance method as part of the guidance, navigation and control architecture. This new method is an explicit solution to the guidance problem and is particularly useful for real-time guidance in attitude and position, as opposed to typical guidance methods, which are based on numerical solutions and are therefore computationally intensive. A simulation scenario is run for docking four CubeSats deployed radially from a launch vehicle. Considering fully actuated CubeSats, the simulations show docking maneuvers that are successfully completed within 25 minutes, which is approximately 30% of a full orbital period in low Earth orbit. The final section investigates the problem of optimization of satellite constellations for fast revisit time, and introduces a new method to generate different constellation configurations that are evaluated with a genetic algorithm. Two case studies are presented. The first is the optimization of a constellation for rapid coverage of the oceans of the globe in 24 hours or less. Results show that for an 80 km sensor swath width, 50 satellites are required to cover the oceans with a 24 hour revisit time. The second constellation configuration study focuses on the optimization for the rapid coverage of the North Atlantic Tracks for air traffic monitoring in 3 hours or less.
The results show that for a fixed swath width of 160 km and for a 3 hour revisit time 52 satellites are required.

  20. Rationale and Roadmap for Moon Exploration

    NASA Astrophysics Data System (ADS)

    Foing, B. H.; ILEWG Team

    We discuss the different rationales for Moon exploration. This starts with areas of scientific investigation: clues on the formation and evolution of rocky planets; accretion and bombardment in the inner solar system; comparative planetology processes (tectonics, volcanism, impact cratering, volatile delivery); astrobiology records and the survival of organics; and past, present and future life. The rationale also includes the advancement of instrumentation: remote sensing miniaturised instruments; surface geophysical and geochemistry packages; instrument deployment and robotic arms, nano-rovers, sampling, drilling; sample finders and collectors. There are technologies in robotic and human exploration that drive the creativity and economic competitiveness of our industries: mechatronics and sensors; telecontrol, telepresence, virtual reality; regional mobility rovers; autonomy and navigation; artificially intelligent robots, complex systems, man-machine interfaces and performance. Moon-Mars exploration can inspire solutions for sustainable development on Earth: in-situ utilisation of resources; establishment of permanent robotic infrastructures; environmental protection aspects; life sciences laboratories; support to human exploration. We also report on the IAA Cosmic Study on Next Steps in Exploring Deep Space, ongoing IAA Cosmic Studies, and ongoing ILEWG/IMEWG activities, and we finally discuss possible roadmaps for robotic and human exploration, starting with the Moon-Mars missions for the coming decade and building effectively on joint technology developments.

  1. A Novel Telemanipulated Robotic Assistant for Surgical Endoscopy: Preclinical Application to ESD.

    PubMed

    Zorn, Lucile; Nageotte, Florent; Zanne, Philippe; Legner, Andras; Dallemagne, Bernard; Marescaux, Jacques; de Mathelin, Michel

    2018-04-01

    Minimally invasive surgical interventions in the gastrointestinal tract, such as endoscopic submucosal dissection (ESD), are very difficult for surgeons when performed with standard flexible endoscopes. Robotic flexible systems have been identified as a solution to improve manipulation. However, only a few such systems have reached preclinical trials so far. As a result, novel robotic tools are required. We developed a telemanipulated robotic device, called STRAS, which aims to assist surgeons during intraluminal surgical endoscopy. It is a modular system, based on a flexible endoscope and flexible instruments, which provides 10 degrees of freedom (DoFs). The modularity allows the user to easily set up the robot and to navigate toward the operating area. The robot can then be teleoperated using master interfaces specifically designed to intuitively control all available DoFs. STRAS capabilities have been tested in laboratory conditions and during preclinical experiments. We report 12 colorectal ESDs performed in pigs, in which large lesions were successfully removed. Dissection speeds are compared with those obtained in similar conditions with the manual Anubiscope platform from Karl Storz, and we show significant improvements. These experiments show that STRAS (v2) provides sufficient DoFs, workspace, and force to perform ESD, that it allows a single surgeon to perform all the surgical tasks, and that performance is improved with respect to manual systems. The concepts developed for STRAS are thus validated and could provide surgeons with new tools that improve comfort, ease, and performance in intraluminal surgical endoscopy.

  2. Terrain classification in navigation of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1991-03-01

    In this paper we describe a method of path planning that integrates terrain classification (by means of fractals), the certainty-grid method of spatial representation, Kehtarnavaz-Griswold collision zones, Dubois-Prade fuzzy temporal and spatial knowledge, and non-point-sized qualitative navigational planning. An initially planned ("end-to-end") path is piecewise modified to accommodate known and inferred moving obstacles, and includes attention to time-varying multiple subgoals which may influence a section of the path at a time after the robot has begun traversing that planned path.

  3. Development of Autonomous Boat-Type Robot for Automated Velocity Measurement in Straight Natural River

    NASA Astrophysics Data System (ADS)

    Sanjou, Michio; Nagasaka, Tsuyoshi

    2017-11-01

    The present study describes an automated system to measure river flow velocity. A combination of a camera-tracking system and Proportional/Integral/Derivative (PID) control enables the boat-type robot to remain in position against the mainstream; this allows a reasonable evaluation of the mean velocity from the duty ratio, which corresponds to the rotation speed of the screw propeller. A laser range finder module was installed to measure the local water depth. Reliable laboratory experiments with the prototype boat robot and electromagnetic velocimetry were conducted to obtain a calibration curve that connects the duty ratio and the mean current velocity. The accuracy of remaining at the target point was also examined quantitatively: the fluctuation in the spanwise direction is within half of the robot length, so the robot remains well within the target region. We used two-dimensional navigation tests to confirm that the prototype moved smoothly to the target points and successfully measured the streamwise velocity profiles across the mainstream. Moreover, the present robot was found to move successfully not only in a laboratory flume but also in a small natural river. The robot could move smoothly from the starting point near the operator's site toward the target point where the velocity is measured, and it could evaluate the cross-sectional discharge.
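
    The station-keeping scheme described above can be sketched as a simple feedback loop. This is an illustrative assumption, not the authors' implementation: the class, the gains, and the mapping from camera-tracked offset to propeller duty ratio are hypothetical.

```python
# Hypothetical sketch: a PID loop drives the screw-propeller duty ratio so
# the boat holds position against the current. Gains are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def station_keep(camera_offset_m, controller):
    """Map the camera-tracked streamwise offset to a propeller duty ratio."""
    duty = controller.update(camera_offset_m)
    return max(0.0, min(1.0, duty))  # duty ratio is clamped to [0, 1]

pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
print(station_keep(0.5, pid))  # duty ratio for a 0.5 m downstream drift
```

    Once the duty ratio needed to hold position is known, the calibration curve mentioned in the abstract converts it into a mean current velocity.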

  4. Soft Robotic Manipulator for Improving Dexterity in Minimally Invasive Surgery.

    PubMed

    Diodato, Alessandro; Brancadoro, Margherita; De Rossi, Giacomo; Abidi, Haider; Dall'Alba, Diego; Muradore, Riccardo; Ciuti, Gastone; Fiorini, Paolo; Menciassi, Arianna; Cianchetti, Matteo

    2018-02-01

    Combining the strengths of surgical robotics and minimally invasive surgery (MIS) holds the potential to revolutionize surgical interventions. The MIS advantages for the patients are obvious, but the use of instrumentation suitable for MIS often translates into limits on the surgeon's capabilities (e.g., reduced dexterity and maneuverability, and demanding navigation around organs). To overcome these shortcomings, the application of soft robotics technologies and approaches can be beneficial. The use of devices based on soft materials is already demonstrating several advantages in all the application areas where dexterity and safe interaction are needed. In this article, the authors demonstrate that soft robotics can be used synergistically with traditional rigid tools to improve the capabilities of the robotic system without affecting the usability of the robotic platform. A bioinspired soft manipulator equipped with a miniaturized camera has been integrated with the Endoscopic Camera Manipulator arm of the da Vinci Research Kit from both hardware and software viewpoints. Usability of the integrated system has been evaluated with nonexpert users through a standard protocol to highlight difficulties in controlling the soft manipulator. This is the first time that an endoscopic tool based on soft materials has been integrated into a surgical robot. The soft endoscopic camera can be easily operated through the da Vinci Research Kit master console, thus increasing the workspace and dexterity without limiting intuitive and friendly use.

  5. Observability-Based Guidance and Sensor Placement

    NASA Astrophysics Data System (ADS)

    Hinson, Brian T.

    Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.
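
    The thesis's simulation-driven approach, using measures of observability computed from simulated trajectories, can be illustrated with an empirical observability Gramian obtained by finite-differencing output trajectories from perturbed initial states. This is a generic sketch under assumed interfaces (the `simulate` function and the toy output map are invented for the example), not the algorithm from the thesis.

```python
import numpy as np

def empirical_obs_gramian(simulate, x0, eps=1e-4):
    """Empirical observability Gramian from perturbed simulations.

    simulate(x0) must return the stacked output trajectory y (1-D array)
    for initial state x0. Larger Gramian eigenvalues indicate state
    directions that are easier to reconstruct from the measurements.
    """
    n = len(x0)
    dY = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        y_plus = simulate(x0 + e)
        y_minus = simulate(x0 - e)
        dY.append((y_plus - y_minus) / (2 * eps))  # output sensitivity
    dY = np.array(dY)            # n x m sensitivity matrix
    return dY @ dY.T             # n x n empirical Gramian

# Toy system: the outputs are [x1, x1 + x2], sampled once.
simulate = lambda x: np.array([x[0], x[0] + x[1]])
W = empirical_obs_gramian(simulate, np.array([1.0, 2.0]))
```

    A sensor-placement or guidance search would then rank candidate configurations or trajectories by a scalar measure of `W`, such as its smallest eigenvalue or condition number.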

  6. Artificial pheromone for path selection by a foraging swarm of robots.

    PubMed

    Campo, Alexandre; Gutiérrez, Alvaro; Nouyan, Shervin; Pinciroli, Carlo; Longchamp, Valentin; Garnier, Simon; Dorigo, Marco

    2010-11-01

    Foraging robots involved in a search and retrieval task may create paths to navigate faster in their environment. In this context, a swarm of robots that has found several resources and created different paths may benefit strongly from path selection. Path selection enhances the foraging behavior by allowing the swarm to focus on the most profitable resource, with the possibility for unused robots to stop participating in path maintenance and to switch to another task. In order to achieve path selection, we implement virtual ants that lay artificial pheromone inside a network of robots. Virtual ants are local messages transmitted by robots; they travel along chains of robots and deposit artificial pheromone on the robots that are literally forming the chain and indicating the path. The concentration of artificial pheromone on the robots allows them to decide whether they are part of a selected path. We parameterize the mechanism with a mathematical model and provide an experimental validation using a swarm of 20 real robots. We show that our mechanism favors the selection of the closest resource, is able to select a new path if a selected resource becomes unavailable, and selects a newly detected and better resource when possible. As robots use very simple messages and behaviors, the system would be particularly well suited for swarms of microrobots with minimal abilities.
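
    As a rough illustration of how pheromone-based path selection converges on the closest resource, here is a deterministic mean-field sketch (not the authors' model): virtual-ant flow splits across the chains in proportion to their pheromone, each chain is reinforced inversely to its length, and evaporation forgets old choices.

```python
# Illustrative mean-field sketch of pheromone-based path selection.
# Reinforcement proportional to 1/length plus evaporation creates the
# positive feedback that lets the shortest chain win.

def select_path(path_lengths, rounds=500, evaporation=0.02):
    """Return the index of the chain that accumulates the most pheromone."""
    pher = [1.0] * len(path_lengths)
    for _ in range(rounds):
        total = sum(pher)
        # Ant flow splits proportionally to pheromone; shorter chains
        # receive a larger deposit per unit of flow.
        pher = [(1 - evaporation) * p + (p / total) / length
                for p, length in zip(pher, path_lengths)]
    return max(range(len(pher)), key=pher.__getitem__)

print(select_path([5.0, 12.0]))  # -> 0, the shorter chain
```

    Replacing a length with infinity (resource lost) or appending a shorter entry (better resource found) and iterating further reproduces, in miniature, the re-selection behaviors validated on the real swarm.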

  7. Analysis of time in establishing synchronization radio communication system with expanded spectrum conditions for communication with mobile robots

    NASA Astrophysics Data System (ADS)

    Latinovic, T. S.; Kalabic, S. B.; Barz, C. R.; Petrica, P. Paul; Pop-Vădean, A.

    2018-01-01

    This paper analyzes the influence of the Doppler effect on the time needed to establish synchronization of pseudorandom sequences in radio communication systems with an expanded spectrum. The paper also explores the possibility of using secure wireless communication for modular robots; wireless communication can be used for both local and global communication. We analyzed a radio communication system, including the effects of the Doppler signal on the duration of establishing synchronization between the received and the locally generated pseudorandom sequence. The effects of the variability of the phase between the said sequences, and of the correspondence of the phases of these signals with the acquisition time interval of the received sequences, were analyzed. An analysis of these impacts is essential for signal transmission and the protection of information transfer in communication systems with an expanded spectrum (telecommunications, mobile telephony, Global Navigation Satellite Systems (GNSS), and wireless communication). Results show that wireless communication can provide a safe approach for communication with mobile robots.

  8. Slime mold uses an externalized spatial "memory" to navigate in complex environments.

    PubMed

    Reid, Chris R; Latty, Tanya; Dussutour, Audrey; Beekman, Madeleine

    2012-10-23

    Spatial memory enhances an organism's navigational ability. Memory typically resides within the brain, but what if an organism has no brain? We show that the brainless slime mold Physarum polycephalum constructs a form of spatial memory by avoiding areas it has previously explored. This mechanism allows the slime mold to solve the U-shaped trap problem, a classic test of autonomous navigational ability commonly used in robotics, which requires the slime mold to reach a chemoattractive goal behind a U-shaped barrier. Drawn into the trap, the organism must rely on methods other than gradient-following to escape and reach the goal. Our data show that spatial memory enhances the organism's ability to navigate in complex environments. We provide a unique demonstration of a spatial memory system in a nonneuronal organism, supporting the theory that an externalized spatial memory may be the functional precursor to the internal memory of higher organisms.

  9. Multi-Sensor Person Following in Low-Visibility Scenarios

    PubMed Central

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506

  10. Multi-sensor person following in low-visibility scenarios.

    PubMed

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment.

  11. Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning

    NASA Astrophysics Data System (ADS)

    Kawewong, Aram; Honda, Yutaro; Tsuboyama, Manabu; Hasegawa, Osamu

    Robot path-planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided by junctions into a set of path-fragments. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as associated path-fragments, each connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up exploration. Distinct from other methods, our method does not ignore the important information about the regions between junctions (the path-fragments), and the resultant number of path-fragments is smaller than with other methods. Evaluation is done via physics-based 3D Webots simulations and real-robot experiments, where only distance sensors are available. Results show that our method can represent the environment effectively: it enables the robot to solve the goal-oriented navigation problem in only one episode, which is less than is necessary for most Reinforcement Learning (RL) based methods. The running time is proved finite and scales well with the environment, and the resultant number of path-fragments matches the environment well.

  12. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
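
    The prediction step described above, a Kalman filter with a second-order (constant-acceleration) motion model proposing candidate patches for the CT tracker, can be sketched for a single image axis as follows. The noise covariances and the measurement setup are illustrative assumptions, not values from the paper.

```python
# Minimal constant-acceleration Kalman filter for one image axis.
# The predicted position would centre the CT tracker's candidate patches.
import numpy as np

dt = 1.0  # one frame
# State: [position, velocity, acceleration]
F = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])   # only position is measured by the tracker
Q = np.eye(3) * 0.01              # process noise (assumed)
R = np.array([[1.0]])             # measurement noise (assumed)

def predict(x, P):
    """Propagate state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the tracker's measured position z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 2.0, 0.0]), np.eye(3)
x_pred, P_pred = predict(x, P)              # guides candidate-patch sampling
x_new, P_new = update(x_pred, P_pred, np.array([2.5]))
```

    Sampling candidate patches around `x_pred` instead of the previous frame's position is what helps the tracker cope with high-speed target motion.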

  13. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  14. Design and validation of an intelligent wheelchair towards a clinically-functional outcome.

    PubMed

    Boucher, Patrice; Atrash, Amin; Kelouwani, Sousso; Honoré, Wormser; Nguyen, Hai; Villemure, Julien; Routhier, François; Cohen, Paul; Demers, Louise; Forget, Robert; Pineau, Joelle

    2013-06-17

    Many people with mobility impairments who require the use of powered wheelchairs have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained with the conventional driving mode. This analysis was performed with regular users who had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode.

  15. Intelligent robots for planetary exploration and construction

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1992-01-01

    Robots capable of practical applications in planetary exploration and construction will require real-time sensory-interactive goal-directed control systems. A reference model architecture based on the NIST Real-time Control System (RCS) for real-time intelligent control systems is suggested. RCS partitions the control problem into four basic elements: behavior generation (or task decomposition), world modeling, sensory processing, and value judgment. It clusters these elements into computational nodes that have responsibility for specific subsystems, and arranges these nodes in hierarchical layers such that each layer has characteristic functionality and timing. Planetary exploration robots should have mobility systems that can safely maneuver over rough surfaces at high speeds. Walking machines and wheeled vehicles with dynamic suspensions are candidates. The technology of sensing and sensory processing has progressed to the point where real-time autonomous path planning and obstacle avoidance behavior is feasible. Map-based navigation systems will support long-range mobility goals and plans. Planetary construction robots must have high strength-to-weight ratios for lifting and positioning tools and materials in six degrees-of-freedom over large working volumes. A new generation of cable-suspended Stewart platform devices and inflatable structures are suggested for lifting and positioning materials and structures, as well as for excavation, grading, and manipulating a variety of tools and construction machinery.

  16. Video rate color region segmentation for mobile robotic applications

    NASA Astrophysics Data System (ADS)

    de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline

    2005-08-01

    Color regions can be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing time. In this paper, we propose a new real-time (i.e., video-rate) color region segmentation, followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial wheeled conveyor robot. The performance of this algorithm, and a comparison with other methods in terms of result quality and processing time, are provided. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE Project, for which this segmentation was developed, and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentations.

  17. AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program

    NASA Astrophysics Data System (ADS)

    Gothard, Benny M.

    2002-02-01

    One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task: the military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (see "A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation"). The Mobile Autonomous Robot Software Self-Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec G4 PowerPC (PPC), a highly parallelized commercial Single Instruction Multiple Data (SIMD) processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against the previous Demo II / Semi-autonomous Surrogate Vehicle processing architectures, along with desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.

  18. Programmable near-infrared ranging system

    DOEpatents

    Everett, Jr., Hobart R.

    1989-01-01

    A high angular resolution ranging system particularly suitable for indoor applications involving mobile robot navigation and collision avoidance uses a programmable array of light emitters that can be sequentially incremented by a microprocessor. A plurality of adjustable-level threshold detectors are used in an optical receiver for detecting the threshold level of the light echoes produced when light emitted from one or more of the emitters is reflected by a target or object in the scan path of the ranging system.
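
    The patented scheme, sequentially incremented emitters combined with threshold detection of the echoes, might be sketched as follows. The emitter count, field of view, and the `read_echo` interface are hypothetical, chosen only to make the scanning loop concrete, and are not taken from the patent.

```python
# Hypothetical sketch of the sequential scan: step through the emitter
# array, fire each emitter, and record the bearings whose echo exceeds the
# receiver threshold.

def scan(read_echo, n_emitters=24, fov_deg=180.0, threshold=0.5):
    """Return (bearing_deg, echo_level) for each direction with a target."""
    hits = []
    for i in range(n_emitters):
        bearing = i * fov_deg / (n_emitters - 1)
        level = read_echo(i)        # fire emitter i, sample the detector
        if level >= threshold:      # adjustable-level threshold detection
            hits.append((bearing, level))
    return hits

# Fake receiver for testing: a target reflects strongly near emitter 6 only.
fake = lambda i: 0.9 if i == 6 else 0.1
print(scan(fake))  # a single echo above threshold
```

    The angular resolution of such a scan is set directly by the emitter spacing, which is why a densely packed programmable array gives high angular resolution at low cost.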

  19. An Intelligent Robotic Hospital Bed for Safe Transportation of Critical Neurosurgery Patients Along Crowded Hospital Corridors.

    PubMed

    Wang, Chao; Savkin, Andrey V; Clout, Ray; Nguyen, Hung T

    2015-09-01

    We present a novel design of an intelligent robotic hospital bed, named Flexbed, with autonomous navigation ability. The robotic bed is developed for fast and safe transportation of critical neurosurgery patients without changing beds. Flexbed is more efficient and safer during the transportation process than conventional hospital beds. Flexbed is able to avoid en-route obstacles with an efficient, easy-to-implement collision avoidance strategy when an obstacle is nearby, and to move towards its destination at maximum speed when there is no threat of collision. We present extensive simulation results of navigation of Flexbed in crowded hospital corridor environments with moving obstacles. Moreover, results of experiments with Flexbed in real-world scenarios are also presented and discussed.

  20. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    NASA Astrophysics Data System (ADS)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.

  1. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Team KuuKulgur waits to begin the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  2. A Concise Guide to Feature Histograms with Applications to LIDAR-Based Spacecraft Relative Navigation

    NASA Astrophysics Data System (ADS)

    Rhodes, Andrew P.; Christian, John A.; Evans, Thomas

    2017-12-01

    With the availability and popularity of 3D sensors, it is advantageous to re-examine the use of point cloud descriptors for the purpose of pose estimation and spacecraft relative navigation. One popular descriptor is the oriented unique repeatable clustered viewpoint feature histogram (OUR-CVFH), which is most often utilized in personal and industrial robotics to simultaneously recognize and navigate relative to an object. Recent research into using the OUR-CVFH descriptor for spacecraft navigation has produced favorable results. Since OUR-CVFH is the most recent innovation in a large family of feature histogram point cloud descriptors, discussions of parameter settings and insights into its functionality are spread among various publications and online resources. This paper organizes the history of feature histogram point cloud descriptors into a straightforward explanation of their evolution. The article compiles all the requisite information needed to implement OUR-CVFH in one location, and provides useful suggestions on how to tune the generation parameters. This work is beneficial for anyone interested in using this histogram descriptor for object recognition or navigation, be it in personal robotics or spacecraft navigation.

  3. Navigation concepts for MR image-guided interventions.

    PubMed

    Moche, Michael; Trampel, Robert; Kahn, Thomas; Busse, Harald

    2008-02-01

    The ongoing development of powerful magnetic resonance imaging techniques also allows for advanced possibilities to guide and control minimally invasive interventions. Various navigation concepts have been described for practically all regions of the body. The specific advantages and limitations of these concepts largely depend on the magnet design of the MR scanner and the interventional environment. Open MR scanners involve minimal patient transfer, which improves the interventional workflow and reduces the need for coregistration, i.e., the mapping of spatial coordinates between the imaging and intervention positions. Most diagnostic scanners, in contrast, do not allow the physician to guide the instrument inside the magnet, and consequently the patient needs to be moved out of the bore. Although adequate coregistration and navigation concepts for closed-bore scanners are technically more challenging, many developments are driven by the well-known capabilities of high-field systems and their better economic value. Advanced concepts such as multimodal overlays, augmented reality displays, and robotic assistance devices are still in their infancy but might propel the use of intraoperative navigation. The goal of this work is to give an update on MRI-based navigation and related techniques and to briefly discuss the clinical experience and limitations of some selected systems. (c) 2008 Wiley-Liss, Inc.

  4. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems.

    PubMed

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

    This work proposes principled strategies for self-adaptation in EEG-based brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and as a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised and recursively applied upon the arrival of evidence, i.e., user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of robot/environment (simulated or physical), the type of interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and reduces the user effort needed to accomplish the assigned tasks. Although the BCI requires more effort than the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs than for the keyboard interface. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for individual users. The proposed methods can be readily integrated into more advanced SC schemes and/or strategies for automatic BCI self-adaptation.
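
    The recursive Bayesian update at the core of such a GR method can be sketched generically: the belief over candidate goals is multiplied by the likelihood of each newly arrived piece of evidence and then renormalized. A minimal Python sketch with made-up goals and likelihood values (the paper's actual evidence models for user input and gaze are not reproduced here):

    ```python
    def bayes_update(belief, likelihoods):
        """One recursive Bayesian update of the belief over candidate goals.

        belief      -- dict goal -> prior probability (sums to 1)
        likelihoods -- dict goal -> P(evidence | goal) for the newly observed
                       evidence (e.g. a user input or a gaze fixation)
        """
        posterior = {g: belief[g] * likelihoods[g] for g in belief}
        z = sum(posterior.values())  # normalizing constant
        return {g: p / z for g, p in posterior.items()}

    # Three candidate goals, uniform prior; two pieces of evidence arrive in
    # sequence (the likelihood numbers are illustrative only).
    belief = {"door": 1/3, "window": 1/3, "desk": 1/3}
    belief = bayes_update(belief, {"door": 0.7, "window": 0.2, "desk": 0.1})
    belief = bayes_update(belief, {"door": 0.6, "window": 0.3, "desk": 0.1})
    print(max(belief, key=belief.get))  # -> door
    ```

    Applied recursively in this way, consistent evidence concentrates the belief mass on one goal, which a shared-control scheme can then use to assist the drive toward it.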

  5. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    NASA Astrophysics Data System (ADS)

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

    Objective. This work proposes principled strategies for self-adaptation in EEG-based brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and as a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised and recursively applied upon the arrival of evidence, i.e., user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of robot/environment (simulated or physical), the type of interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and reduces the user effort needed to accomplish the assigned tasks. Although the BCI requires more effort than the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs than for the keyboard interface. Significance. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for individual users. The proposed methods can be readily integrated into more advanced SC schemes and/or strategies for automatic BCI self-adaptation.

  6. Human-like robots for space and hazardous environments

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path-planning skills.

  7. Human-like robots for space and hazardous environments

    NASA Astrophysics Data System (ADS)

    The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path-planning skills.

  8. Extraction of user's navigation commands from upper body force interaction in walker assisted gait.

    PubMed

    Frizera Neto, Anselmo; Gallego, Juan A; Rocon, Eduardo; Pons, José L; Ceres, Ramón

    2010-08-05

    Advances in technology make it possible to incorporate sensors and actuators in rollators, building safer robots and extending the use of walkers to a more diverse population. This paper presents a new method for the extraction of navigation-related components from upper-body force interaction data in walker-assisted gait. A filtering architecture is designed to cancel: (i) the high-frequency noise caused by vibrations of the walker's structure due to irregularities in the terrain or the walker's wheels and (ii) the cadence-related force components caused by the user's trunk oscillations during gait. As a result, a third component related to the user's navigation commands is distinguished. For the cancellation of high-frequency noise, a Benedict-Bordner g-h filter was designed, presenting very low values for kinematic tracking error ((2.035 ± 0.358) × 10⁻² kgf) and delay ((1.897 ± 0.3697) × 10¹ ms). A Fourier Linear Combiner filtering architecture was implemented for the adaptive attenuation of about 80% of the cadence-related components' energy from the force data. This was done without compromising the information contained in the frequencies close to such notch filters. The presented methodology offers an effective cancellation of the undesired components from force data, allowing the system to extract the user's voluntary navigation commands in real time. Based on this real-time identification of the user's voluntary commands, a classical approach to the control architecture of the robotic walker is being developed, in order to obtain stable and safe user-assisted locomotion.
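
    A g-h filter of the kind used for the high-frequency noise stage predicts the next sample from a position and rate estimate and corrects both with the measurement residual; the Benedict-Bordner criterion then fixes h as a function of g. A minimal sketch (gains and data are illustrative, not the paper's tuned values):

    ```python
    def gh_filter(zs, x0, dx0, g, dt=1.0):
        """Benedict-Bordner g-h filter: smooths a noisy measurement stream.

        The Benedict-Bordner criterion ties the two gains together,
        h = g**2 / (2 - g), trading transient error against noise smoothing.
        """
        h = g**2 / (2.0 - g)
        x, dx, out = x0, dx0, []
        for z in zs:
            x_pred = x + dt * dx          # predict state one step forward
            r = z - x_pred                # measurement residual
            x = x_pred + g * r            # corrected position estimate
            dx = dx + h * r / dt          # corrected rate estimate
            out.append(x)
        return out

    # Noisy ramp signal: the filter should track the underlying trend.
    zs = [0.1, 1.2, 1.9, 3.1, 3.9, 5.2, 5.8, 7.1]
    est = gh_filter(zs, x0=0.0, dx0=1.0, g=0.5)
    ```

    Smaller values of g smooth more aggressively at the cost of added lag, which is why the paper reports both the tracking error and the delay of its tuned design.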

  9. TH-C-17A-06: A Hardware Implementation and Evaluation of Robotic SPECT: Toward Molecular Imaging Onboard Radiation Therapy Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, S; Touch, M; Bowsher, J

    Purpose: To construct a robotic SPECT system and demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch. The system has potential for on-board functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was developed utilizing a Digirad 2020tc detector and a KUKA KR150-L110 robot. An imaging study was performed with the PET CT Phantom, which includes 5 spheres: 10, 13, 17, 22 and 28 mm in diameter. The sphere-to-background Tc-99m concentration ratio was 6:1. The phantom was placed on a flat-top couch. SPECT projections were acquired with a parallel-hole collimator and a single-pinhole collimator. The robotic system navigated the detector, tracing the flat-top couch to maintain the closest possible proximity to the phantom. For image reconstruction, detector trajectories were described by six parameters: radius-of-rotation, x and z detector shifts, and detector rotation θ, tilt ϕ and twist γ. These six parameters were obtained from the robotic system by calibrating the robot base and tool coordinates. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing the impact of the flat-top couch on the detector-to-COR (center-of-rotation) distance. In acquisitions with background at 1/6th sphere activity concentration, photopeak contamination was heavy, yet the 17, 22, and 28 mm diameter spheres were readily observed with the parallel-hole imaging, and the single, targeted sphere (28 mm diameter) was readily observed in the pinhole region-of-interest (ROI) imaging. Conclusion: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot's inherent coordinate frame could be an effective means to estimate detector pose for use in SPECT image reconstruction. PHS/NIH/NCI grant R21-CA156390-01A1.

  10. Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

    PubMed Central

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ mobile robotics platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals. PMID:24348434

  11. Human-like robots for space and hazardous environments

    NASA Technical Reports Server (NTRS)

    Cogley, Allen; Gustafson, David; White, Warren; Dyer, Ruth; Hampton, Tom (Editor); Freise, Jon (Editor)

    1990-01-01

    The three-year goal for this NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation and path-planning skills. These goals came from the concept that the robot should have the abilities of both a planetary rover and a hazardous waste site scout.

  12. Human-like robots for space and hazardous environments

    NASA Astrophysics Data System (ADS)

    Cogley, Allen; Gustafson, David; White, Warren; Dyer, Ruth; Hampton, Tom; Freise, Jon

    The three-year goal for this NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation and path-planning skills. These goals came from the concept that the robot should have the abilities of both a planetary rover and a hazardous waste site scout.

  13. Localization and Mapping Using a Non-Central Catadioptric Camera System

    NASA Astrophysics Data System (ADS)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider, potentially 360°, field of view allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be considered one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
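
    Once the platform poses are known, mapping via epipolar geometry amounts to triangulating matched image points from two (or more) views. A minimal sketch of linear (DLT) triangulation with synthetic data, a standard way to implement this step (the catadioptric projection model is not reproduced here; simple perspective cameras are assumed for illustration):

    ```python
    import numpy as np

    def triangulate(P1, P2, u1, u2):
        """Linear (DLT) triangulation of one 3D point from two views.

        P1, P2 -- 3x4 camera projection matrices of the localized platform
        u1, u2 -- the (x, y) normalized image observations of the same point
        """
        A = np.vstack([
            u1[0] * P1[2] - P1[0],
            u1[1] * P1[2] - P1[1],
            u2[0] * P2[2] - P2[0],
            u2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)       # least-squares null vector of A
        X = Vt[-1]
        return X[:3] / X[3]               # dehomogenize

    # Synthetic check: two axis-aligned cameras observing the point (1, 2, 10).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 m in x
    X_true = np.array([1.0, 2.0, 10.0])
    u1 = X_true[:2] / X_true[2]
    u2 = (X_true - [1, 0, 0])[:2] / X_true[2]
    print(triangulate(P1, P2, u1, u2))    # recovers the point (1, 2, 10)
    ```

    Repeating this over many matched points, with poses refined at each step, yields the iterative positioning-and-mapping loop the abstract describes.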

  14. Robotic vehicles for planetary exploration

    NASA Astrophysics Data System (ADS)

    Wilcox, Brian; Matthies, Larry; Gennery, Donald; Cooper, Brian; Nguyen, Tam; Litwin, Todd; Mishkin, Andrew; Stone, Henry

    A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunokhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.

  15. Robotic vehicles for planetary exploration

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Matthies, Larry; Gennery, Donald; Cooper, Brian; Nguyen, Tam; Litwin, Todd; Mishkin, Andrew; Stone, Henry

    1992-01-01

    A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunokhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.

  16. A Robot to Help Make the Rounds

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This paper presents a discussion of the Pyxis HelpMate SecurePak (SP) trackless robotic courier, designed by Transitions Research Corporation to navigate autonomously throughout medical facilities, transporting pharmaceuticals, laboratory specimens, equipment, supplies, meals, medical records, and radiology films between support departments and nursing floors.

  17. Kennedy Space Center, Space Shuttle Processing, and International Space Station Program Overview

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott Alan

    2011-01-01

    Topics include: International Space Station assembly sequence; Electrical power substation; Thermal control substation; Guidance, navigation and control; Command data and handling; Robotics; Human and robotic integration; Additional modes of re-supply; NASA and International partner control centers; Space Shuttle ground operations.

  18. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) to explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; and (2) to study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  19. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  20. Thermal Image Sensing Model for Robotic Planning and Search.

    PubMed

    Castro Jiménez, Lídice E; Martínez-García, Edgar A

    2016-08-08

    This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost home-made IR passive visual sensor. The sensor's capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce an exact nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation control in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
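
    The attraction/repulsion scheme can be illustrated generically: a sine of the bearing error pulls the heading toward the source, a cosine term pushes it away from an observed obstacle, and an exponential model maps IR intensity to distance. A minimal Python sketch with hypothetical gains and model coefficients (not the paper's fitted values or its exact control law):

    ```python
    import math

    def ir_distance(intensity, a=4.2, b=-0.015):
        """Exponential intensity-to-distance model, d = a * exp(b * I).

        Coefficients a, b are illustrative; in practice they are fitted
        to the characterized sensor data.
        """
        return a * math.exp(b * intensity)

    def steer(heading, goal_bearing, obstacle_bearing=None,
              k_att=1.0, k_rep=0.8):
        """One control step: sine attraction toward the IR source plus a
        cosine repulsion term against an observed obstacle.

        All angles are in radians in the world frame; the gains and the
        sign convention of the repulsion term are illustrative only.
        Returns an angular acceleration command.
        """
        err = goal_bearing - heading
        a = k_att * math.sin(err)            # attraction toward the source
        if obstacle_bearing is not None:
            rel = obstacle_bearing - heading
            a -= k_rep * math.cos(rel)       # repulsion, strongest dead ahead
        return a
    ```

    With this shape, the attractive term vanishes when the robot points at the source, while an obstacle directly ahead reduces the commanded turn toward it.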

  1. Reasoning and planning in dynamic domains: An experiment with a mobile robot

    NASA Technical Reports Server (NTRS)

    Georgeff, M. P.; Lansky, A. L.; Schoppers, M. J.

    1987-01-01

    Progress made toward having an autonomous mobile robot reason and plan complex tasks in real-world environments is described. To cope with the dynamic and uncertain nature of the world, researchers use a highly reactive system to which is attributed attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the system need only be partly elaborated before it decides to act. This allows the system to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of over-commitment common to previous planners. In addition, the system is continuously reactive and has the ability to change its goals and intentions as situations warrant. Thus, while the system architecture allows for reasoning about means and ends in much the same way as traditional planners, it also possesses the reactivity required for survival in complex real-world domains. The system was tested using SRI's autonomous robot (Flakey) in a scenario involving navigation and the performance of an emergency task in a space station.

  2. Image navigation as a means to expand the boundaries of fluorescence-guided surgery

    NASA Astrophysics Data System (ADS)

    Brouwer, Oscar R.; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L.; Wendler, Thomas; Valdés-Olmos, Renato A.; van der Poel, Henk G.; van Leeuwen, Fijs W. B.

    2012-05-01

    Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.

  3. Development of technology for creating intelligent control systems for power plants and propulsion systems for marine robotic systems

    NASA Astrophysics Data System (ADS)

    Iakovleva, E. V.; Momot, B. A.

    2017-10-01

    The object of this study is to develop a power plant and an electric propulsion control system for autonomous, remotely controlled vessels. The tasks of the study are as follows: to assess the reasonability of using remotely controlled vessels, and to define the navigation requirements for this type of vessel. In addition, the paper presents an analysis of technical diagnostics systems. The developed electric propulsion control systems should provide improved reliability and efficiency of the propulsion complex to ensure the profitability of remotely controlled vessels.

  4. A new minimally invasive heart surgery instrument for atrial fibrillation treatment: first in vitro and animal tests.

    PubMed

    Abadie, J; Faure, A; Chaillet, N; Rougeot, P; Beaufort, D; Goldstein, J P; Finlay, P A; Bogaerts, G

    2006-06-01

    The paper presents a new robotic system for beating-heart surgery. The final goal of this project is to develop a tele-operated system for the thoracoscopic treatment of patients with atrial fibrillation. The system consists of a robot that moves an innovative end-effector used to create ablation lines as in the Cox-Maze technique. The device is an electrode mesh that is introduced into the thorax through a trocar and is deployed inside the left atrium, where it can create selective ablation lines at any atrial region using radio frequency. The current version of the umbrella has 22 electrodes. Using visual feedback from an ultrasound-based navigation system, the surgeon can choose which electrodes on the mesh to activate. Once the umbrella is in contact with the endocardium of the left atrium, at the expected position, the surgeon activates the chosen electrodes sequentially. The umbrella can then be moved to another position. In vitro and in vivo animal tests have been carried out in order to test and improve the instrument, the robotic system and the operative procedure. The performed trials proved the ability of the system to treat atrial fibrillation. More in vivo tests are currently being performed to make the robot and its device ready for clinical use. Copyright 2006 John Wiley & Sons, Ltd.

  5. Mars Rover Navigation Results Using Sun Sensor Heading Determination

    NASA Technical Reports Server (NTRS)

    Volpe, Richard

    1998-01-01

    Upcoming missions to the surface of Mars will use mobile robots to traverse long distances from the landing site. To prepare for these missions, the prototype rover, Rocky 7, has been tested in desert field trials conducted with a team of planetary scientists. While several new capabilities have been demonstrated, foremost among these was sun-sensor based traversal of natural terrain totaling a distance of one kilometer. This paper describes navigation results obtained in the field tests, where cross-track error was only 6% of distance traveled. Comparison with previous results of other planetary rover systems shows this to be a significant improvement.
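
    Sun-sensor heading determination combines an ephemeris-derived sun azimuth (known from time and landing-site position) with the sun direction measured in the rover's body frame. A minimal sketch under assumed angle conventions (0 = straight ahead, positive clockwise; this is an illustration, not Rocky 7's actual implementation):

    ```python
    import math

    def heading_from_sun(sun_azimuth_world, sun_bearing_body):
        """Absolute rover heading from a sun-sensor reading.

        sun_azimuth_world -- sun azimuth from the ephemeris, time, and
                             rover position (radians, from north)
        sun_bearing_body  -- sun direction measured by the sensor in the
                             rover frame (radians, 0 = straight ahead)
        Returns heading in [0, 2*pi).
        """
        return (sun_azimuth_world - sun_bearing_body) % (2 * math.pi)
    ```

    Because the absolute reference resets at every fix, heading error does not accumulate with distance the way gyro or odometry drift does, which is consistent with the low cross-track error reported in the field trials.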

  6. Design and control of an embedded vision guided robotic fish with multiple control surfaces.

    PubMed

    Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to that of the swimming robot propelled by a single control surface.

  7. Initial clinical application of a robotically steerable catheter system in endovascular aneurysm repair.

    PubMed

    Riga, Celia; Bicknell, Colin; Cheshire, Nicholas; Hamady, Mohamad

    2009-04-01

    To report the initial clinical use of a robotically steerable catheter during endovascular aneurysm repair (EVAR) in order to assess this novel and innovative approach in a clinical setting. Following a series of in-vitro studies and procedure rehearsals using a pulsatile silicon aneurysm model, a 78-year-old man underwent robot-assisted EVAR of a 5.9-cm infrarenal abdominal aortic aneurysm. During the standard procedure, a 14-F remotely steerable robotic catheter was used to successfully navigate through the aneurysm sac, cannulate the contralateral limb of a bifurcated stent-graft under fluoroscopic guidance, and place stiff wires using fine and controlled movements. The procedure was completed successfully. There were no postoperative complications, and computed tomographic angiography prior to discharge and at 3 months confirmed that the stent-graft remained in good position, with no evidence of an endoleak. EVAR using robotically-steerable catheters is feasible. This technology may simplify more complex procedures by increasing the accuracy of vessel cannulation and perhaps reduce procedure times and radiation exposure to the patient and operator.

  8. Design and Control of an Embedded Vision Guided Robotic Fish with Multiple Control Surfaces

    PubMed Central

    Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to that of the swimming robot propelled by a single control surface. PMID:24688413

  9. Introduction to haptics for neurosurgeons.

    PubMed

    L'Orsa, Rachael; Macnab, Chris J B; Tavakoli, Mahdi

    2013-01-01

    Robots are becoming increasingly relevant to neurosurgeons, extending a neurosurgeon's physical capabilities, improving navigation within the surgical landscape when combined with advanced imaging, and propelling the movement toward minimally invasive surgery. Most surgical robots, however, isolate surgeons from the full range of human senses during a procedure. This forces surgeons to rely on vision alone for guidance through the surgical corridor, which limits the capabilities of the system, requires significant operator training, and increases the surgeon's workload. Incorporating haptics into these systems, i.e., enabling the surgeon to "feel" forces experienced by the tool tip of the robot, could render these limitations obsolete by making the robot feel more like an extension of the surgeon's own body. Although the use of haptics in neurosurgical robots is still mostly the domain of research, neurosurgeons who keep abreast of this emerging field will be more prepared to take advantage of it as it becomes more prevalent in operating theaters. Thus, this article serves as an introduction to the field of haptics for neurosurgeons. We not only outline the current and future benefits of haptics but also introduce concepts in the fields of robotic technology and computer control. This knowledge will allow readers to be better aware of limitations in the technology that can affect performance and surgical outcomes, and "knowing the right questions to ask" will be invaluable for surgeons who have purchasing power within their departments.

  10. Using arm and hand gestures to command robots during stealth operations

    NASA Astrophysics Data System (ADS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-06-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high-density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
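    A gesture-decoding pipeline of this general shape often computes windowed RMS amplitude per EMG channel and feeds the feature vector to a classifier. The following is a hypothetical sketch, not the BioSleeve's actual decoder; a nearest-centroid rule stands in for whatever classifier the authors used:

```python
import math

def rms_features(window):
    """window: list of per-channel sample lists -> RMS amplitude per channel."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def train_centroids(examples):
    """examples: dict label -> list of feature vectors; returns per-label centroid."""
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in examples.items()}

def classify(features, centroids):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))
```

    Dynamic gestures (e.g. a hand wave) additionally need temporal features, such as a sequence of these windows or fused IMU motion data, which this static sketch omits.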

  11. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high-density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  12. Architectural Design for a Mars Communications and Navigation Orbital Infrastructure

    NASA Technical Reports Server (NTRS)

    Cesarone, R. J.; Hastrup, R. C.; Bell, D. J.; Roncoli, R. B.; Nelson, K.

    1999-01-01

    The planet Mars has become the focus of an intensive series of missions that span decades of time, a wide array of international agencies and an evolution from robotics to humans. The number of missions to Mars at any one time, and over a period of time, is unprecedented in the annals of space exploration. Meeting the operational needs of this exploratory fleet will require the implementation of new architectural concepts for communications and navigation. To this end, NASA's Jet Propulsion Laboratory has begun to define and develop a Mars communications and navigation orbital infrastructure. This architecture will make extensive use of assets at Mars, as well as traditional Earth-based assets such as the Deep Space Network (DSN). Indeed, the total system can be thought of as an extension of DSN nodes and services to the Mars in-situ region. The concept has been likened to the beginnings of an interplanetary Internet that will bring the exploration of Mars right into our living rooms. The paper will begin with a high-level overview of the concept for the Mars communications and navigation infrastructure. Next, the mission requirements will be presented. These will include the relatively near-term needs of robotic landers, rovers, ascent vehicles, balloons, airplanes, and possibly orbiting, arriving and departing spacecraft. Requirements envisioned for the human exploration of Mars will also be described. The important effects of Mars orbit design trades on telecommunications and navigation capabilities will be summarized, and the baseline infrastructure will be described. A roadmap of NASA's plan to evolve this infrastructure over time will be shown. Finally, launch considerations and delivery to Mars will be briefly treated.

  13. A multimodal interface for real-time soldier-robot teaming

    NASA Astrophysics Data System (ADS)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real-time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  14. Autonomous mobile platform with simultaneous localisation and mapping system for patrolling purposes

    NASA Astrophysics Data System (ADS)

    Mitka, Łukasz; Buratowski, Tomasz

    2017-10-01

    This work describes an autonomous mobile platform for supervision and surveillance purposes. The system can be adapted for mounting on different types of vehicles. The platform is based on a SLAM navigation system which performs the localization task. Sensor fusion including laser scanners, an inertial measurement unit (IMU), odometry and GPS allows the system to determine its position reliably and precisely. The platform is able to create a 3D model of a supervised area and export it as a point cloud. The system can operate both indoors and outdoors, as the navigation algorithm is resistant to typical localization errors caused by wheel slippage or temporary GPS signal loss. The system is equipped with a path-planning module which operates in two modes. The first mode is for periodic observation of points in a selected area. The second mode is activated in case of an alarm: when triggered, the platform moves along the fastest route to the place of the alert. Path planning is always performed online using the most current scans, so the platform is able to adjust its trajectory to environmental changes or obstacles that are in motion. The control algorithms are developed under the Robot Operating System (ROS), since it comes with drivers for many devices used in robotics. Such a solution allows the system to be extended with any type of sensor and its data incorporated into the area model. The proposed system can be ported to other existing robotic platforms or used to develop a new platform dedicated to a specific kind of surveillance. One use case is to patrol an area, such as an airport or metro station, in search of dangerous substances or suspicious objects and, in case of detection, instantly inform security forces. A second use case is tele-operation in hazardous areas for inspection purposes.
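    Fusing drift-prone dead reckoning (gyro/odometry) with an absolute reference (e.g. GPS course or scan matching) is commonly done with a complementary filter. The heading-only sketch below is a generic illustration, not the paper's fusion algorithm; the function name and the blend factor are assumptions:

```python
def fuse_heading(heading, gyro_rate, abs_heading, dt, alpha=0.98):
    """Complementary filter for heading (radians).
    Integrating the gyro rate gives a smooth short-term estimate but drifts;
    blending in a small fraction (1 - alpha) of the absolute heading
    cancels the drift over time."""
    predicted = heading + gyro_rate * dt
    return alpha * predicted + (1.0 - alpha) * abs_heading
```

    Full systems such as the one described typically run an EKF or graph-based fusion over all sensors, but the same principle applies: trust proprioception over short horizons and absolute references over long ones, which is what makes the navigation robust to wheel slippage and temporary GPS loss.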

  15. Bioinspired microrobots

    NASA Astrophysics Data System (ADS)

    Palagi, Stefano; Fischer, Peer

    2018-06-01

    Microorganisms can move in complex media, respond to the environment and self-organize. The field of microrobotics strives to achieve these functions in mobile robotic systems of sub-millimetre size. However, miniaturization of traditional robots and their control systems to the microscale is not a viable approach. A promising alternative strategy in developing microrobots is to implement sensing, actuation and control directly in the materials, thereby mimicking biological matter. In this Review, we discuss design principles and materials for the implementation of robotic functionalities in microrobots. We examine different biological locomotion strategies, and we discuss how they can be artificially recreated in magnetic microrobots and how soft materials improve control and performance. We show that smart, stimuli-responsive materials can act as on-board sensors and actuators and that 'active matter' enables autonomous motion, navigation and collective behaviours. Finally, we provide a critical outlook for the field of microrobotics and highlight the challenges that need to be overcome to realize sophisticated microrobots, which one day might rival biological machines.

  16. Current status of endovascular catheter robotics.

    PubMed

    Lumsden, Alan B; Bismuth, Jean

    2018-06-01

    In this review, we will detail the evolution of endovascular therapy as the basis for the development of catheter-based robotics. In parallel, we will outline the evolution of robotics in the surgical space and how the convergence of technology and the entrepreneurs who push this evolution have led to the development of endovascular robots. The current state of the art and future directions and potential are summarized for the reader. Information in this review has been drawn primarily from our personal clinical and preclinical experience in use of catheter robotics, coupled with some ground-breaking work reported from a few other major centers that have embraced the technology's capabilities and opportunities. Several case studies demonstrating the unique capabilities of a precisely controlled catheter are presented. Most of the preclinical work was performed in the advanced imaging and navigation laboratory. In this unique facility, the interface of advanced imaging techniques and robotic guidance is being explored. Although this approach to navigation inside the endovascular space is very high-tech, we have conveyed the kind of opportunities that this technology affords to integrate 3D imaging and 3D control. Further, we present the opportunity of semi-autonomous motion of these devices to a target. For the interventionist, enhanced precision can be achieved in a nearly radiation-free environment.

  17. Thorough exploration of complex environments with a space-based potential field

    NASA Astrophysics Data System (ADS)

    Kenealy, Alina; Primiano, Nicholas; Keyes, Alex; Lyons, Damian M.

    2015-01-01

    Robotic exploration, for the purposes of search and rescue or explosive device detection, can be improved by using a team of multiple robots. Potential field navigation methods offer natural and efficient distributed exploration algorithms in which team members are mutually repelled to spread out and cover the area efficiently. However, they also suffer from local minima issues. Liu and Lyons proposed a Space-Based Potential Field (SBPF) algorithm that disperses robots efficiently and also ensures they are driven in a distributed fashion to cover complex geometry. In this paper, the approach is modified to handle two problems with the original SBPF method: fast exploration of enclosed spaces, and fast navigation around convex obstacles. Firstly, a "gate-sensing" function was implemented. The function draws the robot to narrow openings, such as doors or corridors that it might otherwise pass by, to ensure every room can be explored. Secondly, an improved obstacle-field "conveyor belt" function was developed which allows the robot to avoid walls and barriers while using their surface as a motion guide to avoid being trapped. Simulation results, where the modified SBPF program controls the MobileSim Pioneer 3-AT simulator program, are presented for a selection of maps that capture difficult-to-explore geometries. Physical robot results are also presented, where a team of Pioneer 3-AT robots is controlled by the modified SBPF program. Data collected prior to the improvements, new simulation results, and robot experiments are presented as evidence of performance improvements.
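    The classic potential-field computation underlying such methods sums an attractive pull toward the goal with repulsive pushes from obstacles and teammates. The generic 2D sketch below illustrates that baseline only; it is not the SBPF algorithm itself, whose space-based field, gate-sensing, and conveyor-belt terms differ, and all gains are assumed values:

```python
import math

def potential_force(pos, goal, obstacles, robots, k_att=1.0, k_rep=0.5, d0=2.0):
    """2D potential-field force: attraction toward the goal plus repulsion
    from obstacles and teammate robots within influence radius d0.
    pos, goal: (x, y); obstacles, robots: iterables of (x, y)."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in list(obstacles) + list(robots):
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            # standard repulsive-gradient magnitude, applied along the unit
            # vector pointing away from the obstacle
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * (dx / d)
            fy += mag * (dy / d)
    return fx, fy
```

    Treating teammates as repulsors is what spreads the team out; the local minima arise where these terms cancel, which is precisely what the gate-sensing and conveyor-belt modifications are designed to escape.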

  18. ODYSSEUS autonomous walking robot: The leg/arm design

    NASA Technical Reports Server (NTRS)

    Bourbakis, N. G.; Maas, M.; Tascillo, A.; Vandewinckel, C.

    1994-01-01

    ODYSSEUS is an autonomous walking robot, which makes use of three wheels and three legs for its movement in the free navigation space. More specifically, it makes use of its autonomous wheels to move around in an environment where the surface is smooth and not uneven. However, when there are low obstacles, stairs, or slight unevenness in the navigation environment, the robot makes use of both wheels and legs to travel efficiently. In this paper we present the detailed hardware design and the simulated behavior of the extended leg/arm part of the robot, since it plays a very significant role in the robot's actions (movements, selection of objects, etc.). In particular, the leg/arm consists of three major parts: The first part is a pipe attached to the robot base with a flexible 3-D joint. This pipe has a rotating bar as an extension, which terminates in a 3-D flexible joint. The second part of the leg/arm is also a pipe similar to the first. The extended bar of the second part ends at a 2-D joint. The last part of the leg/arm is a clip-hand. It is used for grasping small, lightweight objects and, when in 'closed' mode, serves as a supporting part of the robot leg. The entire leg/arm part is controlled and synchronized by a microcontroller (68HC11) attached to the robot base.

  19. Stanford Aerospace Research Laboratory research overview

    NASA Technical Reports Server (NTRS)

    Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.

    1993-01-01

    Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free-flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators have been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.

  20. A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring

    PubMed Central

    Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio

    2016-01-01

    This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State-of-the-art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user's point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regard to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505
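    The "virtual sensor network" idea can be sketched as mapping position-stamped readings from the roving robots onto a fixed set of virtual node locations, so clients see a stationary sensor grid. The function and threshold below are illustrative assumptions, not the paper's implementation:

```python
import math

def assign_readings(readings, nodes, max_dist=1.5):
    """Map position-stamped robot readings to fixed virtual sensor nodes.
    readings: list of (x, y, value); nodes: dict node_id -> (x, y).
    Each reading updates the nearest node within max_dist metres;
    readings too far from any node are discarded."""
    latest = {}
    for x, y, value in readings:
        best, best_d = None, max_dist
        for node_id, (nx, ny) in nodes.items():
            d = math.hypot(x - nx, y - ny)
            if d < best_d:
                best, best_d = node_id, d
        if best is not None:
            latest[best] = value
    return latest
```

    Reconfiguring the monitored layout then amounts to editing the node table rather than physically moving sensors, which is the scalability advantage the paper claims over wired sensor networks.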
