Magnetic resonance imaging compatible remote catheter navigation system with 3 degrees of freedom.
Tavallaei, M A; Lavdas, M K; Gelman, D; Drangova, M
2016-08-01
To facilitate MRI-guided catheterization procedures, we present an MRI-compatible remote catheter navigation system that allows remote navigation of steerable catheters with 3 degrees of freedom. The system consists of a user interface (master), a robot (slave), and an ultrasonic motor control servomechanism. The interventionalist applies conventional motions (axial, radial and plunger manipulations) on an input catheter in the master unit; this user input is measured and used by the servomechanism to control a compact catheter manipulating robot, such that it replicates the interventionalist's input motion on the patient catheter. The performance of the system was evaluated in terms of MRI compatibility (SNR and artifact), feasibility of remote navigation under real-time MRI guidance, and motion replication accuracy. Real-time MRI experiments demonstrated that the catheter was successfully navigated remotely to desired target references in all 3 degrees of freedom. The system had an absolute value error of [Formula: see text]1 mm in axial catheter motion replication over 30 mm of travel and [Formula: see text] for radial catheter motion replication over [Formula: see text]. The worst-case SNR drop was observed to be [Formula: see text]3 %; the robot did not introduce any artifacts in the MR images. An MRI-compatible compact remote catheter navigation system has been developed that allows remote navigation of steerable catheters with 3 degrees of freedom. The proposed system allows for safe and accurate remote catheter navigation, within conventional closed-bore scanners, without degrading MR image quality.
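The master-slave replication described above can be sketched as a simple proportional servo loop. This is only an illustration of the control pattern; the axis names, units, and gain are assumptions, not details from the paper.

```python
# Illustrative sketch of a master-slave motion replication servo:
# master encoders measure the interventionalist's axial, radial and
# plunger inputs; the servo drives the slave motors to match them.
# Axis names and the gain are hypothetical, not from the paper.

def replicate_motion(master_counts, slave_counts, gain=0.5):
    """Return per-axis velocity commands that drive the slave toward
    the master position (simple proportional control)."""
    commands = {}
    for axis in ("axial", "radial", "plunger"):
        error = master_counts[axis] - slave_counts[axis]
        commands[axis] = gain * error
    return commands

cmd = replicate_motion({"axial": 30.0, "radial": 90.0, "plunger": 5.0},
                       {"axial": 28.0, "radial": 88.0, "plunger": 5.0})
print(cmd)  # axial and radial commands are proportional to the 2.0 error
```

A real servo would also rate-limit the command and enforce workspace limits before driving the ultrasonic motors.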
Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois
2017-07-01
A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What they still lack to be effective remote telepresence systems for home care assistance are the capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors; in other words, minimizing cognitive load and maximizing situation awareness. This paper describes our approach of adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.
Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Large scale operational areas often require multiple service robots for coverage and task parallelism. In such scenarios, each robot keeps its individual map of the environment and serves specific areas of the map at different times. We propose a knowledge sharing mechanism for multiple robots in which one robot can inform other robots about changes in the map, such as blocked paths or new static obstacles encountered in specific areas. This symbiotic information sharing allows the robots to update remote areas of the map without having to explicitly navigate those areas, and to plan efficient paths. A node representation of paths is presented for seamless sharing of blocked path information. The transience of obstacles is modeled to track obstacles which might have been removed. A lazy information update scheme is presented in which only relevant information affecting the current task is updated for efficiency. The advantages of the proposed method for path planning are discussed against the traditional method with experimental results in both simulation and real environments. PMID:28678193
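The obstacle-transience idea described above can be sketched as a shared map in which a reported blockage's confidence decays over time, so obstacles that may have been removed are eventually forgotten. The class name, the exponential-decay model, and the half-life are assumptions for illustration, not the paper's actual scheme.

```python
class SharedObstacleMap:
    """Hypothetical sketch of remote obstacle knowledge sharing:
    robots report blocked path nodes, and obstacle confidence decays
    over time (exponential decay with a configurable half-life)."""

    def __init__(self, half_life=60.0):
        self.half_life = half_life
        self.blocked = {}  # node id -> time of most recent report

    def report_block(self, node, t):
        """Another robot reports that `node` was blocked at time t."""
        self.blocked[node] = t

    def confidence(self, node, now):
        """Belief that `node` is still blocked, decaying toward 0."""
        if node not in self.blocked:
            return 0.0
        age = now - self.blocked[node]
        return 0.5 ** (age / self.half_life)

m = SharedObstacleMap(half_life=60.0)
m.report_block("corridor_7", t=0.0)
print(m.confidence("corridor_7", now=60.0))  # 0.5 after one half-life
```

A planner could then treat nodes above a confidence threshold as blocked and replan around them without revisiting the area.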
Nölker, Georg; Gutleben, Klaus-Jürgen; Muntean, Bogdan; Vogt, Jürgen; Horstkotte, Dieter; Dabiri Abkenari, Lara; Akca, Ferdi; Szili-Torok, Tamas
2012-12-01
Studies have shown that remote magnetic navigation is safe and effective for ablation of atrial arrhythmias, although optimal outcomes often require frequent manual manipulation of a circular mapping catheter. The Vdrive robotic system ('Vdrive') was designed for remote navigation of circular mapping catheters to enable a fully remote procedure. This study details the first human clinical experience with remote circular catheter manipulation in the left atrium. This was a prospective, multi-centre, non-randomized consecutive case series that included patients presenting for catheter ablation of left atrial arrhythmias. Remote systems were used exclusively to manipulate both the circular mapping catheter and the ablation catheter. Patients were followed through hospital discharge. Ninety-four patients were included in the study, including 23 with paroxysmal atrial fibrillation (AF), 48 with persistent AF, and 15 suffering from atrial tachycardias. The population was predominantly male (77%) with a mean age of 60.5 ± 11.7 years. The Vdrive was used for remote navigation between veins, creation of chamber maps, and gap identification with segmental isolation. The intended acute clinical endpoints were achieved in 100% of patients. Mean case time was 225.9 ± 70.5 min. Three patients (3.2%) crossed over to manual circular mapping catheter navigation. There were no adverse events related to the use of the remote manipulation system. The results of this study demonstrate that remote manipulation of a circular mapping catheter in the ablation of atrial arrhythmias is feasible and safe. Prospective randomized studies are needed to prove efficiency improvements over manual techniques.
NASA Astrophysics Data System (ADS)
Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi
This paper aims at constructing an efficient interface, similar to those widely used in daily life, to meet the need of many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force-feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force-feedback steering control and a wall of six monitors; it provides manual operation, similar to driving a car, for navigating a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor; it provides semi-autonomous operation, navigating the rescue robot by mouse clicks. Experimental results show that a novice volunteer can skillfully navigate a tank-type rescue robot through either interface after 20 to 30 minutes of practice. The steering wheel interface yields high navigation speed in open areas, regardless of the terrain and surface conditions of a disaster site. The mouse-screen interface is suited to precise navigation in complex structures while imposing little strain on operators. The two interfaces can be switched at any time, providing a combined, efficient navigation method.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Google glass-based remote control of a mobile robot
NASA Astrophysics Data System (ADS)
Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe
2016-05-01
In this paper, we present an approach to remote control of a mobile robot via Google Glass, a compact, multi-function wearable device. It provides a new human-machine interface (HMI) for controlling a robot without the need for a regular computer monitor, because the Google Glass micro projector can display live video of the robot's environment. To do this, we first develop a protocol to establish a Wi-Fi connection between Google Glass and a robot, and then implement five types of robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, controlled by sliding and clicking the touchpad located on the right temple. To demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
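The five touchpad-driven behaviors could be wired up along the lines below. The gesture names and the newline-terminated wire format are assumptions; the abstract does not specify the actual Wi-Fi protocol.

```python
# Hypothetical mapping from Glass touchpad gestures to the five robot
# behaviors named in the abstract. Gesture names and the wire format
# are illustrative assumptions, not the paper's actual protocol.

GESTURE_TO_COMMAND = {
    "tap":            "MOVE_FORWARD",
    "swipe_left":     "TURN_LEFT",
    "swipe_right":    "TURN_RIGHT",
    "two_finger_tap": "PAUSE",
    "swipe_down":     "MOVE_BACKWARD",
}

def encode_command(gesture):
    """Translate a touchpad gesture into a newline-terminated ASCII
    command string suitable for sending over a TCP socket."""
    try:
        return (GESTURE_TO_COMMAND[gesture] + "\n").encode("ascii")
    except KeyError:
        raise ValueError(f"unrecognized gesture: {gesture}")

print(encode_command("swipe_left"))  # b'TURN_LEFT\n'
```

On the robot side, a matching decoder would read lines from the socket and dispatch each command to the drive controller.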
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
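Recovering relative rotation and translation from reference-frame transformations can be illustrated in 2-D with a least-squares (Kabsch-style) alignment of two point patterns. The actual system works in 3-D with camera registration, so this is only a simplified sketch of the underlying geometry.

```python
import math

def relative_pose_2d(ref_pts, cur_pts):
    """Estimate the planar rotation (theta) and translation (tx, ty)
    that best map the reference laser-spot pattern onto the currently
    observed one (2-D least-squares alignment)."""
    n = len(ref_pts)
    cx_a = sum(p[0] for p in ref_pts) / n
    cy_a = sum(p[1] for p in ref_pts) / n
    cx_b = sum(p[0] for p in cur_pts) / n
    cy_b = sum(p[1] for p in cur_pts) / n
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(ref_pts, cur_pts):
        ax, ay = ax - cx_a, ay - cy_a   # center both patterns
        bx, by = bx - cx_b, by - cy_b
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    # translation = current centroid minus rotated reference centroid
    tx = cx_b - (cx_a * math.cos(theta) - cy_a * math.sin(theta))
    ty = cy_b - (cx_a * math.sin(theta) + cy_a * math.cos(theta))
    return theta, tx, ty

# A pure 90-degree rotation of three spots about the origin:
ref = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
cur = [(0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
theta, tx, ty = relative_pose_2d(ref, cur)
print(round(math.degrees(theta)))  # 90
```

The 3-D case replaces the scalar angle with a rotation matrix recovered by SVD, but the centroid-and-alignment structure is the same.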
Robotics in invasive cardiac electrophysiology.
Shurrab, Mohammed; Schilling, Richard; Gang, Eli; Khan, Ejaz M; Crystal, Eugene
2014-07-01
Robotic systems allow for mapping and ablation of different arrhythmia substrates, replacing hand maneuvering of intracardiac catheters with machine steering. Currently there are four commercially available robotic systems. The Niobe magnetic navigation system (Stereotaxis Inc., St Louis, MO) and the Sensei robotic navigation system (Hansen Medical Inc., Mountain View, CA) have established platforms with at least 10 years of clinical studies examining their efficacy and safety. The AMIGO Remote Catheter System (Catheter Robotics, Inc., Mount Olive, NJ) and Catheter Guidance Control and Imaging (Magnetecs, Inglewood, CA) are in earlier phases of implementation, with ongoing feasibility studies and some limited clinical studies. This review discusses the advantages and limitations of each existing system and outlines an ideal future robotic system that may combine the most promising features of the current ones.
Choi, Bongjae; Jo, Sungho
2013-01-01
This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: hybridizing simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953
Research on the inspection robot for cable tunnel
NASA Astrophysics Data System (ADS)
Xin, Shihao
2017-03-01
The robot consists of a mechanical obstacle-crossing part, a dual-mode communication part, a remote control part, and monitoring software. The obstacle-crossing part uses a tracked mobile mechanism, with auxiliary swing arms to ease the robot's design and installation. The communication part combines wired and wireless links, greatly extending the robot's communication range: wired communication is used when the robot is controlled over a long inspection range, and wireless communication otherwise. The remote control part handles the inspection robot's locomotion, navigation, positioning, and pan-tilt camera control; to improve operational reliability, an industrial PC was preliminarily selected as the control core and a hierarchical program structure as the design basis for the mobile body. The monitoring software is the core of the system: it provides preliminary fault diagnosis in place of simple manual judgment, so that the robot serves as a remote actuator and staff need only operate it remotely rather than being present at the scene. The four parts are independent of one another yet interrelated, achieving both structural independence and coherence and easing maintenance and coordinated operation. With real-time positioning and remote control, the robot greatly improves inspection operations; remote monitoring avoids direct contact between staff and the cable lines, reducing accident casualties, which has far-reaching significance for the safety of inspection work.
Remote navigation systems in electrophysiology.
Schmidt, Boris; Chun, Kyoung Ryul Julian; Tilz, Roland R; Koektuerk, Buelent; Ouyang, Feifan; Kuck, Karl-Heinz
2008-11-01
Today, atrial fibrillation (AF) is the dominant indication for catheter ablation in large electrophysiology (EP) centres. AF ablation strategies are complex and technically challenging. It is therefore desirable that technical innovations improve catheter stability to increase procedural success and, most importantly, increase safety by helping to avoid serious complications. The most promising technical innovation aimed at these goals is remote catheter navigation and ablation. To date, two different systems, the NIOBE magnetic navigation system (MNS, Stereotaxis, USA) and the Sensei robotic navigation system (RNS, Hansen Medical, USA), are commercially available. The following review introduces the basic principles of the systems, gives insight into the merits and demerits of remote navigation, and further focuses on the initial clinical experience at our centre with pulmonary vein isolation (PVI) procedures.
Agarwal, Rahul; Levinson, Adam W; Allaf, Mohamad; Makarov, Danil; Nason, Alex; Su, Li-Ming
2007-11-01
Remote presence is the ability of an individual to project himself from one location to another to see, hear, roam, talk, and interact just as if that individual were actually there. The objective of this study was to evaluate the efficacy and functionality of a novel mobile robotic telementoring system controlled by a portable laptop control station linked via broadband Internet connection. RoboConsultant (RemotePresence-7; InTouch Health, Sunnyvale, CA) was employed for the purpose of intraoperative telementoring and consultation during five laparoscopic and endoscopic urologic procedures. Robot functionality including navigation, zoom capability, examination of external and internal endoscopic camera views, and telestration were evaluated. The robot was controlled by a senior surgeon from various locations ranging from an adjacent operating room to an affiliated hospital 5 miles away. The RoboConsultant performed without connection failure or interruption in each case, allowing the consulting surgeon to immerse himself and navigate within the operating room environment and provide effective communication, mentoring, telestration, and consultation. RoboConsultant provided clear, real-time, and effective telementoring and telestration and allowed the operator to experience remote presence in the operating room environment as a surgical consultant. The portable laptop control station and wireless connectivity allowed the consultant to be mobile and interact with the operating room team from virtually any location. In the future, the remote presence provided by the RoboConsultant may provide useful and effective intraoperative consultation by expert surgeons located in remote sites.
Sandia National Laboratories proof-of-concept robotic security vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, J.J.; Jones, D.P.; Klarer, P.R.
1989-01-01
Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
Solar-based navigation for robotic explorers
NASA Astrophysics Data System (ADS)
Shillcutt, Kimberly Jo
2000-12-01
This thesis introduces the application of solar position and shadowing information to robotic exploration. Power is a critical resource for robots with remote, long-term missions, so this research focuses on the power generation capabilities of robotic explorers during navigational tasks, in addition to power consumption. Solar power is primarily considered, with the possibility of wind power also contemplated. Information about the environment, including the solar ephemeris, terrain features, time of day, and surface location, is incorporated into a planning structure, allowing robots to accurately predict shadowing and thus potential costs and gains during navigational tasks. By evaluating its potential to generate and expend power, a robot can extend its lifetime and accomplishments. The primary tasks studied are coverage patterns, with a variety of plans developed for this research. The use of sun, terrain and temporal information also enables new capabilities of identifying and following sun-synchronous and sun-seeking paths. Digital elevation maps are combined with an ephemeris algorithm to calculate the altitude and azimuth of the sun from surface locations, and to identify and map shadows. Solar navigation path simulators use this information to perform searches through two-dimensional space, while considering temporal changes. Step by step simulations of coverage patterns also incorporate time in addition to location. Evaluations of solar and wind power generation, power consumption, area coverage, area overlap, and time are generated for sets of coverage patterns, with on-board environmental information linked to the simulations. This research is implemented on the Nomad robot for the Robotic Antarctic Meteorite Search. Simulators have been developed for coverage pattern tests, as well as for sun-synchronous and sun-seeking path searches. 
Results of field work and simulations are reported and analyzed, with demonstrated improvements in efficiency, productivity and lifetime of robotic explorers, along with new solar navigation abilities.
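The shadow-mapping step described above, which combines a digital elevation map with the sun's altitude and azimuth, can be sketched as a ray march from a cell toward the sun. The grid conventions (rows increase southward), step size, and flat-terrain geometry are simplifying assumptions, not the thesis's actual implementation.

```python
import math

def is_shadowed(dem, cell, sun_az_deg, sun_alt_deg, cell_size=1.0):
    """March a ray from `cell` toward the sun across a digital
    elevation map (2-D list of heights, rows increasing southward);
    the cell is shadowed if terrain rises above the sun ray."""
    rows, cols = len(dem), len(dem[0])
    r, c = cell
    h0 = dem[r][c]
    # unit step toward the sun in grid coordinates (azimuth from north)
    dr = -math.cos(math.radians(sun_az_deg))
    dc = math.sin(math.radians(sun_az_deg))
    tan_alt = math.tan(math.radians(sun_alt_deg))
    fr, fc, dist = float(r), float(c), 0.0
    while True:
        fr, fc, dist = fr + dr, fc + dc, dist + cell_size
        i, j = int(round(fr)), int(round(fc))
        if not (0 <= i < rows and 0 <= j < cols):
            return False  # ray left the map without being blocked
        if dem[i][j] > h0 + dist * tan_alt:
            return True   # terrain rises above the sun ray

dem = [[0, 0, 0],
       [0, 0, 0],
       [0, 5, 0]]  # a 5 m ridge due south of the middle cell
# Sun low (10 deg altitude) due south (azimuth 180): ridge shadows it.
print(is_shadowed(dem, (1, 1), 180.0, 10.0))  # True
```

Sweeping this test over every cell for a given time of day yields the shadow map used to score candidate coverage paths for solar power generation.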
Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
MANCINI,THOMAS R.
2001-04-01
Mapping of unknown industrial plant using ROS-based navigation mobile robot
NASA Astrophysics Data System (ADS)
Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.
2017-10-01
This research examines how humans work with a teleoperated unmanned mobile robot to inspect an industrial plant area, producing a 2D/3D map for further critical evaluation. The experiment focuses on two parts: how the human and robot interact remotely using a robust method, and how the robot perceives its surroundings as a 2D/3D map. ROS (Robot Operating System) was utilized during development and implementation, providing a robust data communication method in the form of messages and topics. RGBD SLAM performs the visual mapping function to construct the 2D/3D map using a Kinect sensor. The results showed that the teleoperated mobile robot system successfully extends the operator's perspective for remote surveillance of a large industrial plant area. It was concluded that the proposed work is a robust solution for mapping a large, unknown building under construction.
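The message-and-topic communication pattern the abstract relies on can be illustrated with a minimal pure-Python publish/subscribe bus. Real code would use rospy publishers, subscribers, and standard message types such as `geometry_msgs/Twist`; the topic name and message fields here are illustrative.

```python
# Minimal pure-Python sketch of ROS's publish/subscribe pattern.
# Real code would use rospy; topic and field names are illustrative.

class TopicBus:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for cb in self.subscribers.get(topic, []):
            cb(message)

bus = TopicBus()
received = []
bus.subscribe("/cmd_vel", received.append)
bus.publish("/cmd_vel", {"linear_x": 0.2, "angular_z": 0.0})
print(received[0]["linear_x"])  # 0.2
```

In the actual system the teleoperation station publishes velocity commands on one topic while the robot publishes Kinect depth frames and map updates on others, decoupling the two sides of the remote interaction.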
Real-time visual mosaicking and navigation on the seafloor
NASA Astrophysics Data System (ADS)
Richmond, Kristof
Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. 
Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon , where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
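The information-filter formulation can be illustrated with a scalar, one-dimensional fusion of a dead-reckoned frame position and a visual registration measurement. The real estimator is a sparse, multi-frame linear information filter over the whole mosaic, so this is only a toy sketch of why the information form is convenient: independent information simply adds.

```python
# Toy 1-D sketch of information-form (inverse-covariance) fusion:
# a dead-reckoned estimate of an image frame's position is combined
# with a visual registration measurement against the mosaic.

def fuse(prior_mean, prior_var, meas, meas_var):
    """Fuse a Gaussian prior with a Gaussian measurement in
    information form; returns (posterior mean, posterior variance)."""
    info = 1.0 / prior_var + 1.0 / meas_var            # information adds
    info_vec = prior_mean / prior_var + meas / meas_var
    return info_vec / info, 1.0 / info

# Dead reckoning: frame at 10.0 m (variance 4.0).
# Visual registration: frame at 9.0 m (variance 1.0).
mean, var = fuse(10.0, 4.0, 9.0, 1.0)
print(round(mean, 2), round(var, 2))  # 9.2 0.8
```

The more precise visual measurement dominates the fused estimate, and the posterior variance is smaller than either input; in the full filter the same additive update keeps the multi-frame information matrix sparse, which is what makes large mosaics tractable in real time.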
Technological advances in robotic-assisted laparoscopic surgery.
Tan, Gerald Y; Goel, Raj K; Kaouk, Jihad H; Tewari, Ashutosh K
2009-05-01
In this article, the authors describe the evolution of urologic robotic systems and the current state-of-the-art features and existing limitations of the da Vinci S HD System (Intuitive Surgical, Inc.). They then review promising innovations in scaling down the footprint of robotic platforms, the early experience with mobile miniaturized in vivo robots, advances in endoscopic navigation systems using augmented reality technologies and tracking devices, the emergence of technologies for robotic natural orifice transluminal endoscopic surgery and single-port surgery, advances in flexible robotics and haptics, the development of new virtual reality simulator training platforms compatible with the existing da Vinci system, and recent experiences with remote robotic surgery and telestration.
Autonomous exploration and mapping of unknown environments
NASA Astrophysics Data System (ADS)
Owens, Jason; Osteen, Phil; Fields, MaryAnne
2012-06-01
Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrarily complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. In a typical mission, an unmanned system is directed from a distance to enter an unknown building, sense the internal structure, and, barring additional tasks, create a 2-D map of the building while in situ. This map provides a useful and intuitive representation of the environment for the remote operator. We have integrated a robust mapping and exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the interior, report significant features to the operator, and generate a consistent map - all while maintaining localization.
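A standard building block for exploration systems of this kind is frontier detection on a 2-D occupancy grid: free cells that border unknown space are the candidate goals that drive the robot into unexplored interior. The sketch below is an assumption-laden illustration (the cell encoding and helper are invented here), not the authors' algorithm.

```python
# Occupancy-grid cell values (illustrative convention, not the paper's):
FREE, OCC, UNKNOWN = 0, 1, -1

def frontiers(grid):
    """Return the set of free cells 4-adjacent to unknown space --
    the candidate exploration goals."""
    rows, cols = len(grid), len(grid[0])
    out = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    out.add((r, c))
                    break
    return out

grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
f = frontiers(grid)   # free cells touching the unexplored right edge
```

Exploration then loops: pick a frontier (e.g. the nearest), navigate to it, update the grid from sensor data, and repeat until no frontiers remain.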
[Experimental study of angiography using vascular interventional robot-2(VIR-2)].
Tian, Zeng-min; Lu, Wang-sheng; Liu, Da; Wang, Da-ming; Guo, Shu-xiang; Xu, Wu-yi; Jia, Bo; Zhao, De-peng; Liu, Bo; Gao, Bao-feng
2012-06-01
To verify the feasibility and safety of a new vascular interventional robot system for use in vascular interventional procedures. The vascular interventional robot type-2 (VIR-2) comprises a master-slave catheter propulsion system, an image navigation system, and a force feedback system; catheter movement is achieved under automatic control and navigation, with real-time force feedback integrated. An in vitro pre-test in a vascular model was followed by cerebral angiography in a dog. The surgeon controlled the robot remotely to insert the catheter into the intended target, and catheter positioning error and operation time were evaluated. The in vitro pre-test and the animal experiment went smoothly; the catheter could enter any branch of the vasculature. Catheter positioning error was less than 1 mm. Angiography in the animal was carried out without complication; the success rate was 100%, the two procedures took 26 and 30 minutes, efficiency was slightly improved compared with the VIR-1, and staff exposure time to the DSA machine was 0 minutes. The resistance measured by the force sensor is displayed to the operator, providing a safeguard for the operation. There were no surgical complications. VIR-2 is safe and feasible and achieves remote catheter operation and angiography; the master-slave system preserves the characteristics of the traditional procedure. Three-dimensional image guidance makes the operation smoother, and the force feedback device provides remote real-time haptic information for operative safety.
Rafii-Tari, Hedyeh; Liu, Jindong; Payne, Christopher J; Bicknell, Colin; Yang, Guang-Zhong
2014-01-01
Despite increased use of remote-controlled steerable catheter navigation systems for endovascular intervention, most current designs are based on master configurations which tend to alter natural operator tool interactions. This introduces problems to both ergonomics and shared human-robot control. This paper proposes a novel cooperative robotic catheterization system based on learning-from-demonstration. By encoding the higher-level structure of a catheterization task as a sequence of primitive motions, we demonstrate how to achieve prospective learning for complex tasks whilst incorporating subject-specific variations. A hierarchical Hidden Markov Model is used to model each movement primitive as well as their sequential relationship. This model is applied to generation of motion sequences, recognition of operator input, and prediction of future movements for the robot. The framework is validated by comparing catheter tip motions against the manual approach, showing significant improvements in the quality of catheterization. The results motivate the design of collaborative robotic systems that are intuitive to use, while reducing the cognitive workload of the operator.
Robotic navigation and ablation.
Malcolme-Lawes, L; Kanagaratnam, P
2010-12-01
Robotic technologies have been developed to allow optimal catheter stability and reproducible catheter movements with the aim of achieving contiguous and transmural lesion delivery. Two systems for remote navigation of catheters within the heart have been developed; the first is based on a magnetic navigation system (MNS; Niobe, Stereotaxis, St. Louis, Missouri, USA), the second on a steerable sheath system (Sensei, Hansen Medical, Mountain View, CA, USA). Both robotic and magnetic navigation systems have proven to be feasible for performing ablation of both simple and complex arrhythmias, particularly atrial fibrillation. Studies to date have shown success rates for AF ablation similar to those of manual ablation, with many groups finding a reduction in fluoroscopy times. However, the early learning-curve cases demonstrated longer procedure times, mainly due to additional setup time. With centres performing increasing numbers of robotic ablations and the introduction of pressure monitoring, lower power settings and instinctive driving software, complication rates are falling, and fluoroscopy times have been lower than with manual ablation in many studies. As the demand for catheter ablation for arrhythmias such as atrial fibrillation increases and the number of centres performing these ablations grows, the demand for systems that reduce the hand-skill requirement and improve the comfort of the operator will also increase.
Navigation of military and space unmanned ground vehicles in unstructured terrains
NASA Technical Reports Server (NTRS)
Lescoe, Paul; Lavery, David; Bedard, Roger
1991-01-01
Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer-assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by the Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements, and the path plan was transmitted to the vehicle, which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six-wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step toward the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.
Evaluation of a completely robotized neurosurgical operating microscope.
Kantelhardt, Sven R; Finke, Markus; Schweikard, Achim; Giese, Alf
2013-01-01
Operating microscopes are essential for most neurosurgical procedures. Modern robot-assisted controls offer new possibilities, combining the advantages of conventional and automated systems. We evaluated the prototype of a completely robotized operating microscope with an integrated optical coherence tomography module. A standard operating microscope was fitted with motors and control instruments, with the manual control mode and balance preserved. In the robot mode, the microscope was steered by a remote control that could be fixed to a surgical instrument. External encoders and accelerometers tracked microscope movements. The microscope was additionally fitted with an optical coherence tomography-scanning module. The robotized microscope was tested on model systems. It could be freely positioned, without forcing the surgeon to take the hands from the instruments or avert the eyes from the oculars. Positioning error was about 1 mm, and vibration faded in 1 second. Tracking of microscope movements, combined with an autofocus function, allowed determination of the focus position within the 3-dimensional space. This constituted a second loop of navigation independent from conventional infrared reflector-based techniques. In the robot mode, automated optical coherence tomography scanning of large surface areas was feasible. The prototype of a robotized optical coherence tomography-integrated operating microscope combines the advantages of a conventional manually controlled operating microscope with a remote-controlled positioning aid and a self-navigating microscope system that performs automated positioning tasks such as surface scans. This demonstrates that, in the future, operating microscopes may be used to acquire intraoperative spatial data, volume changes, and structural data of brain or brain tumor tissue.
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design of an intelligent entertainment robot with educational purpose (IRFEE). The robot is designed for home life with dependability and interaction in mind, and has three objectives: (1) develop an autonomous robot, (2) design the robot for mobility and robustness, and (3) develop a robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented with active-vision-based SLAM and a modified EPF algorithm. Two differential wheels and a pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering esthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans, and image transfer and Internet site connection support remote-connection services and the educational purpose.
Ganji, Yusof; Janabi-Sharifi, Farrokh; Cheema, Asim N
2011-12-01
Despite the recent advances in catheter design and technology, intra-cardiac navigation during electrophysiology procedures remains challenging. Incorporation of imaging along with magnetic or robotic guidance may improve navigation accuracy and procedural safety. In the present study, the in vivo performance of a novel remote controlled Robot Assisted Cardiac Navigation System (RACN) was evaluated in a porcine model. The navigation catheter and target sensor were advanced to the right atrium using fluoroscopic and intra-cardiac echo guidance. The target sensor was positioned at three target locations in the right atrium (RA) and the navigation task was completed by an experienced physician using both manual and RACN guidance. The navigation time, final distance between the catheter tip and target sensor, and variability in final catheter tip position were determined and compared for manual and RACN guided navigation. The experiments were completed in three animals, with five measurements recorded for each target location. The mean distance (mm) between catheter tip and target sensor at the end of the navigation task was significantly less using RACN guidance compared with manual navigation (5.02 ± 0.31 vs. 9.66 ± 2.88, p = 0.050 for high RA; 9.19 ± 1.13 vs. 13.0 ± 1.00, p = 0.011 for low RA; and 6.77 ± 0.59 vs. 15.66 ± 2.51, p = 0.003 for tricuspid valve annulus). The average time (s) needed to complete the navigation task was significantly longer with RACN guidance compared with manual navigation (43.31 ± 18.19 vs. 13.54 ± 1.36, p = 0.047 for high RA; 43.71 ± 11.93 vs. 22.71 ± 3.79, p = 0.043 for low RA; and 37.84 ± 3.71 vs. 16.13 ± 4.92, p = 0.003 for tricuspid valve annulus). RACN guided navigation resulted in greater consistency in performance compared with manual navigation, as evidenced by lower variability in final distance measurements (0.41 vs. 0.99 mm, p = 0.04).
This study demonstrated the safety and feasibility of the RACN system for cardiac navigation. The results demonstrated that RACN performed comparably with manual navigation, with improved precision and consistency for targets located in and near the right atrial chamber. Copyright © 2011 John Wiley & Sons, Ltd.
HERMIES-I: a mobile robot for navigation and manipulation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.; Barhen, J.; de Saussure, G.
1985-01-01
The purpose of this paper is to report the current status of investigations ongoing at the Center for Engineering Systems Advanced Research (CESAR) in the areas of navigation and manipulation in unstructured environments. The HERMIES-I mobile robot, a prototype of a series which contains many of the major features needed for remote work in hazardous environments, is discussed. Initial experimental work at CESAR has begun in the area of navigation; the paper briefly reviews some of the ongoing research in autonomous navigation and describes initial research with HERMIES-I and associated graphic simulation. Since the HERMIES robots will generally be composed of a variety of asynchronously controlled hardware components (such as manipulator arms, digital image sensors, sonars, etc.), it seems appropriate to consider future development of the HERMIES brain as a hypercube ensemble machine with concurrent computation and associated message passing. The basic properties of such a hypercube architecture are presented. Decision-making under uncertainty eventually permeates all of our work. Following a survey of existing analytical approaches, it was decided that a stronger theoretical basis is required. As such, this paper presents the framework for a recently developed hybrid uncertainty theory. 21 refs., 2 figs.
NASA Technical Reports Server (NTRS)
Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.
2003-01-01
Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.
Navigation within the heart and vessels in clinical practice.
Beyar, Rafael
2010-02-01
The field of interventional cardiology has developed at an unprecedented pace on account of the visual and imaging power provided by constantly improving biomedical technologies. Transcatheter-based technology is now routinely used for coronary revascularization and noncoronary interventions using balloon angioplasty, stents, and many other devices. In the early days of interventional practice, the operating physician had to manually navigate catheters and devices under fluoroscopic imaging and was exposed to radiation, with its concomitant necessity for wearing heavy lead aprons for protection. Until recently, very little has changed in the way procedures have been carried out in the catheterization laboratory. The technological capacity to remotely manipulate devices, using robotic arms and computational tools, has been developed for surgery and other medical procedures. This has brought to practice the powerful combination of the abilities afforded by imaging, navigational tools, and remote control manipulation. This review covers recent developments in navigational tools for catheter positioning, electromagnetic mapping, magnetic resonance imaging (MRI)-based cardiac electrophysiological interventions, and navigation tools through coronary arteries.
Deictic primitives for general purpose navigation
NASA Technical Reports Server (NTRS)
Crismann, Jill D.
1994-01-01
A visually based deictic primitive used as an elementary command set for general-purpose navigation was investigated. It was shown that a simple 'follow your eyes' scenario is sufficient for tracking a moving target. Limitations on velocity and acceleration, and modeling of the response of the mechanical systems, were enforced, and realistic robot paths were produced during the simulation. Scientists could remotely command a planetary rover to go to a particular rock formation that may be interesting. Similarly, an expert at plant maintenance could obtain diagnostic information remotely by using deictic primitives on a mobile robot. Because the same elementary commands are used in the deictic primitives, we could imagine that the exact same control software could be used for all of these applications.
NASA Astrophysics Data System (ADS)
Iakovleva, E. V.; Momot, B. A.
2017-10-01
The object of this study is to develop a power plant and an electric propulsion control system for autonomous, remotely controlled vessels. The tasks of the study are as follows: to assess the reasonability of using remotely controlled vessels and to define the navigation requirements for this type of vessel. In addition, the paper presents an analysis of technical diagnostics systems. The developed electric propulsion control systems should provide improved reliability and efficiency of the propulsion complex to ensure the profitability of remotely controlled vessels.
Autonomous Navigation by a Mobile Robot
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand
2005-01-01
ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
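The two-level structure described above (long-range waypoints, local obstacle-avoiding legs, an occupancy-grid representation) can be sketched as follows. This is an illustrative reconstruction under simplified assumptions, not ROAMAN itself: the long-range planner is reduced to a fixed waypoint list, and each local leg is found by breadth-first search on a binary grid.

```python
from collections import deque

def local_leg(grid, start, goal):
    """Shortest 4-connected path on a 0=free / 1=obstacle grid, or None.
    Stands in for the local, narrow-baseline planner."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:      # walk the predecessor chain back
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None

def plan(grid, waypoints):
    """Chain local legs between successive long-range waypoints."""
    route = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        leg = local_leg(grid, a, b)
        if leg is None:
            return None                 # leg blocked; replanning would go here
        route += leg[1:]
    return route

grid = [
    [0, 0, 0],
    [1, 1, 0],    # wall forcing a detour around the right side
    [0, 0, 0],
]
route = plan(grid, [(0, 0), (2, 0)])
```

In the real system the two levels deliberately keep separate maps, since localization error accumulated over a long traverse would corrupt a single shared grid; this sketch ignores that and uses one grid for clarity.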
Endocavity Ultrasound Probe Manipulators
Stoianovici, Dan; Kim, Chunwoo; Schäfer, Felix; Huang, Chien-Ming; Zuo, Yihe; Petrisor, Doru; Han, Misop
2014-01-01
We developed two similar structure manipulators for medical endocavity ultrasound probes with 3 and 4 degrees of freedom (DoF). These robots allow scanning with ultrasound for 3-D imaging and enable robot-assisted image-guided procedures. Both robots use remote center of motion kinematics, characteristic of medical robots. The 4-DoF robot provides unrestricted manipulation of the endocavity probe. With the 3-DoF robot the insertion motion of the probe must be adjusted manually, but the device is simpler and may also be used to manipulate external-body probes. The robots enabled a novel surgical approach of using intraoperative image-based navigation during robot-assisted laparoscopic prostatectomy (RALP), performed with concurrent use of two robotic systems (Tandem, T-RALP). Thus far, a clinical trial for evaluation of safety and feasibility has been performed successfully on 46 patients. This paper describes the architecture and design of the robots, the two prototypes, control features related to safety, preclinical experiments, and the T-RALP procedure. PMID:24795525
Autonomous mobile robot for radiologic surveys
Dudar, A.M.; Wagner, D.G.; Teese, G.D.
1994-06-28
An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.
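The survey behavior described in this record reads as a small state machine: cruise the preprogrammed path, drop to reduced speed and resurvey when contamination is detected, resume the path if the reading is not confirmed, and stop and alarm if it is. A schematic version follows; the threshold and count-rate values are invented placeholders, not figures from the patent.

```python
def survey(readings, threshold=100):
    """Process a stream of radiation count rates; return (state, log)."""
    state, log = "CRUISE", []
    for r in readings:
        if state == "CRUISE":
            if r > threshold:
                state = "RESURVEY"      # reduced speed, repeat the pass
        elif state == "RESURVEY":
            # Confirmed reading -> stop and alarm; otherwise resume the path.
            state = "ALARM" if r > threshold else "CRUISE"
        log.append(state)
        if state == "ALARM":
            break                       # robot halts and sounds the alarm
    return state, log

final, log = survey([20, 30, 150, 40, 25, 160, 180])
```

The first elevated reading triggers a resurvey that is not confirmed, so the robot resumes cruising; the second elevated reading is confirmed on the repeat pass and ends in the alarm state.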
Autonomous mobile robot for radiologic surveys
Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.
1994-01-01
An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.
Robotic positioning of standard electrophysiology catheters: a novel approach to catheter robotics.
Knight, Bradley; Ayers, Gregory M; Cohen, Todd J
2008-05-01
Robotic systems have been developed to manipulate and position electrophysiology (EP) catheters remotely. One limitation of existing systems is their requirement for specialized catheters or sheaths. We evaluated a system (Catheter Robotics Remote Catheter Manipulation System [RCMS], Catheter Robotics, Inc., Budd Lake, New Jersey) that manipulates conventional EP catheters placed through standard introducer sheaths. The remote controller functions much like the EP catheter handle, and the system permits repeated catheter disengagement for manual manipulation without requiring removal of the catheter from the body. This study tested the hypothesis that the RCMS would be able to safely and effectively position catheters at various intracardiac sites and obtain thresholds and electrograms similar to those obtained with manual catheter manipulation. Two identical 7 Fr catheters (Blazer II; Boston Scientific Corp., Natick, Massachusetts) were inserted into the right femoral veins of 6 mongrel dogs through separate, standard 7 Fr sheaths. The first catheter was manually placed at a right ventricular endocardial site. The second catheter handle was placed in the mating holder of the RCMS and moved to approximately the same site as the first catheter using the Catheter Robotics RCMS. The pacing threshold was determined for each catheter. This sequence was performed at 2 right atrial and 2 right ventricular sites. The distance between the manually and robotically placed catheter tips was measured, and pacing thresholds and His-bundle recordings were compared. The heart was inspected at necropsy for signs of cardiac perforation or injury. Compared to manual positioning, remote catheter placement produced the same pacing threshold at 7/24 sites, a lower threshold at 11/24 sites, and a higher threshold at only 6/24 sites (p > 0.05). The average distance between catheter tips was 0.46 +/- 0.32 cm (median 0.32, range 0.13-1.16 cm).
There was no difference between right atrial and right ventricular sites (p > 0.05). His-bundle electrograms were equal in amplitude and timing. Further, the remote navigation catheter was able to be disengaged, manually manipulated, then reengaged in the robot without issue. There was no evidence of perforation. The Catheter Robotics remote catheter manipulation system, which uses conventional EP catheters and introducer sheaths, appears to be safe and effective at directing EP catheters to intracardiac sites and achieving pacing thresholds and electrograms equivalent to manually placed catheters. Further clinical studies are needed to confirm these observations.
Mini AERCam Inspection Robot for Human Space Missions
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.; Duran, Steve; Mitchell, Jennifer D.
2004-01-01
The Engineering Directorate of NASA Johnson Space Center has developed a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam free flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35 pound, 14 inch AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations including automatic stationkeeping and point-to-point maneuvering. Mini AERCam is designed to fulfill the unique requirements and constraints associated with using a free flyer to perform external inspections and remote viewing of human spacecraft operations. This paper describes the application of Mini AERCam for stand-alone spacecraft inspection, as well as for roles on teams of humans and robots conducting future space exploration missions.
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1991-01-01
The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.
Robotic vehicles for planetary exploration
NASA Astrophysics Data System (ADS)
Wilcox, Brian; Matthies, Larry; Gennery, Donald; Cooper, Brian; Nguyen, Tam; Litwin, Todd; Mishkin, Andrew; Stone, Henry
A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques used are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunakhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.
Robotic vehicles for planetary exploration
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Matthies, Larry; Gennery, Donald; Cooper, Brian; Nguyen, Tam; Litwin, Todd; Mishkin, Andrew; Stone, Henry
1992-01-01
A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques used are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunakhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.
Flexible robotics: a new paradigm.
Aron, Monish; Haber, Georges-Pascal; Desai, Mihir M; Gill, Inderbir S
2007-05-01
The use of robotics in urologic surgery has seen exponential growth over the last 5 years. Existing surgical robots operate rigid instruments on the master/slave principle and currently allow extraluminal manipulations and surgical procedures. Flexible robotics is an entirely novel paradigm. This article explores the potential of flexible robotic platforms that could permit endoluminal and transluminal surgery in the future. Computerized catheter-control systems are being developed primarily for cardiac applications. This development is driven by the need for precise positioning and manipulation of the catheter tip in the three-dimensional cardiovascular space. Such systems employ either remote navigation in a magnetic field or a computer-controlled electromechanical flexible robotic system. We have adapted this robotic system for flexible ureteropyeloscopy and have to date completed the initial porcine studies. Flexible robotics is on the horizon. It has potential for improved scope-tip precision, superior operative ergonomics, and reduced occupational radiation exposure. In the near future, in urology, we believe that it holds promise for endoluminal therapeutic ureterorenoscopy. Looking further ahead, within the next 3-5 years, it could enable transluminal surgery.
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is to add multiple cameras and include the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot.
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.
Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning
2018-03-16
A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in its application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion-characteristic constraints. The method realizes autonomous navigation of the snake robot, with no external nodes or assistance, using only its own Micro-Electro-Mechanical-Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of the snake robot. Finally, it realizes autonomous navigation positioning based on an Extended Kalman Filter (EKF) position estimation method under the constraints of its motion characteristics. A test with the self-developed snake robot verifies the proposed method, with a position error of less than 5% of the total traveled distance (TTD). Over short distances, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in typical applications, and it can be extended to other similar multi-link robots.
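As a rough illustration of the constraint-aided estimation described above, the sketch below fuses a dead-reckoned velocity state with a zero-lateral-velocity pseudo-measurement in a small Kalman filter. The state layout, matrices, and noise values are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def predict(x, P, dt=0.1, q=0.01):
    """Dead-reckoning step for state [px, py, vx, vy] (constant velocity)."""
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return F @ x, F @ P @ F.T + q * np.eye(4)

def constrain_lateral(x, P, heading, r=1e-4):
    """Pseudo-measurement: the velocity component perpendicular to the
    heading is observed to be ~zero (the motion-characteristic constraint)."""
    n = np.array([-np.sin(heading), np.cos(heading)])  # lateral unit vector
    H = np.array([[0.0, 0.0, n[0], n[1]]])             # measures lateral speed
    y = 0.0 - (H @ x).item()                           # innovation, target 0
    S = (H @ P @ H.T).item() + r
    K = P @ H.T / S                                    # Kalman gain, shape (4, 1)
    x = x + (K * y).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Start with a forward speed of 0.5 m/s plus a spurious 0.2 m/s lateral drift;
# the repeated constraint update drives the lateral component toward zero.
x, P = np.array([0.0, 0.0, 0.5, 0.2]), np.eye(4)
for _ in range(20):
    x, P = predict(x, P)
    x, P = constrain_lateral(x, P, heading=0.0)
```

The forward velocity is untouched by the constraint (its measurement coefficient is zero at this heading), while the lateral drift is suppressed almost immediately.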
Robot for Investigations and Assessments of Nuclear Areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanaan, Daniel; Dogny, Stephane
RIANA is a remote-controlled robot dedicated to investigations and assessments of nuclear areas. Its development is motivated by the need to have at hand a proven robot, tested in hot cells, capable of remotely investigating and characterising the inside of nuclear facilities in order to collect all the required data efficiently and in the shortest possible time. It is based on a wireless, medium-sized remote carrier that may carry a wide variety of interchangeable modules, sensors and tools. It is easily customised to match specific requirements and quickly configured depending on the mission and the operator's preferences. RIANA integrates localisation and navigation systems. The robot will be able to generate and update a 2D map of its surrounding and explored areas, on which its position is given accurately. Furthermore, the robot will be able to autonomously calculate, define and follow a trajectory between two points, taking into account its environment and obstacles, and it is configurable to manage obstacles and restrict access to forbidden areas. RIANA allows advanced control of modules, sensors and tools; all collected data (radiological and measured data) are displayed in real time in different formats (charts, overlays on the generated map...) and stored in a single place so that they may be exported in a convenient format for data processing. This modular design gives RIANA the flexibility to perform multiple investigation missions where humans cannot work, such as: visual inspections, dynamic localisation and 2D mapping, characterisation and nuclear measurements of floors and walls, non-destructive testing, and collection of solid and liquid samples. The benefits of using RIANA are: - reducing personnel exposure by limiting manual intervention time, - minimising the time and reducing the cost of investigation operations, - providing critical inputs to set up and optimise cleanup and dismantling operations. (authors)
Sirintrapun, Sahussapont Joseph; Rudomina, Dorota; Mazzella, Allix; Feratovic, Rusmir; Alago, William; Siegelbaum, Robert; Lin, Oscar
2017-01-01
Background: The first satellite center to offer interventional radiology procedures at Memorial Sloan Kettering Cancer Center opened in October 2014. Two of the procedures offered, fine needle aspirations and core biopsies, required rapid on-site cytologic evaluation of smears and biopsy touch imprints for cellular content and adequacy. The volume and frequency of such evaluations did not justify hiring on-site cytotechnologists, and therefore, a dynamic robotic telecytology (TC) solution was created. In this technical article, we present a detailed description of our implementation of robotic TC. Methods: Pathology devised the remote robotic TC solution after acknowledging that it would not be cost effective to staff cytotechnologists on-site at the satellite location. Sakura VisionTek was selected as our robotic TC solution. In addition to configuration of the dynamic robotic TC solution, pathology realized integrating the technology solution into operations would require a multidisciplinary effort and reevaluation of existing staffing and workflows. Results: Extensively described are the architectural framework and multidisciplinary process re-design, created to navigate the constraints of our technical, cultural, and organizational environment. Also reviewed are the benefits and challenges associated with available desktop sharing solutions, particularly accounting for information security concerns. Conclusions: Dynamic robotic TC is effective for immediate evaluations performed without on-site cytotechnology staff. Our goal is providing an extensive perspective of the implementation process, particularly technical, cultural, and operational constraints. Through this perspective, our template can serve as an extensible blueprint for other centers interested in implementing robotic TC without on-site cytotechnologists. PMID:28966832
Crew/Robot Coordinated Planetary EVA Operations at a Lunar Base Analog Site
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Bluethmann, W. J.; Delgado, F. J.; Herrera, E.; Kosmo, J. J.; Janoiko, B. A.; Wilcox, B. H.; Townsend, J. A.; Matthews, J. B.;
2007-01-01
Under the direction of NASA's Exploration Technology Development Program, robots and space-suited subjects from several NASA centers recently completed a very successful demonstration of coordinated activities indicative of base camp operations on the lunar surface. For these activities, NASA chose a site near Meteor Crater, Arizona, close to where Apollo astronauts previously trained. The main scenario demonstrated crew returning from a planetary EVA (extra-vehicular activity) to a temporary base camp and entering a pressurized rover compartment while robots performed tasks in preparation for the next EVA. Scenario tasks included: rover operations under direct human control and autonomous modes, crew ingress and egress activities, autonomous robotic payload removal and stowage operations under both local control and remote control from Houston, and autonomous robotic navigation and inspection. In addition to the main scenario, participants had an opportunity to explore additional robotic operations: hill climbing, maneuvering heavy loads, gathering geological samples, drilling, and tether operations. In this analog environment, the suited subjects and robots experienced high levels of dust, rough terrain, and harsh lighting.
Bio-robots automatic navigation with electrical reward stimulation.
Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2012-01-01
Bio-robots controlled by external stimulation through a brain-computer interface (BCI) suffer from a dependence on real-time guidance by human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, ignoring the animals' own intelligence. This paper proposes a new method to realize automatic navigation for bio-robots, using electrical micro-stimulation as real-time rewards. Owing to its reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the correct route by rewards, and it corrects its direction spontaneously when rewards are withheld. In navigation experiments, rat-robots learned the control scheme in a short time. The results show that our method simplifies the control logic and successfully realizes automatic navigation for rat-robots. Our work may have significant implications for the further development of bio-robots with hybrid intelligence.
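The reward-gating idea can be caricatured in a few lines: stimulation is delivered while the animal's heading error stays small and withheld when it drifts, leaving the correction to the animal itself. The threshold and the error trace below are invented for illustration, not taken from the experiments.

```python
def reward_signal(heading_error, tolerance=0.3):
    """Deliver stimulation (True) while the animal stays near the route."""
    return abs(heading_error) <= tolerance

def count_rewards(error_trace):
    """Count stimulation pulses delivered along a recorded error trace."""
    return sum(reward_signal(e) for e in error_trace)

# heading errors (radians) sampled along a run; off-route samples get no reward
delivered = count_rewards([0.1, -0.2, 0.5, 0.25, -0.8, 0.0])
```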
Navigation strategies for multiple autonomous mobile robots moving in formation
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1991-01-01
The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
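A minimal sketch of the nearest-neighbor tracking strategy analyzed above: each follower steers toward a fixed offset from its closest already-placed fleet-mate. The gains, offset, and starting positions are assumed values chosen only to show convergence to a line formation in the plane.

```python
import math

def step(positions, offset=(-1.0, 0.0), gain=0.5, dt=0.1):
    """One control step: robot 0 is a stationary leader; every other robot
    tracks a slot at `offset` from its nearest neighbor of lower index."""
    new = [positions[0]]
    for i in range(1, len(positions)):
        nx, ny = min(new, key=lambda p: math.dist(p, positions[i]))
        tx, ty = nx + offset[0], ny + offset[1]      # desired formation slot
        px, py = positions[i]
        new.append((px + gain * (tx - px) * dt,      # proportional steering
                    py + gain * (ty - py) * dt))
    return new

robots = [(0.0, 0.0), (-0.5, 0.8), (-1.5, -0.7)]
for _ in range(400):
    robots = step(robots)
# the fleet settles into a line behind the leader: slots near (-1, 0), (-2, 0)
```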
Terrain interaction with the quarter scale beam walker
NASA Technical Reports Server (NTRS)
Chun, Wendell H.; Price, S.; Spiessbach, A.
1990-01-01
Frame walkers are a class of mobile robots that are robust and capable mobility platforms. Variations of the frame walker robot are in commercial use today. Komatsu Ltd. of Japan developed the Remotely Controlled Underwater Surveyor (ReCUS) and Normed Shipyards of France developed the Marine Robot (RM3). Both applications of the frame walker concept satisfied robotic mobility requirements that could not be met by a wheeled or tracked design. One vehicle design concept that falls within this class of mobile robots is the walking beam. A one-quarter scale prototype of the walking beam was built by Martin Marietta to evaluate the potential merits of utilizing the vehicle as a planetary rover. The initial phase of prototype rover testing was structured to evaluate the mobility performance aspects of the vehicle. Performance parameters such as vehicle power, speed, and attitude control were evaluated as a function of the environment in which the prototype vehicle was tested. Subsequent testing phases will address the integrated performance of the vehicle and a local navigation system.
Mobile Robot Designed with Autonomous Navigation System
NASA Astrophysics Data System (ADS)
An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin
2017-10-01
With the rapid development of robot technology, robots appear in more and more aspects of life and social production, and people place greater demands on them; one such demand is that a robot be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the floor and automatically find its charging station; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a robot navigation scheme, SLAM, which can build a map of a completely unfamiliar environment and, at the same time, locate the robot's own position within it, thereby achieving autonomous navigation.
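The map-while-localizing loop can be caricatured with dead reckoning plus an occupancy grid. A real SLAM system also corrects the pose against the map (scan matching, particle or Kalman filters), which this toy omits; the grid resolution and motions are illustrative.

```python
import math

class GridMapper:
    """Toy mapper: dead-reckon a pose and mark range-sensor hits in a grid."""

    def __init__(self, resolution=0.5):
        self.res = resolution
        self.occupied = set()            # grid cells believed to be obstacles
        self.x = self.y = self.theta = 0.0

    def move(self, forward, turn):
        """Update the dead-reckoned pose from odometry."""
        self.theta += turn
        self.x += forward * math.cos(self.theta)
        self.y += forward * math.sin(self.theta)

    def sense(self, distance, bearing=0.0):
        """Record an obstacle `distance` m away at `bearing` from the heading."""
        ox = self.x + distance * math.cos(self.theta + bearing)
        oy = self.y + distance * math.sin(self.theta + bearing)
        self.occupied.add((int(ox / self.res), int(oy / self.res)))

m = GridMapper()
m.move(1.0, 0.0)          # advance 1 m along +x
m.sense(2.0)              # obstacle 2 m ahead -> world point (3, 0)
m.move(0.0, math.pi / 2)  # turn left in place
m.sense(1.0)              # obstacle 1 m ahead -> world point (1, 1)
```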
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an autonomous guided vehicle (AGV) robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination, or along a desired path, in an environment characterized by terrain and a set of distinct objects such as obstacles and landmarks. The autonomous navigation ability and road-following precision are mainly influenced by the control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for a mobile robot because of their high robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to perceptual environment feature identification and classification, based on the analysis of a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
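To give a flavor of the fuzzy half of such a controller, the toy below turns two sonar-style distances into a steering command with hand-written memberships and two rules. The membership shapes and thresholds are invented, and the neural classification stage of the paper is omitted entirely.

```python
def near(d, full=0.5, zero=2.0):
    """Membership of 'obstacle is near': 1 below `full` m, 0 beyond `zero` m."""
    if d <= full:
        return 1.0
    if d >= zero:
        return 0.0
    return (zero - d) / (zero - full)

def steer(left_dist, right_dist):
    """Two rules: left near -> turn right (+1); right near -> turn left (-1)."""
    near_l, near_r = near(left_dist), near(right_dist)
    num = near_l * 1.0 + near_r * (-1.0)
    den = near_l + near_r
    return num / den if den else 0.0    # weighted-average defuzzification

turn = steer(0.4, 5.0)                  # wall close on the left -> hard right
```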
Automating CapCom Using Mobile Agents and Robotic Assistants
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhaus, Maarten; Alena, Richard L.; Berrios, Daniel; Dowding, John; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail
2005-01-01
We have developed and tested an advanced EVA communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. This system, called Mobile Agents (MA), is voice controlled and provides information verbally to the astronauts through programs called personal agents. The system partly automates the role of CapCom in Apollo, including monitoring and managing EVA navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. EVA data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in the context of use, including six years of ethnographic observation of field geology. Our approach is to develop automation that supports the human work practices, allowing people to do what they do well and to work in the ways with which they are most familiar. Field experiments in Utah have enabled empirically discovering requirements and testing alternative technologies and protocols. This paper reports on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to move and wait in various locations to serve as a relay on the wireless network. The MA system is applicable to many space work situations that involve creating and navigating from maps (including configuring equipment for local topology), interacting with piloted and unpiloted rovers, adapting to environmental conditions, and remote team collaboration involving people and robots.
Intelligent navigation and accurate positioning of an assist robot in indoor environments
NASA Astrophysics Data System (ADS)
Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke
2017-12-01
A robot's navigation and accurate positioning in indoor environments are still challenging tasks, especially in applications assisting disabled and/or elderly people in museum or art gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot, imitating the supervisor's motions, so that it reaches the goal location safely and positions itself at the intended location. In a museum-like environment, the mobile robot starts navigating from various positions, using a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller trained with the conjugate gradient backpropagation algorithm gives a robust response and guides the mobile robot accurately to the goal position.
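The imitation idea, fitting a controller to supervisor demonstrations, can be sketched with a tiny linear model trained by plain gradient descent; the paper trains a neural network with conjugate-gradient backpropagation, and the demonstration data below are invented (steering proportional to how far the target sits off-centre in the image).

```python
def train(demos, lr=0.1, epochs=200):
    """Fit steering = w * offset + b to (offset, steering) demonstrations."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in demos:
            err = (w * x + b) - y     # prediction error on one demonstration
            w -= lr * err * x         # gradient step on the weight
            b -= lr * err             # gradient step on the bias
    return w, b

# supervisor steers proportionally to the target's horizontal offset
demos = [(-1.0, -0.5), (-0.5, -0.25), (0.0, 0.0), (0.5, 0.25), (1.0, 0.5)]
w, b = train(demos)
```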
Robotic Inspection System for Non-Destructive Evaluation (nde) of Pipes
NASA Astrophysics Data System (ADS)
Mackenzie, L. D.; Pierce, S. G.; Hayward, G.
2009-03-01
The demand for remote inspection of pipework in the processing cells of nuclear plant provides significant challenges of access, navigation, inspection technique and data communication. Such processing cells typically contain several kilometres of densely packed pipework whose actual physical layout may be poorly documented. Access to these pipes is typically afforded through the radiation shield via a small removable concrete plug which may be several meters from the actual inspection site, thus considerably complicating practical inspection. The current research focuses on the robotic deployment of multiple NDE payloads for weld inspection along non-ferritic steel pipework (thus precluding use of magnetic traction options). A fully wireless robotic inspection platform has been developed that is capable of travelling along the outside of a pipe at any orientation, while avoiding obstacles such as pipe hangers and delivering a variety of NDE payloads. An eddy current array system provides rapid imaging capabilities for surface breaking defects while an on-board camera, in addition to assisting with navigation tasks, also allows real time image processing to identify potential defects. All sensor data can be processed by the embedded microcontroller or transmitted wirelessly back to the point of access for post-processing analysis.
Robot navigation research at CESAR (Center for Engineering Systems Advanced Research)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, D.L.; de Saussure, G.; Pin, F.G.
1989-01-01
A considerable amount of work has been reported on the problem of robot navigation in known static terrains. Algorithms have been proposed and implemented to search for an optimum path to the goal, taking into account the finite size and shape of the robot. Not as much work has been reported on robot navigation in unknown, unstructured, or dynamic environments. A robot navigating in an unknown environment must explore with its sensors, construct an abstract representation of its global environment to plan a path to the goal, and update or revise its plan based on accumulated data obtained and processed in real time. The core of the navigation program for the CESAR robots is a production system developed on the expert-system shell CLIPS, which runs on an NCUBE hypercube on board the robot. The production system can call on C-compiled navigation procedures, and the production rules can read the sensor data and address the robot's effectors. This architecture was found efficient and flexible for the development and testing of the navigation algorithms; however, in order to process unexpected emergencies intelligently, it was found necessary to be able to control the production system through externally generated asynchronous data. This led to the design of a new asynchronous production system, APS, which is now being developed on the robot. This paper reviews some of the navigation algorithms developed and tested at CESAR and discusses the need for the new APS and how it is being integrated into the robot architecture. 18 refs., 3 figs., 1 tab.
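The select-the-best-suited-method idea behind such rule-based navigation can be sketched as an ordered rule base where the first matching condition activates a module. The situations and module names are invented for illustration; the real system's CLIPS rules and C-compiled procedures are far richer.

```python
# ordered (condition, navigation module) pairs; the first match wins
RULES = [
    (lambda s: s.get("emergency", False),      "halt_and_replan"),
    (lambda s: s.get("map_available", False),  "navigate_from_map"),
    (lambda s: s.get("target_visible", False), "search_for_target"),
    (lambda s: True,                           "explore"),   # default rule
]

def select_module(situation):
    """Return the navigation method best suited to the current situation."""
    for condition, module in RULES:
        if condition(situation):
            return module

chosen = select_module({"map_available": True})   # -> "navigate_from_map"
```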
Situationally driven local navigation for mobile robots. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Slack, Marc Glenn
1990-01-01
For mobile robots to autonomously accommodate dynamically changing navigation tasks in a goal-directed fashion, they must employ navigation plans. Any such plan must provide for the robot's immediate and continuous need for guidance while remaining highly flexible in order to avoid costly computation each time the robot's perception of the world changes. Due to the world's uncertainties, creation and maintenance of navigation plans cannot involve arbitrarily complex processes, as the robot's perception of the world will be in constant flux, requiring modifications to be made quickly if they are to be of any use. This work introduces navigation templates (NaT's) which are building blocks for the construction and maintenance of rough navigation plans which capture the relationship that objects in the world have to the current navigation task. By encoding only the critical relationship between the objects in the world and the navigation task, a NaT-based navigation plan is highly flexible; allowing new constraints to be quickly incorporated into the plan and existing constraints to be updated or deleted from the plan. To satisfy the robot's need for immediate local guidance, the NaT's forming the current navigation plan are passed to a transformation function. The transformation function analyzes the plan with respect to the robot's current location to quickly determine (a few times a second) the locally preferred direction of travel. This dissertation presents NaT's and the transformation function as well as the needed support systems to demonstrate the usefulness of the technique for controlling the actions of a mobile robot operating in an uncertain world.
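The transformation function's job, turning the current constraints into a locally preferred direction of travel a few times a second, can be sketched with a simple attract/repel blend. The weighting below is an assumed stand-in for the dissertation's actual formulation.

```python
import math

def preferred_heading(robot, goal, avoid):
    """Blend attraction toward the goal with repulsion from avoided objects."""
    vx, vy = goal[0] - robot[0], goal[1] - robot[1]   # pull toward the goal
    for ox, oy in avoid:
        dx, dy = robot[0] - ox, robot[1] - oy
        d2 = dx * dx + dy * dy or 1e-9                # guard division by zero
        vx += dx / d2                                  # push fades with range
        vy += dy / d2
    return math.atan2(vy, vx)                          # preferred direction

# an obstacle just left of the straight-line path bends the heading to the right
h = preferred_heading((0.0, 0.0), (10.0, 0.0), avoid=[(2.0, 0.5)])
```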
Soldier-Based Assessment of a Dual-Row Tactor Display during Simultaneous Navigational and Robot-Monitoring Tasks
Pomranky-Hartnett, Gina; Elliott, Linda R; Mortimer, Bruce JP; Mort, Greg R; Pettitt, Rodger A; Gary A...
2015-08-01
NASA Technical Reports Server (NTRS)
Balabanovic, Marko; Becker, Craig; Morse, Sarah K.; Nourbakhsh, Illah R.
1994-01-01
The success of every mobile robot application hinges on the ability to navigate robustly in the real world. The problem of robust navigation is separable from the challenges faced by any particular robot application. We offer the Real-World Navigator as a solution architecture that includes a path planner, a map-based localizer, and a motion control loop that combines reactive avoidance modules with deliberate goal-based motion. Our architecture achieves a high degree of reliability by maintaining and reasoning about an explicit description of positional uncertainty. We provide two implementations of real-world robot systems that incorporate the Real-World Navigator. The Vagabond Project culminated in a robot that successfully navigated a portion of the Stanford University campus. The Scimmer project developed successful entries for the AIAA 1993 Robotics Competition, placing first in one of the two contests entered.
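One way to read "maintaining and reasoning about an explicit description of positional uncertainty" is sketched below: dead reckoning inflates an uncertainty radius, a map-based fix shrinks it, and the motion loop checks the radius before committing to a narrow passage. The circular-uncertainty model and all numbers are assumptions for the example, not the Real-World Navigator's actual representation.

```python
import math

class UncertainPose:
    """Pose estimate carrying an explicit uncertainty radius (metres)."""

    def __init__(self):
        self.x = self.y = 0.0
        self.radius = 0.05

    def dead_reckon(self, dx, dy, slip=0.02):
        """Odometry update; uncertainty grows with distance travelled."""
        self.x += dx
        self.y += dy
        self.radius += slip * math.hypot(dx, dy)

    def localize(self, fix_x, fix_y, fix_radius=0.1):
        """Map-based fix; adopt it only if it is tighter than what we have."""
        if fix_radius < self.radius:
            self.x, self.y, self.radius = fix_x, fix_y, fix_radius

    def clears_gap(self, gap_width, robot_width=0.5):
        """Is the clearance larger than the worst-case position error?"""
        return gap_width - robot_width > 2 * self.radius

pose = UncertainPose()
pose.dead_reckon(10.0, 0.0)       # radius grows to 0.25 m
risky = pose.clears_gap(0.9)      # 0.4 m clearance vs 0.5 m error: refuse
pose.localize(10.1, 0.05)         # tighter map-based fix: radius back to 0.1 m
safe = pose.clears_gap(0.9)       # 0.4 m clearance vs 0.2 m error: proceed
```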
Robot navigation research using the HERMIES mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, D.L.
1989-01-01
In recent years robot navigation has attracted much attention from researchers around the world. Not only are theoretical studies being simulated on sophisticated computers, but many mobile robots are now used as test vehicles for these theoretical studies. Various algorithms have been perfected for navigation in a known static environment, but navigation in an unknown and dynamic environment poses a much more challenging problem for researchers. Many different methodologies have been developed for autonomous robot navigation, but each methodology is usually restricted to a particular type of environment. One important research focus of the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory is autonomous navigation in unknown and dynamic environments using the series of HERMIES mobile robots. The research uses an expert system for high-level planning interfaced with C-coded routines for implementing the plans and for quick processing of data requested by the expert system. With this approach, the navigation is not restricted to one methodology, since the expert system can activate a rule module for the methodology best suited to the current situation. Rule modules can be added to the rule base as they are developed and tested. Modules are being developed or enhanced for navigating from a map, searching for a target, exploring, artificial potential-field navigation, navigation using edge detection, etc. This paper reports on the various rule modules and methods of navigation in use or under development at CESAR, using the HERMIES-IIB robot as a testbed. 13 refs., 5 figs., 1 tab.
A Robot for Coastal Marine Studies Under Hostile Conditions
NASA Astrophysics Data System (ADS)
Consi, T. R.
2012-12-01
Robots have long been used for scientific exploration of extremely remote environments such as planetary surfaces and the deep ocean. In addition to these physically remote places, there are many environments that are transiently remote in the sense that they are inaccessible to humans for a period of time. Coastal marine environments fall into this category. While quite accessible (and enjoyable) during good weather, the coast can become as remote as the moon when it is impacted by severe storms or hurricanes. For near-shore and shallow-water marine science, unmanned underwater ground vehicles (UUGVs) are the robots of choice for reliable access under a variety of conditions. Ground vehicles are inherently amphibious, being able to operate in complex coastal environments that can range from the completely dry beach, through the transiently wet swash zone, into the surf zone and beyond. During storms, UUGVs provide stable sensor platforms resistant to waves and currents by virtue of being locked to the substrate. In such situations free-swimming robots would be swept away. Mobility during storms enables a UUGV to orient itself to optimally resist forces that would dislodge fixed, moored platforms. Mobility can also enable a UUGV to either avoid burial, or unbury itself after a storm. Finally, the ability to submerge provides a great advantage over buoys and surface vehicles, which would be smashed by heavy wave action. We have developed a prototype UUGV to enable new science in the surf zone and other shallow-water environments. Named LMAR, for Lake Michigan Amphibious Robot, it is designed to be deployed from the dry beach, enter the water to perform a near-shore survey, and return to the deployment point for recovery. The body of the robot is a heavy flattened box (base dimensions: 1.07 m x 1.10 m x 0.393 m, dry weight: ~127 kg, displacement: ~45 kg) with a low center of gravity for stability and robust construction to withstand waves and currents.
It is topped by a 1.5 m surface-penetrating mast which currently limits the operational depth, although the core vehicle can be deployed to depths in excess of 10 m. Propulsion is accomplished with two DC brushless motors driving six wide, heavy-tread pneumatic wheels, three on each side. Power is provided by NiMH batteries. An onboard computer controls propulsion, navigation and communications. Guidance and navigation utilize inertial sensors, an electronic compass and a GPS unit mounted on the mast. A scientist onshore can monitor data from the scientific payload as well as command the robot through a mast-mounted radio Ethernet bridge. Standard, off-the-shelf oceanographic sensors such as sondes and ADCPs can easily be integrated onto the robot, making it a versatile sensing platform. We have successfully deployed the vehicle off a sandy beach in Lake Michigan, where it has performed lawn-mower surveys in the surf zone. LMAR's design and field test results will be presented along with a discussion of how to further harden the vehicle for deployment in storms.
Experimental Semiautonomous Vehicle
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Mishkin, Andrew H.; Litwin, Todd E.; Matthies, Larry H.; Cooper, Brian K.; Nguyen, Tam T.; Gat, Erann; Gennery, Donald B.; Firby, Robert J.; Miller, David P.;
1993-01-01
Semiautonomous rover vehicle serves as testbed for evaluation of navigation and obstacle-avoidance techniques. Designed to traverse variety of terrains. Concepts developed applicable to robots for service in dangerous environments as well as to robots for exploration of remote planets. Called Robby, vehicle 4 m long and 2 m wide, with six 1-m-diameter wheels. Mass of 1,200 kg and surmounts obstacles as large as 1 1/2 m. Optimized for development of machine-vision-based strategies and equipped with complement of vision and direction sensors and image-processing computers. Front and rear cabs steer and roll with respect to centerline of vehicle. Vehicle also pivots about central axle, so wheels comply with almost any terrain.
Perception for mobile robot navigation: A survey of the state of the art
NASA Technical Reports Server (NTRS)
Kortenkamp, David
1994-01-01
In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
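The landmark-triangulation class of pose determination surveyed above can be illustrated with a minimal two-landmark sketch. It assumes known landmark positions and compass-referenced (absolute) bearings to each landmark, which is a simplification of the vision-based systems the survey covers.

```python
import math

def triangulate(l1, b1, l2, b2):
    """Locate the robot from two landmarks with known positions (l1, l2) and
    measured absolute bearings (b1, b2, in radians) from robot to landmark.
    Each bearing defines a ray from the robot toward a landmark; the robot
    sits at the intersection of the two reversed rays. Illustrative sketch."""
    # Unit direction from robot to each landmark.
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Robot = l1 - t1*d1 = l2 - t2*d2  =>  t1*d1 - t2*d2 = l1 - l2.
    a11, a12 = d1[0], -d2[0]
    a21, a22 = d1[1], -d2[1]
    rx, ry = l1[0] - l2[0], l1[1] - l2[1]
    det = a11 * a22 - a12 * a21  # near zero when bearings are nearly parallel
    t1 = (rx * a22 - a12 * ry) / det
    return (l1[0] - t1 * d1[0], l1[1] - t1 * d1[1])

# Robot at the origin sees landmark (1,0) due "east" and (0,1) due "north".
x, y = triangulate((1.0, 0.0), 0.0, (0.0, 1.0), math.pi / 2)
```

Real systems must also handle the degenerate case of nearly collinear landmarks (small `det`), which is one reason three or more landmarks are preferred in practice.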
Buttz, James H.; Shirey, David L.; Hayward, David R.
2003-01-01
A robotic vehicle system for terrain navigation mobility provides a way to climb stairs, cross crevices, and navigate across difficult terrain by coupling two or more mobile robots with a coupling device and controlling the robots cooperatively in tandem.
Computer-Assisted Hip and Knee Arthroplasty. Navigation and Active Robotic Systems
2004-01-01
Executive Summary Objective The Medical Advisory Secretariat undertook a review of the evidence on the effectiveness and cost-effectiveness of computer assisted hip and knee arthroplasty. The two computer assisted arthroplasty systems that are the topics of this review are (1) navigation and (2) robotic-assisted hip and knee arthroplasty. The Technology Computer-assisted arthroplasty consists of navigation and robotic systems. Surgical navigation is a visualization system that provides positional information about surgical tools or implants relative to a target bone on a computer display. Most of the navigation-assisted arthroplasty devices that are the subject of this review are licensed by Health Canada. Robotic systems are active robots that mill bone according to information from a computer-assisted navigation system. The robotic-assisted arthroplasty devices that are the subject of this review are not currently licensed by Health Canada. Review Strategy The Cochrane and International Network of Agencies for Health Technology Assessment databases did not identify any health technology assessments on navigation or robotic-assisted hip or knee arthroplasty. The MEDLINE and EMBASE databases were searched for articles published between January 1, 1996 and November 30, 2003. This search produced 367 studies, of which 9 met the inclusion criteria. Summary of Findings Navigation-Assisted Arthroplasty Five studies were identified that examined navigation-assisted arthroplasty. A Level 1 evidence study from Germany found a statistically significant difference in alignment and angular deviation between navigation-assisted and free-hand total knee arthroplasty in favour of navigation-assisted surgery. However, the endpoints in this study were short-term. 
To date, the long-term effects (need for revision, implant longevity, pain, functional performance) are unknown.(1) A Level 2 evidence short-term study found that navigation-assisted total knee arthroplasty was significantly better than a non-navigated procedure for one of five postoperative measured angles.(2) A Level 2 evidence short-term study found no statistically significant difference in the variation of the abduction angle between navigation-assisted and conventional total hip arthroplasty.(3) Level 3 evidence observational studies of navigation-assisted total knee arthroplasty and unicompartmental knee arthroplasty have been conducted. Two studies reported that “the follow-up of the navigated prostheses is currently too short to know if clinical outcome or survival rates are improved. Longer follow-up is required to determine the respective advantages and disadvantages of both techniques.”(4;5) Robotic-Assisted Arthroplasty Four studies were identified that examined robotic-assisted arthroplasty. 
A Level 1 evidence study revealed that there was no statistically significant difference between functional hip scores at 24 months post implantation between patients who underwent robotic-assisted primary hip arthroplasty and those that were treated with manual implantation.(6) Robotic-assisted arthroplasty had advantages in terms of preoperative planning and the accuracy of the intraoperative procedure.(6) Patients who underwent robotic-assisted hip arthroplasty had a higher dislocation rate and more revisions.(6) Robotic-assisted arthroplasty may prove effective with certain prostheses (e.g., anatomic) because their use may result in less muscle detachment.(6) An observational study (Level 3 evidence) found that the incidence of severe embolic events during hip relocation was lower with robotic arthroplasty than with manual surgery.(7) An observational study (Level 3 evidence) found that there was no significant difference in gait analyses of patients who underwent robotic-assisted total hip arthroplasty using robotic surgery compared to patients who were treated with conventional cementless total hip arthroplasty.(8) An observational study (Level 3 evidence) compared outcomes of total knee arthroplasty between patients undergoing robotic surgery and patients who were historical controls. Brief, qualitative results suggested that there was much broader variation of angles after manual total knee arthroplasty compared to the robotic technique and that there was no difference in knee functional scores or implant position at the 3 and 6 month follow-up.(9) PMID:23074452
Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. Because they take no account of a bio-robot's own innate living abilities and treat bio-robots in the same way as mechanical robots, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments using only reward stimulation and distance measurement. By exploiting the rat's habit of thigmotaxis and its reward-seeking behavior, the method incorporates the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that the method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work may lay a solid foundation for the application of ratbots and has significant implications for the automatic navigation of other bio-robots as well.
Interaction dynamics of multiple mobile robots with simple navigation strategies
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
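The visibility-cone interaction rule described above reduces to a simple geometric membership test under the particle model: another robot interacts with this one only if its ball falls inside this robot's cone. The cone half-angle and ball radius below are arbitrary illustrative values.

```python
import math

def in_visibility_cone(pos, heading, half_angle, other, radius):
    """Test whether `other` (a robot modeled as a closed ball of the given
    radius about its mass center) intersects this robot's visibility cone.
    Sketch of the paper's geometric setup, not its dynamics analysis."""
    dx, dy = other[0] - pos[0], other[1] - pos[1]
    bearing = math.atan2(dy, dx)
    # Smallest signed angular difference between bearing and heading.
    diff = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    dist = math.hypot(dx, dy)
    # Widen the cone test by the half-angle subtended by the other robot's ball.
    subtended = math.asin(min(1.0, radius / dist)) if dist > radius else math.pi
    return abs(diff) <= half_angle + subtended

# Robot at the origin, heading along +x, 45-degree half-angle cone:
visible = in_visibility_cone((0.0, 0.0), 0.0, math.pi / 4, (1.0, 0.0), 0.1)
behind = in_visibility_cone((0.0, 0.0), 0.0, math.pi / 4, (-1.0, 0.0), 0.1)
```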
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust to recover from ‘driver-lost’ scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results. PMID:28809803
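Since the hitchhiker's only control task is visual servoing on the driver, its controller can be as simple as two proportional terms: one centering the driver's marker in the image, one holding a fixed apparent size (hence following distance). The gains, image width, target area, and the rear-marker features are assumptions for illustration, not the authors' implementation.

```python
def visual_servo_cmd(marker_x, marker_area, img_width=640,
                     target_area=5000.0, k_turn=0.005, k_fwd=0.0002):
    """Hitchhiker-style visual servoing sketch: steer to keep the driver's
    rear marker centered, and regulate speed to hold its apparent size
    (i.e., the following distance). Hypothetical gains and setpoints."""
    turn = k_turn * (img_width / 2.0 - marker_x)   # positive = turn left
    forward = k_fwd * (target_area - marker_area)  # positive = speed up
    return forward, turn

# Marker centered and at the target size: no correction needed.
fwd0, turn0 = visual_servo_cmd(320.0, 5000.0)
```

A 'driver-lost' detector in a real system would monitor how long the marker has been missing from the image and trigger the recovery behavior the abstract describes.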
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of this data as a tangible haptic experience has not been sufficiently addressed, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments participated in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience where they can remotely access public places (art galleries and museums) with the aid of haptic modality and robotic telepresence.
Juang, Chia-Feng; Lai, Min-Ge; Zeng, Wan-Ting
2015-09-01
This paper presents a method that allows two wheeled, mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FC) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot to perform OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms for the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.
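A full AF-CACPSO-learned controller is beyond the scope of an abstract, but the flavor of a fuzzy boundary-following rule base can be sketched with two hand-written rules and weighted-average defuzzification. The membership ranges and output angles here are illustrative assumptions, not the learned parameters from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_turn(wall_dist):
    """Two-rule fuzzy boundary follower: if the wall is NEAR, turn away
    (+30 deg); if FAR, turn toward it (-30 deg). Hypothetical parameters."""
    near = tri(wall_dist, 0.0, 0.2, 0.6)
    far = tri(wall_dist, 0.4, 0.8, 10.0)
    total = near + far
    if total == 0.0:
        return 0.0
    # Weighted average (Sugeno-style) of the two singleton rule outputs.
    return (near * 30.0 + far * -30.0) / total
```

In the paper's setting, the shapes and consequents of such rules are exactly what the AF-CACPSO search tunes automatically instead of being hand-designed.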
Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Bio-robots based on brain-computer interfaces (BCI) suffer from a lack of consideration of the animal's own characteristics in navigation. This paper proposes a new method for bio-robot automatic navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animal itself. Given a graded electrical reward, the animal, e.g. the rat, seeks to maximize reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge on an optimal route by co-learning. This work provides significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
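The reward-seeking convergence idea can be illustrated with plain tabular Q-learning on a small grid, where the graded reward plays the role of the electrical stimulation and the greedy policy converges toward the goal. All parameters below are illustrative, not from the paper, and the tabular agent is of course a stand-in for the animal's learning.

```python
import random

def q_learn_grid(width, height, goal, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a width x height grid with a rewarded goal cell.
    Reward: +1.0 at the goal, small step cost elsewhere (hypothetical values)."""
    random.seed(0)  # reproducible sketch
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    q = {}  # (state, action) -> value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * width * height):
            if s == goal:
                break
            if random.random() < eps:  # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q.get((s, act), 0.0))
            ns = (min(max(s[0] + a[0], 0), width - 1),
                  min(max(s[1] + a[1], 0), height - 1))
            r = 1.0 if ns == goal else -0.01  # graded reward signal
            best_next = max(q.get((ns, b), 0.0) for b in actions)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next
                                                      - q.get((s, a), 0.0))
            s = ns
    return q

def greedy_path(q, start, goal, width, height, limit=50):
    """Follow the learned greedy policy from start toward the goal."""
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    s, path = start, [start]
    while s != goal and len(path) < limit:
        a = max(actions, key=lambda act: q.get((s, act), 0.0))
        s = (min(max(s[0] + a[0], 0), width - 1),
             min(max(s[1] + a[1], 0), height - 1))
        path.append(s)
    return path

q = q_learn_grid(4, 4, goal=(3, 3))
path = greedy_path(q, (0, 0), (3, 3), 4, 4)
```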
Riga, Celia; Bicknell, Colin; Cheshire, Nicholas; Hamady, Mohamad
2009-04-01
To report the initial clinical use of a robotically steerable catheter during endovascular aneurysm repair (EVAR) in order to assess this novel and innovative approach in a clinical setting. Following a series of in-vitro studies and procedure rehearsals using a pulsatile silicon aneurysm model, a 78-year-old man underwent robot-assisted EVAR of a 5.9-cm infrarenal abdominal aortic aneurysm. During the standard procedure, a 14-F remotely steerable robotic catheter was used to successfully navigate through the aneurysm sac, cannulate the contralateral limb of a bifurcated stent-graft under fluoroscopic guidance, and place stiff wires using fine and controlled movements. The procedure was completed successfully. There were no postoperative complications, and computed tomographic angiography prior to discharge and at 3 months confirmed that the stent-graft remained in good position, with no evidence of an endoleak. EVAR using robotically-steerable catheters is feasible. This technology may simplify more complex procedures by increasing the accuracy of vessel cannulation and perhaps reduce procedure times and radiation exposure to the patient and operator.
Flexible robotic catheters in the visceral segment of the aorta: advantages and limitations.
Li, Mimi M; Hamady, Mohamad S; Bicknell, Colin D; Riga, Celia V
2018-06-01
Flexible robotic catheters are an emerging technology which provide an elegant solution to the challenges of conventional endovascular intervention. Originally developed for interventional cardiology and electrophysiology procedures, remotely steerable robotic catheters such as the Magellan system enable greater precision and enhanced stability during target vessel navigation. These technical advantages facilitate improved treatment of disease in the arterial tree, as well as allowing execution of otherwise unfeasible procedures. Occupational radiation exposure is an emerging concern with the use of increasingly complex endovascular interventions. The robotic systems offer an added benefit of radiation reduction, as the operator is seated away from the radiation source during manipulation of the catheter. Pre-clinical studies have demonstrated reduction in force and frequency of vessel wall contact, resulting in reduced tissue trauma, as well as improved procedural times. Both safety and feasibility have been demonstrated in early clinical reports, with the first robot-assisted fenestrated endovascular aortic repair in 2013. Following from this, the Magellan system has been used to successfully undertake a variety of complex aortic procedures, including fenestrated/branched endovascular aortic repair, embolization, and angioplasty.
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway™ Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Land, sea, and air unmanned systems research and development at SPAWAR Systems Center Pacific
NASA Astrophysics Data System (ADS)
Nguyen, Hoa G.; Laird, Robin; Kogut, Greg; Andrews, John; Fletcher, Barbara; Webber, Todd; Arrieta, Rich; Everett, H. R.
2009-05-01
The Space and Naval Warfare (SPAWAR) Systems Center Pacific (SSC Pacific) has a long and extensive history in unmanned systems research and development, starting with undersea applications in the 1960s and expanding into ground and air systems in the 1980s. In the ground domain, we are addressing force-protection scenarios using large unmanned ground vehicles (UGVs) and fixed sensors, and simultaneously pursuing tactical and explosive ordnance disposal (EOD) operations with small man-portable robots. Technology thrusts include improving robotic intelligence and functionality, autonomous navigation and world modeling in urban environments, extended operational range of small teleoperated UGVs, enhanced human-robot interaction, and incorporation of remotely operated weapon systems. On the sea surface, we are pushing the envelope on dynamic obstacle avoidance while conforming to established nautical rules-of-the-road. In the air, we are addressing cooperative behaviors between UGVs and small vertical-takeoff- and-landing unmanned air vehicles (UAVs). Underwater applications involve very shallow water mine countermeasures, ship hull inspection, oceanographic data collection, and deep ocean access. Specific technology thrusts include fiber-optic communications, adaptive mission controllers, advanced navigation techniques, and concepts of operations (CONOPs) development. This paper provides a review of recent accomplishments and current status of a number of projects in these areas.
Constrained navigation for unmanned systems
NASA Astrophysics Data System (ADS)
Vasseur, Laurent; Gosset, Philippe; Carpentier, Luc; Marion, Vincent; Morillon, Joel G.; Ropars, Patrice
2005-05-01
The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value". The paper details the "constrained navigation" study (named TEL2), whose main goal is to identify and test a well-balanced task sharing between man and machine to accomplish a robotic task that cannot be performed autonomously at the moment because of technological limitations. The chosen function is "obstacle avoidance" on rough ground and at quite high speed (40 km/h). State-of-the-art algorithms have been implemented to perform autonomous obstacle avoidance and following of forest borders, using a scanning laser sensor and standard localization functions. Such an "obstacle avoidance" function works well most of the time, but fails sometimes. The study analyzed how the remote operator can manage such failures so that the system remains fully operationally reliable; he can act in two ways: (a) finely adjust the vehicle's current heading; (b) take control of the vehicle "on the fly" (without stopping) and bring it back to autonomous behavior when motion is secured again. The paper also presents the results obtained from the military acceptance tests performed on the French 4x4 DARDS ATD.
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Iconic memory-based omnidirectional route panorama navigation.
Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko
2005-01-01
A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon when the robot moves to its goal. While the robot is navigating to the goal point, input is matched against the memorized spatio-temporal route pattern by using dual active contour models and the exact robot position and orientation is estimated from the converged shape of the active contour models.
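The core idea of matching the current omnidirectional view against a memorized route pattern can be reduced, for illustration, to a 1-D circular-shift search over horizon signatures: the shift with the smallest squared difference estimates the robot's rotation relative to the memorized view. This is a crude stand-in for the paper's dual active-contour matching, with a made-up signature for the example.

```python
def best_heading_shift(memorized, current):
    """Find the circular shift of `current` that best matches `memorized`
    (minimum sum of squared differences). Both are equal-length lists of
    omnidirectional horizon-signature samples. Illustrative sketch only."""
    n = len(memorized)
    best_shift, best_err = 0, float("inf")
    for shift in range(n):
        err = sum((memorized[i] - current[(i + shift) % n]) ** 2
                  for i in range(n))
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift

# A toy 8-sample signature, and the same signature after the robot rotates
# by 3 bins; the search recovers the 3-bin shift.
sig = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0]
rotated = sig[-3:] + sig[:-3]
shift = best_heading_shift(sig, rotated)
```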
NASA Astrophysics Data System (ADS)
van Oosterom, Matthias Nathanaël; Engelen, Myrthe Adriana; van den Berg, Nynke Sjoerdtje; KleinJan, Gijs Hendrik; van der Poel, Henk Gerrit; Wendler, Thomas; van de Velde, Cornelis Jan Hadde; Navab, Nassir; van Leeuwen, Fijs Willem Bernhard
2016-08-01
Robot-assisted laparoscopic surgery is becoming an established technique for prostatectomy and is increasingly being explored for other types of cancer. Linking intraoperative imaging techniques, such as fluorescence guidance, with the three-dimensional insights provided by preoperative imaging remains a challenge. Navigation technologies may provide a solution, especially when directly linked to both the robotic setup and the fluorescence laparoscope. We evaluated the feasibility of such a setup. Preoperative single-photon emission computed tomography/X-ray computed tomography (SPECT/CT) or intraoperative freehand SPECT (fhSPECT) scans were used to navigate an optically tracked robot-integrated fluorescence laparoscope via an augmented reality overlay in the laparoscopic video feed. The navigation accuracy was evaluated in soft tissue phantoms, followed by studies in a human-like torso phantom. Navigation accuracies found for SPECT/CT-based navigation were 2.25 mm (coronal) and 2.08 mm (sagittal). For fhSPECT-based navigation, these were 1.92 mm (coronal) and 2.83 mm (sagittal). All errors remained below the <1-cm detection limit for fluorescence imaging, allowing refinement of the navigation process using fluorescence findings. The phantom experiments performed suggest that SPECT-based navigation of the robot-integrated fluorescence laparoscope is feasible and may aid fluorescence-guided surgery procedures.
Bourbakis, N G
1997-01-01
This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic unknown or known navigation space. In previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they move anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability to detect other moving objects in the same free navigation space and to determine those objects' perceived size, velocity and direction. Under these assumptions, each robot needs a traffic priority language that enables it to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic priority alphabet and rules that compose patterns of corridors for the application of the traffic priority rules.
NASA Astrophysics Data System (ADS)
Hanford, Scott D.
Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. 
Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
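The return-to-start behavior described above amounts to a shortest-path query over the learned topological map of intersections. The sketch below shows one generic way this can work, using breadth-first search over an intersection graph; the node names and the dictionary representation are illustrative assumptions, not the CRS's actual data structures.

```python
from collections import deque

def route_home(topo_map, start, goal):
    # Breadth-first search over the learned intersection graph; returns the
    # sequence of landmark nodes from start to goal with the fewest hops.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo_map.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable in the learned map
```

The resulting node sequence can double as step-by-step, landmark-based directions of the kind the CRS is described as emailing.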
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot, in the real-time detection of objects using their color, and in the processing of the robot's range and vision sensor data for navigation.
SLAM algorithm applied to robotics assistance for navigation in unknown environments.
Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo
2010-02-17
The combination of robotic tools with assistance technology defines a scarcely explored area of applications and advantages for people with disabilities and the elderly in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners, concave and convex, of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid collisions with the environment and moving agents. The entire system was tested in a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI.
The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the communication between the two in real time, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation.
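The core loop of a sequential EKF feature-based SLAM system like the one described above is a predict/update cycle over a joint robot-and-landmark state. The following sketch runs one such cycle for a unicycle robot observing a single corner-like landmark with a range-bearing measurement. It is a minimal, generic EKF-SLAM step under assumed noise values and a single-landmark state, not the authors' implementation.

```python
import numpy as np

def motion(state, v, w, dt):
    # Unicycle prediction for the robot part [x, y, theta]; landmark is static.
    x, y, th = state[:3]
    return np.array([x + v*dt*np.cos(th), y + v*dt*np.sin(th), th + w*dt,
                     state[3], state[4]])

def F_jac(state, v, dt):
    # Jacobian of the motion model w.r.t. the full state.
    th = state[2]
    F = np.eye(5)
    F[0, 2] = -v*dt*np.sin(th)
    F[1, 2] =  v*dt*np.cos(th)
    return F

def h(state):
    # Range-bearing observation of the landmark from the robot pose.
    dx, dy = state[3] - state[0], state[4] - state[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - state[2]])

def H_jac(state):
    dx, dy = state[3] - state[0], state[4] - state[1]
    q = dx*dx + dy*dy
    r = np.sqrt(q)
    return np.array([[-dx/r, -dy/r,  0,  dx/r,  dy/r],
                     [ dy/q, -dx/q, -1, -dy/q,  dx/q]])

def ekf_slam_step(mu, P, v, w, dt, z, Q, R):
    # Predict: propagate mean and covariance through the motion model.
    F = F_jac(mu, v, dt)
    mu = motion(mu, v, w, dt)
    P = F @ P @ F.T + Q
    # Update: fuse one range-bearing measurement of the landmark.
    Hm = H_jac(mu)
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)
    innov = z - h(mu)
    innov[1] = (innov[1] + np.pi) % (2*np.pi) - np.pi  # wrap bearing error
    mu = mu + K @ innov
    P = (np.eye(5) - K @ Hm) @ P
    return mu, P
```

Each update both corrects the state estimate and shrinks its covariance, which is the mechanism by which the global metric map becomes consistent over time.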
van der List, Jelle P; Chawla, Harshvardhan; Joskowicz, Leo; Pearle, Andrew D
2016-11-01
Recently, there is growing interest in surgical variables that are controlled intraoperatively by orthopaedic surgeons, including lower leg alignment, component positioning and soft tissue balancing. Since tighter control over these factors is associated with improved outcomes of unicompartmental knee arthroplasty and total knee arthroplasty (TKA), several computer navigation and robotic-assisted systems have been developed. Although mechanical axis accuracy and component positioning have been shown to improve with computer navigation, no superiority in functional outcomes has yet been shown. This could be explained by the fact that many differences exist between the number and type of surgical variables these systems control. Most systems control lower leg alignment and component positioning, while some additionally control soft tissue balancing. Finally, robotic-assisted systems have the additional advantage of improving surgical precision. A systematic search in PubMed, Embase and the Cochrane Library resulted in 40 comparative studies and three registries on computer navigation reporting outcomes of 474,197 patients, and 21 basic science and clinical studies on robotic-assisted knee arthroplasty. Twenty-eight of these comparative computer navigation studies reported Knee Society Total scores in 3504 patients. Stratifying by type of surgical variable, no significant differences were noted in outcomes between computer-navigated TKA controlling for alignment and component positioning versus conventional TKA (p = 0.63). However, significantly better outcomes were noted following computer-navigated TKA that also controlled for soft tissue balancing versus conventional TKA (mean difference 4.84, 95% Confidence Interval 1.61 to 8.07, p = 0.003). A literature review of robotic systems showed that these systems can, similarly to computer navigation, reliably improve lower leg alignment, component positioning and soft tissue balancing.
Furthermore, two studies comparing robotic-assisted with computer-navigated surgery reported superiority of robotic-assisted surgery in controlling these factors. Manually controlling all these surgical variables can be difficult for the orthopaedic surgeon. Findings in this study suggest that computer navigation or robotic assistance may help manage these multiple variables and could improve outcomes. Future studies assessing the role of soft tissue balancing in knee arthroplasty, and long-term follow-up studies assessing the role of computer-navigated and robotic-assisted knee arthroplasty, are needed.
Environment exploration and SLAM experiment research based on ROS
NASA Astrophysics Data System (ADS)
Li, Zhize; Zheng, Wei
2017-11-01
Robots need to acquire information about their surrounding environment by means of map learning. SLAM and navigation for mobile robots are developing rapidly. ROS (Robot Operating System) is widely used in the field of robotics because of its convenient code reuse and open-source nature. Numerous excellent SLAM and navigation algorithms have been ported to ROS packages. hector_slam is one of them: it can build occupancy grid maps online quickly while requiring few computational resources. These characteristics make an embedded handheld mapping system possible. Similarly, hector_navigation also performs well in the navigation field; it can carry out path planning and environment exploration by itself using only an environmental sensor. Combining hector_navigation with hector_slam can realize low-cost environment exploration, path planning and SLAM at the same time.
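At the heart of any occupancy grid mapper like hector_slam is a per-cell log-odds update along each laser beam: cells the beam passes through are observed free, and the cell at the endpoint is observed occupied. The sketch below shows that generic update, not hector_slam's actual code; the log-odds increments are assumed values.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (illustrative values)

def bresenham(x0, y0, x1, y1):
    # Integer cells traversed from the sensor cell toward the hit cell
    # (endpoint excluded), via Bresenham's line algorithm.
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x += sx
        if e2 < dx:
            err += dx; y += sy
    return cells

def update_grid(grid, origin, hit):
    # Cells along the beam are observed free; the endpoint is occupied.
    for (x, y) in bresenham(*origin, *hit):
        grid[y, x] += L_FREE
    grid[hit[1], hit[0]] += L_OCC
    return grid

def prob(grid):
    # Convert log-odds back to occupancy probabilities.
    return 1.0 / (1.0 + np.exp(-grid))
```

Accumulating increments in log-odds rather than probabilities is what keeps the update cheap enough for the low-resource, online operation the abstract highlights.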
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
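The fusion of relative sensors (integrated gyro rates) with occasional absolute fixes, as described above, can be illustrated with a one-dimensional Kalman filter on heading. This is a simplified sketch of the general predict/update idea, not the paper's EKF/UKF formulation; the noise parameters are assumptions.

```python
import math

def fuse_heading(theta, P, gyro_rate, dt, q, z_abs=None, r=None):
    # Predict: integrate the relative (gyro) rate; uncertainty grows with time.
    theta = theta + gyro_rate * dt
    P = P + q * dt
    # Update: correct with an absolute heading fix when one is available.
    if z_abs is not None:
        K = P / (P + r)
        innov = (z_abs - theta + math.pi) % (2 * math.pi) - math.pi
        theta = theta + K * innov
        P = (1 - K) * P
    return theta, P
```

Between fixes the estimate drifts with the gyro bias; each absolute observation pulls the heading back and collapses the accumulated uncertainty, which is exactly the benefit of combining the two sensor classes on a slow rover.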
Improvement of the insertion axis for cochlear implantation with a robot-based system.
Torres, Renato; Kazmitcheff, Guillaume; De Seta, Daniele; Ferrary, Evelyne; Sterkers, Olivier; Nguyen, Yann
2017-02-01
It has previously been reported that alignment of the insertion axis along the basal turn of the cochlea depends on the surgeon's experience. In this experimental study, we assessed technological assistance, such as navigation or a robot-based system, to improve the insertion axis during cochlear implantation. A preoperative cone beam CT and a mastoidectomy with a posterior tympanotomy were performed on four temporal bones. The optimal insertion axis was defined as the axis closest to the scala tympani centerline that avoids the facial nerve. A neuronavigation system, a robot assistance prototype, and software allowing semi-automated alignment of the robot were used to align an insertion tool with the optimal insertion axis. Four procedures were performed and repeated three times in each temporal bone: manual, manual navigation-assisted, robot-based navigation-assisted, and robot-based semi-automated. The angle between the optimal axis and the insertion tool axis was measured in the four procedures. The error was 8.3° ± 2.82° for the manual procedure (n = 24), 8.6° ± 2.83° for the manual navigation-assisted procedure (n = 24), 5.4° ± 3.91° for the robot-based navigation-assisted procedure (n = 24), and 3.4° ± 1.56° for the robot-based semi-automated procedure (n = 12). Higher accuracy was observed with the semi-automated robot-based technique than with the manual and manual navigation-assisted procedures (p < 0.01). Combining a navigation system with manual insertion does not improve alignment accuracy, owing to the lack of a user-friendly interface. In contrast, a semi-automated robot-based system reduces both the error and the variability of the alignment with respect to a defined optimal axis.
Neurosurgical robotic arm drilling navigation system.
Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai
2017-09-01
The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was to be drilled-through. Three kinds of experiment were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.
Robotic platform for traveling on vertical piping network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel
This invention relates generally to robotic systems and specifically to a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system enables sampling, cleaning, inspecting and removing waste around vertical pipes by providing a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.
Coordinating sensing and local navigation
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1991-01-01
Based on Navigation Templates (or NaTs), this work presents a new paradigm for local navigation which addresses the noisy and uncertain nature of sensor data. Rather than creating a new navigation plan each time the robot's perception of the world changes, the technique incorporates perceptual changes directly into the existing navigation plan. In this way, the robot's navigation plan is quickly and continuously modified, resulting in actions that remain coordinated with its changing perception of the world.
Development of a Novel Locomotion Algorithm for Snake Robot
NASA Astrophysics Data System (ADS)
Khan, Raisuddin; Masum Billah, Md; Watanabe, Mitsuru; Shafie, A. A.
2013-12-01
A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine locomotion is one of the best-known gaits for snake robots in disaster recovery missions, allowing navigation through narrow spaces. Other gaits suited to narrow spaces, such as concertina or rectilinear, become highly inefficient if the same locomotion is used in open spaces, where reduced friction makes snake movement difficult. The novel locomotion algorithm proposed here is based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. The snake robot is able to navigate in narrow spaces using this locomotion algorithm, which overcomes the narrow-space limitations of the other gaits.
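Serpentine gaits for multi-link snake robots are commonly generated from Hirose's serpenoid curve, in which each joint follows a sinusoid phase-shifted from its neighbor. The sketch below shows that standard formulation as context for the modified algorithm discussed above (it is not the paper's algorithm); the amplitude, frequency and phase-offset values are illustrative.

```python
import math

def serpenoid_angles(n_joints, t, alpha=0.6, omega=2.0, beta=0.5, gamma=0.0):
    # Joint i lags joint i-1 by a phase offset beta, so the body carries a
    # traveling wave; alpha sets amplitude, omega the temporal frequency,
    # and gamma a bias term used for steering.
    return [alpha * math.sin(omega * t + i * beta) + gamma
            for i in range(n_joints)]
```

Because each joint simply replays its neighbor's angle after a delay of beta/omega seconds, the wave propagates down the body, which is what pushes the robot forward against ground friction.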
Learning Long-Range Vision for an Offroad Robot
2008-09-01
Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems are short-range and extremely... Unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.
Spatial abstraction for autonomous robot navigation.
Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon
2015-09-01
Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.
Video. Natural Orifice Translumenal Endoscopic Surgery with a miniature in vivo surgical robot.
Lehman, Amy C; Dumpert, Jason; Wood, Nathan A; Visty, Abigail Q; Farritor, Shane M; Varnell, Brandon; Oleynikov, Dmitry
2009-07-01
The application of flexible endoscopy tools for Natural Orifice Translumenal Endoscopic Surgery (NOTES) is constrained due to limitations in dexterity, instrument insertion, navigation, visualization, and retraction. Miniature endolumenal robots can mitigate these constraints by providing a stable platform for visualization and dexterous manipulation. This video demonstrates the feasibility of using an endolumenal miniature robot to improve vision and to apply off-axis forces for task assistance in NOTES procedures. A two-armed miniature in vivo robot has been developed for NOTES. The robot is remotely controlled, has on-board cameras for guidance, and grasper and cautery end effectors for manipulation. Two basic configurations of the robot allow for flexibility during insertion and rigidity for visualization and tissue manipulation. Embedded magnets in the body of the robot and in an exterior surgical console are used for attaching the robot to the interior abdominal wall. This enables the surgeon to arbitrarily position the robot throughout a procedure. The visualization and task assistance capabilities of the miniature robot were demonstrated in a nonsurvivable NOTES procedure in a porcine model. An endoscope was used to create a transgastric incision and advance an overtube into the peritoneal cavity. The robot was then inserted through the overtube and into the peritoneal cavity using an endoscope. The surgeon successfully used the robot to explore the peritoneum and perform small-bowel dissection. This study has demonstrated the feasibility of inserting an endolumenal robot per os. Once deployed, the robot provided visualization and dexterous capabilities from multiple orientations. Further miniaturization and increased dexterity will enhance future capabilities.
Navigation and Robotics in Spinal Surgery: Where Are We Now?
Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M
2017-03-01
Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", the applications of surgical robotic technology have grown as well. While spinal robotics and navigation represent promising potential for improving modern spinal surgery, it remains paramount to demonstrate their superiority over traditional techniques prior to their assimilation among surgeons. The applications of intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure in minimally invasive surgery to which the patient, surgeon, and ancillary operating room staff are subjected. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements and a steady hand while doing so, often exploiting small working corridors and exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.
SLAM algorithm applied to robotics assistance for navigation in unknown environments
2010-01-01
Background The combination of robotic tools with assistance technology defines a scarcely explored area of applications and advantages for people with disabilities and the elderly in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners, concave and convex, of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid collisions with the environment and moving agents. Results The entire system was tested in a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI.
The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the communication between the two in real time, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation. PMID:20163735
Rationale and Roadmap for Moon Exploration
NASA Astrophysics Data System (ADS)
Foing, B. H.; ILEWG Team
We discuss the different rationales for Moon exploration. These start with areas of scientific investigation: clues to the formation and evolution of rocky planets; accretion and bombardment in the inner solar system; comparative planetology processes (tectonics, volcanism, impact cratering, volatile delivery); astrobiology records and the survival of organics; and past, present and future life. The rationale also includes the advancement of instrumentation: miniaturised remote sensing instruments; surface geophysical and geochemistry packages; instrument deployment and robotic arms, nano-rovers, sampling and drilling; sample finders and collectors. There are technologies in robotic and human exploration that drive the creativity and economic competitiveness of our industries: mecha-electronics and sensors; telecontrol, telepresence and virtual reality; regional mobility rovers; autonomy and navigation; artificially intelligent robots, complex systems, and man-machine interfaces and performance. Moon-Mars exploration can inspire solutions for sustainable global development on Earth: in-situ utilisation of resources; establishment of permanent robotic infrastructures; environmental protection aspects; life sciences laboratories; support for human exploration. We also report on the IAA Cosmic Study on Next Steps in Exploring Deep Space and other ongoing IAA Cosmic Studies and ILEWG/IMEWG activities, and we finally discuss possible roadmaps for robotic and human exploration, starting with the Moon-Mars missions of the coming decade and building effectively on joint technology developments.
Lyons, Kenneth R; Joshi, Sanjay S
2013-06-01
Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A Wi-Fi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI, as well as for completion of the telerobotic command task, are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
Development of a two wheeled self balancing robot with speech recognition and navigation algorithm
NASA Astrophysics Data System (ADS)
Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh
2016-07-01
This paper discusses the modeling, construction and navigation-algorithm development of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of the two main PID controller algorithms for the robot model; simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position; the navigation system needs to be calibrated before the navigation process starts. Almost all earlier template matching algorithms in the open literature can only trace the robot, but the proposed algorithm can also locate the positions of other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, additional features, such as speech recognition and object detection, are added. For object detection, the single-board computer Raspberry Pi is used; the system is programmed to analyze images captured via the camera, which are processed through background subtraction followed by active noise reduction.
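The template-matching localization step described above can be sketched with normalized cross-correlation; the synthetic scene, landmark patch, and search loop below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def match_template(image, template):
    """Return ((row, col), score) of the best normalized cross-correlation match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):          # slide the template over every position
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue
            score = (p * t).sum() / denom  # NCC score in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Hypothetical example: locate a small landmark pattern in a synthetic floor image.
rng = np.random.default_rng(0)
scene = rng.random((40, 40))
landmark = scene[12:20, 25:33].copy()   # the "template" cut from a known spot
pos, score = match_template(scene, landmark)
```

An exact cut-out matches itself with a score of essentially 1.0; in practice a calibration step would map pixel coordinates to enclosure coordinates.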
Composite Configuration Interventional Therapy Robot for the Microwave Ablation of Liver Tumors
NASA Astrophysics Data System (ADS)
Cao, Ying-Yu; Xue, Long; Qi, Bo-Jin; Jiang, Li-Pei; Deng, Shuang-Cheng; Liang, Ping; Liu, Jia
2017-11-01
The existing interventional therapy robots for the microwave ablation of liver tumors have poor clinical applicability owing to their large volume, low positioning speed and complex automatic navigation control. To solve these problems, a composite configuration interventional therapy robot with passive and active joints is developed. The composite configuration reduces the size of the robot while preserving a wide range of movement, and the robot can realize rapid positioning with operational safety. The cumulative positioning error is eliminated and the control complexity is reduced by decoupling the active parts. Navigation algorithms for the robot are proposed based on the solution of the inverse kinematics and on geometric analysis. A simulated clinical test method is designed for the robot, and the functions of the robot and the navigation algorithms are verified with this method. The mean navigation error is 1.488 mm, the maximum error is 2.056 mm, and the ablation needle is positioned within 10 s. The experimental results show that the designed robot can meet the clinical requirements for the microwave ablation of liver tumors. The composite configuration proposed in the development of this interventional therapy robot provides a new idea for the structural design of medical robots.
Design of a laser navigation system for the inspection robot used in substation
NASA Astrophysics Data System (ADS)
Zhu, Jing; Sun, Yanhe; Sun, Deli
2017-01-01
To address the deficiencies of the magnetic guide and RFID parking system currently used by substation inspection robots, a laser navigation system is designed; the system structure and the methods of map building and positioning are introduced. The system performance was tested in a 500 kV substation, and the results show that the repeatability of the navigation system is precise enough for the robot to fulfill its inspection tasks.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation
Masmoudi, Mohamed Slim; Masmoudi, Mohamed
2016-01-01
This paper describes the design and implementation of a trajectory-tracking controller using fuzzy logic for a mobile robot navigating indoor environments. Most previous works used two independent controllers for navigation and obstacle avoidance; the main contribution of this paper is the use of a single fuzzy controller for both navigation and obstacle avoidance. The mobile robot is equipped with a DC motor, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories are simulated using MATLAB and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation time and travelled path. PMID:27688748
Autonomous Rovers for Polar Science Campaigns
NASA Astrophysics Data System (ADS)
Lever, J. H.; Ray, L. E.; Williams, R. M.; Morlock, A. M.; Burzynski, A. M.
2012-12-01
We have developed and deployed two over-snow autonomous rovers able to conduct remote science campaigns on Polar ice sheets. Yeti is an 80-kg, four-wheel-drive (4WD) battery-powered robot with 3 - 4 hr endurance, and Cool Robot is a 60-kg 4WD solar-powered robot with unlimited endurance during Polar summers. Both robots navigate using GPS waypoint-following to execute pre-planned courses autonomously, and they can each carry or tow 20 - 160 kg instrument payloads over typically firm Polar snowfields. In 2008 - 12, we deployed Yeti to conduct autonomous ground-penetrating radar (GPR) surveys to detect hidden crevasses to help establish safe routes for overland resupply of research stations at South Pole, Antarctica, and Summit, Greenland. We also deployed Yeti with GPR at South Pole in 2011 to identify the locations of potentially hazardous buried buildings from the original 1950's-era station. Autonomous surveys remove personnel from safety risks posed during manual GPR surveys by undetected crevasses or buried buildings. Furthermore, autonomous surveys can yield higher quality and more comprehensive data than manual ones: Yeti's low ground pressure (20 kPa) allows it to cross thinly bridged crevasses or other voids without interrupting a survey, and well-defined survey grids allow repeated detection of buried voids to improve detection reliability and map their extent. To improve survey efficiency, we have automated the mapping of detected hazards, currently identified via post-survey manual review of the GPR data. Additionally, we are developing machine-learning algorithms to detect crevasses autonomously in real time, with reliability potentially higher than manual real-time detection. These algorithms will enable the rover to relay crevasse locations to a base station for near real-time mapping and decision-making. We deployed Cool Robot at Summit Station in 2005 to verify its mobility and power budget over Polar snowfields. 
Using solar power, this zero-emissions rover could travel more than 500 km per week during Polar summers and provide 100 - 200 W to power instrument payloads to help investigate the atmosphere, magnetosphere, glaciology and sub-glacial geology in Antarctica and Greenland. We are currently upgrading Cool Robot's navigation and solar-power systems and will deploy it during 2013 to map the emissions footprint around Summit Station to demonstrate its potential to execute long-endurance Polar science campaigns. These rovers could assist science traverses to chart safe routes into the interior of Antarctica and Greenland or conduct autonomous, remote science campaigns to extend spatial and temporal coverage for data collection. Our goals include 1,000 - 2,000-km summertime traverses of Antarctica and Greenland, safe navigation through 0.5-m amplitude sastrugi fields, survival in blizzards, and rover-network adaptation to research events of opportunity. We are seeking Polar scientists interested in autonomous, mobile data collection and can adapt the rovers to meet their requirements.
Determining navigability of terrain using point cloud data.
Cockrell, Stephanie; Lee, Gregory; Newman, Wyatt
2013-06-01
This paper presents an algorithm to identify features of the navigation surface in front of a wheeled robot. Recent advances in mobile robotics have brought about the development of smart wheelchairs to assist disabled people, allowing them to be more independent. These robots have a human occupant and operate in real environments where they must be able to detect hazards like holes, stairs, or obstacles. Furthermore, to ensure safe navigation, wheelchairs often need to locate and navigate on ramps. The algorithm is implemented on data from a Kinect and can effectively identify these features, increasing occupant safety and allowing for a smoother ride.
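One simple way to classify the navigation surface ahead from point-cloud data, in the spirit of the hazard and ramp detection described above, is to fit a plane by least squares and threshold its slope. The thresholds and synthetic Kinect-like patch below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an (N, 3) point cloud by least squares."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef  # (a, b, c)

def classify_surface(points, slope_thresh=0.08, ramp_thresh=0.35):
    """Label the patch as level floor, a navigable ramp, or a hazard."""
    a, b, _ = fit_plane(points)
    slope = np.hypot(a, b)          # gradient magnitude of the fitted plane
    if slope < slope_thresh:
        return "level"
    if slope < ramp_thresh:
        return "ramp"
    return "hazard"

# Synthetic patch: a gentle 10% grade ramp in front of the wheelchair.
rng = np.random.default_rng(3)
xy = rng.uniform(0, 1, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.002 * rng.standard_normal(200)  # slope + sensor noise
ramp_points = np.c_[xy, z]
```

A real system would also segment the cloud into patches and check residuals to catch steps and holes, which no single plane fit can represent.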
ERIC Educational Resources Information Center
Doty, Keith L.
1999-01-01
Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)
Web Environment for Programming and Control of a Mobile Robot in a Remote Laboratory
ERIC Educational Resources Information Center
dos Santos Lopes, Maísa Soares; Gomes, Iago Pacheco; Trindade, Roque M. P.; da Silva, Alzira F.; de C. Lima, Antonio C.
2017-01-01
Remote robotics laboratories have been successfully used for engineering education. However, few of them use mobile robots to teach computer science. This article describes a mobile robot Control and Programming Environment (CPE) and its pedagogical applications. The system comprises a remote laboratory for robotics, an online programming tool,…
Robot map building based on fuzzy-extending DSmT
NASA Astrophysics Data System (ADS)
Li, Xinde; Huang, Xinhan; Wu, Zuyu; Peng, Gang; Wang, Min; Xiong, Youlun
2007-11-01
With the extensive application of mobile robots in many different fields, map building in unknown environments has become one of the principal issues in intelligent mobile robotics. However, information acquired during map building is characterized by uncertainty, imprecision and even high conflict, especially when building grid maps from sonar sensors. In this paper, we extend DSmT with fuzzy theory by considering different fuzzy T-norm operators (the algebraic product, bounded product, Einstein product and default minimum operators) in order to develop a more general and flexible combination rule for wider application. At the same time, we apply fuzzy-extended DSmT to mobile robot map building with the help of a new self-localization method based on neighboring field appearance matching ( -NFAM), to make the new tool more robust in very complex environments. An experiment is conducted to reconstruct an indoor map with the new tool, comparing the map-building performance of the four T-norm operators while a Pioneer II mobile robot runs along the same trace. In conclusion, this study develops a new idea for extending DSmT, provides a new approach for autonomous navigation of mobile robots, and provides a human-computer interface to manage and manipulate the robot remotely.
Virtual local target method for avoiding local minimum in potential field based robot navigation.
Zou, Xi-Yong; Zhu, Jing
2003-01-01
A novel robot navigation algorithm with global path generation capability is presented. The local minimum is one of the most intractable yet most frequently encountered problems in potential field based robot navigation. By appointing appropriate virtual local targets along the journey, it can be solved effectively. The key concepts employed in this algorithm are the rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed, according to the rules, to replace the global goal temporarily. After the virtual target is reached, the robot continues its journey heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results show that it is very effective in complex obstacle environments.
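The virtual-local-target idea can be sketched on top of a standard attractive/repulsive potential field. The gains, the stagnation test, and the perpendicular escape rule below are illustrative assumptions, not the rules proposed in the paper.

```python
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 100.0, 2.0  # assumed gains and repulsive influence range

def force(pos, target, obstacles):
    """Negative gradient of the attractive-plus-repulsive potential."""
    f = K_ATT * (target - pos)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < RHO0:  # repulsion acts only within range RHO0
            f += K_REP * (1.0 / d - 1.0 / RHO0) / d**2 * (pos - obs) / d
    return f

def navigate(start, goal, obstacles, step=0.02, max_iter=4000):
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    target = goal
    for _ in range(max_iter):
        f = force(pos, target, obstacles)
        if np.linalg.norm(f) < 0.05 and np.linalg.norm(pos - goal) > 0.5:
            # Local minimum: appoint a virtual local target perpendicular
            # to the goal direction, replacing the global goal temporarily.
            to_goal = goal - pos
            offset = np.array([-to_goal[1], to_goal[0]])
            target = pos + 2.0 * offset / np.linalg.norm(offset)
            continue
        if np.linalg.norm(pos - target) < 0.3 and not np.array_equal(target, goal):
            target = goal  # virtual target reached; head for the global goal again
        move = step * f
        n = np.linalg.norm(move)
        if n > 0.1:
            move *= 0.1 / n  # cap the per-iteration displacement
        pos += move
        if np.linalg.norm(pos - goal) < 0.2:
            return pos, True
    return pos, False
```

With an obstacle placed directly between start and goal, the plain potential field stalls at the equilibrium in front of the obstacle; the virtual target pulls the robot sideways until it can resume descending toward the goal.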
Evolutionary programming-based univector field navigation method for fast mobile robots.
Kim, Y J; Kim, J H; Kwon, D S
2001-01-01
Most navigation techniques with obstacle avoidance do not consider the robot's orientation at the target position; they deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation that introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position and with obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained: one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments with a fast-moving mobile robot demonstrate the effectiveness of the proposed scheme.
Unmanned aerial systems for photogrammetry and remote sensing: A review
NASA Astrophysics Data System (ADS)
Colomina, I.; Molina, P.
2014-06-01
We discuss the evolution and state of the art of the use of Unmanned Aerial Systems (UAS) in the field of Photogrammetry and Remote Sensing (PaRS). UAS (Remotely-Piloted Aerial Systems, Unmanned Aerial Vehicles or, simply, drones) are a hot topic comprising a diverse array of aspects including technology, privacy rights, safety and regulations, and even war and peace. Modern photogrammetry and remote sensing identified the potential of UAS-sourced imagery more than thirty years ago. In the last five years, these two sister disciplines have developed technology and methods that challenge the current aeronautical regulatory framework and their own traditional acquisition and processing methods. Naivety and ingenuity have combined off-the-shelf, low-cost equipment with sophisticated computer vision, robotics and geomatic engineering. The results are cm-level resolution and accuracy products that can be generated even with cameras costing a few hundred euros. In this review article, following a brief historical background and regulatory status analysis, we review the recent unmanned aircraft, sensing, navigation, orientation and general data processing developments for UAS photogrammetry and remote sensing, with emphasis on the nano-micro-mini UAS segment.
Method of mobile robot indoor navigation by artificial landmarks with use of computer vision
NASA Astrophysics Data System (ADS)
Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.
2018-05-01
The article describes an algorithm for mobile robot indoor navigation based on visual odometry. Experimental results identifying errors in the calculated distance traveled due to wheel slip are presented. It is shown that the use of computer vision allows erroneous robot coordinates to be corrected with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of an experiment on mobile robot navigation using this control system are presented.
Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search
Song, Kai; Liu, Qi; Wang, Qi
2011-01-01
Bionic technology provides new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can localize and track the olfactory robot within 2 min. The devised multi-robot system achieves target search with a considerable success ratio and high stability. PMID:22319401
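The time-delay-estimation step used by the hearing robots can be illustrated with a plain cross-correlation between two microphone channels. The sampling rate, microphone spacing, and synthetic noise burst below are illustrative assumptions, not the authors' hardware setup.

```python
import numpy as np

FS = 16000.0    # sampling rate, Hz (assumed)
MIC_DIST = 0.2  # microphone spacing, m (assumed)
C = 343.0       # speed of sound, m/s

def estimate_delay(sig_a, sig_b):
    """Delay (in samples) of sig_b relative to sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

def bearing_from_delay(delay_samples):
    """Angle of arrival (degrees) from the inter-mic delay, far-field assumption."""
    tau = delay_samples / FS
    s = np.clip(tau * C / MIC_DIST, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic test: a white-noise burst arriving 4 samples later at mic B.
rng = np.random.default_rng(1)
src = rng.standard_normal(512)
true_delay = 4
mic_a = src
mic_b = np.concatenate([np.zeros(true_delay), src])[:512]
delay = estimate_delay(mic_a, mic_b)
```

Real systems typically use a generalized cross-correlation (e.g. GCC-PHAT) for robustness to reverberation, but the lag-of-the-peak principle is the same.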
Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1989-09-01
The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Calibration and parameter experiments were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and of the alignment between the robot and the sensor gave errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. An angular error of less than 1° and a radial error of less than 1 pixel were observed at moderate speed variations. The experimental data and the test of coordinated operation of the equipment provide an understanding of the system's characteristics, as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. Calibration of the sensor is important, since the accuracy of navigation influences the accuracy of robot motion. The sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.
Experiments on robot-assisted navigated drilling and milling of bones for pedicle screw placement.
Ortmaier, T; Weiss, H; Döbele, S; Schreiber, U
2006-12-01
This article presents experimental results for robot-assisted navigated drilling and milling for pedicle screw placement. The preliminary study was carried out in order to gain first insights into positioning accuracies and machining forces during hands-on robotic spine surgery. Additionally, the results formed the basis for the development of a new robot for surgery. A simplified anatomical model is used to derive the accuracy requirements. The experimental set-up consists of a navigation system and an impedance-controlled light-weight robot holding the surgical instrument. The navigation system is used to position the surgical instrument and to compensate for pose errors during machining. Holes are drilled in artificial bone and bovine spine. A quantitative comparison of the drill-hole diameters was achieved using a computer. The interaction forces and pose errors are discussed with respect to the chosen machining technology and control parameters. Within the technological boundaries of the experimental set-up, it is shown that the accuracy requirements can be met and that milling is superior to drilling. It is expected that robot assisted navigated surgery helps to improve the reliability of surgical procedures. Further experiments are necessary to take the whole workflow into account. Copyright 2006 John Wiley & Sons, Ltd.
Robonaut Mobile Autonomy: Initial Experiments
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.
2006-01-01
A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Navigating a Mobile Robot Across Terrain Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Howard, Ayanna; Bon, Bruce
2003-01-01
A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of terrain traversability within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes that generate the control actions. The operational strategies of an expert human driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
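A traversability rule base of the kind described can be sketched with triangular membership functions and singleton consequents. The membership breakpoints and the rule table below are illustrative assumptions, not the actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def traversability_speed(roughness, slope):
    """Infer a speed command in [0, 1] from terrain roughness and slope in [0, 1]."""
    # Antecedent memberships (linguistic values SMOOTH/ROUGH, FLAT/STEEP)
    smooth, rough = tri(roughness, -1.0, 0.0, 0.6), tri(roughness, 0.3, 1.0, 2.0)
    flat, steep = tri(slope, -1.0, 0.0, 0.6), tri(slope, 0.3, 1.0, 2.0)
    # IF-THEN rules; consequents are singleton speed values, combined by
    # a weighted average (a simple Sugeno-style defuzzification)
    rules = [
        (min(smooth, flat), 1.0),   # IF smooth AND flat THEN fast
        (min(smooth, steep), 0.4),  # IF smooth AND steep THEN slow
        (min(rough, flat), 0.4),    # IF rough AND flat THEN slow
        (min(rough, steep), 0.0),   # IF rough AND steep THEN stop
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

Between the extremes, overlapping memberships blend the rules, so the commanded speed degrades smoothly as the terrain worsens rather than switching abruptly.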
Structured Kernel Subspace Learning for Autonomous Robot Navigation.
Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai
2018-02-14
This paper considers two important problems for autonomous robot navigation in a dynamic environment: predicting pedestrian motion and controlling a robot with that prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment because of challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
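The idea of replacing the Gaussian-process kernel matrix with a low-rank symmetric PSD approximation can be sketched with a plain truncated eigendecomposition. This stands in for, and is much simpler than, the paper's nuclear-norm/l1 formulation; the RBF kernel, data, and rank below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, length=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def low_rank_psd(K, rank):
    """Best rank-r approximation of a symmetric PSD kernel matrix."""
    w, V = np.linalg.eigh(K)          # eigh returns ascending eigenvalues
    w, V = w[::-1], V[:, ::-1]        # reorder to descending
    w = np.clip(w[:rank], 0.0, None)  # keep top-r modes, enforce PSD
    return (V[:, :rank] * w) @ V[:, :rank].T

def gp_predict(X, y, X_new, K, noise=1e-2):
    """GP regression mean at X_new given training kernel matrix K."""
    alpha = np.linalg.solve(K + noise * np.eye(len(y)), y)
    return rbf_kernel(X_new, X) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0])
K = rbf_kernel(X, X)
K_r = low_rank_psd(K, rank=10)  # rank 10 instead of the full rank 60
```

Because the RBF spectrum decays quickly, a rank-10 factor reproduces the 60x60 kernel almost exactly, and predictions made through the approximate kernel remain accurate.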
On Navigation Sensor Error Correction
NASA Astrophysics Data System (ADS)
Larin, V. B.
2016-01-01
The navigation problem for the simplest wheeled robotic vehicle is solved by measuring kinematic parameters alone, without accelerometers or angular-rate sensors. It is supposed that the steerable-wheel angle sensor has a bias that must be corrected. The navigation parameters are corrected using GPS. The proposed approach regards the wheeled robot as a system with nonholonomic constraints. The performance of such a navigation system is demonstrated by way of an example.
A Fully Sensorized Cooperative Robotic System for Surgical Interventions
Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.
2012-01-01
In this research a fully sensorized cooperative robot system for the manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551
Development of autonomous grasping and navigating robot
NASA Astrophysics Data System (ADS)
Kudoh, Hiroyuki; Fujimoto, Keisuke; Nakayama, Yasuichi
2015-01-01
The ability to find and grasp target items in an unknown environment is important for working robots. We developed an autonomous navigating and grasping robot. The operations are locating a requested item, moving to where the item is placed, finding the item on a shelf or table, and picking the item up. To achieve these operations, we designed the robot with three functions: an autonomous navigation function that generates a map and a route in an unknown environment, an item-position recognition function, and a grasping function. We tested this robot in an unknown environment, where it achieved a series of operations: moving to a destination, recognizing the positions of items on a shelf, picking up an item, placing it on a cart with its hand, and returning to the starting location. The results of this experiment show the applicability of such robots to reducing manual workloads.
Two modular neuro-fuzzy system for mobile robot navigation
NASA Astrophysics Data System (ADS)
Bobyr, M. V.; Titov, V. S.; Kulabukhov, S. A.; Syryamkin, V. I.
2018-05-01
The article considers a fuzzy model for navigation of a mobile robot operating in two modes. In the first mode the mobile robot moves along a line; in the second mode it searches for a target in unknown space. The structure and schematic circuit of the four-wheeled mobile robot are presented. The article describes the movement of a mobile robot based on the two-modular neuro-fuzzy system, and gives the neuro-fuzzy inference algorithm used in this two-modular control system. The experimental model of the mobile robot and the simulation of the neuro-fuzzy algorithm used for its control are also presented.
Insect-Inspired Optical-Flow Navigation Sensors
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven
2005-01-01
Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: the concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motion through the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speed and altitude in robotic navigation. Prior systems used in experiments on navigation by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
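Optical-flow motion sensing of this kind can be illustrated by recovering a pure 2-D translation between consecutive frames with block matching. Real optical-mouse chips extract this cue in dedicated hardware; the frame size, search range, and synthetic frames below are illustrative assumptions.

```python
import numpy as np

def estimate_flow(prev, curr, max_shift=5):
    """Return the (dy, dx) shift minimizing the sum of squared differences."""
    h, w = prev.shape
    m = max_shift
    core_p = prev[m:-m, m:-m]          # central region, so shifts stay in bounds
    best, best_shift = np.inf, (0, 0)
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            patch = curr[m + dy:h - m + dy, m + dx:w - m + dx]
            ssd = ((core_p - patch) ** 2).sum()
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

# Synthetic frames: the scene texture shifts by (2, -1) pixels between frames.
rng = np.random.default_rng(2)
frame = rng.random((32, 32))
moved = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)
```

Per-pixel or per-block versions of the same idea yield a flow field rather than a single vector; dividing the recovered shift by the frame interval gives an image-plane velocity usable for speed and altitude control.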
Remote magnetic navigation to map and ablate left coronary cusp ventricular tachycardia.
Burkhardt, J David; Saliba, Walid I; Schweikert, Robert A; Cummings, Jennifer; Natale, Andrea
2006-10-01
Premature ventricular contractions (PVCs) and ventricular tachycardia may arise from the coronary cusps. Navigation, mapping, and ablation in the coronary cusps can be challenging. Remote magnetic navigation may offer an alternative to conventional manually operated catheters. We report a case of left coronary cusp ventricular tachycardia ablation using remote magnetic navigation. Right ventricular outflow tract and coronary cusp mapping, and ablation of the left coronary cusp using a remote magnetic navigation and three-dimensional (3-D) mapping system, were performed in a 28-year-old male with frequent, symptomatic PVCs and ventricular tachycardia. Successful ablation of left coronary cusp ventricular tachycardia was performed using remote magnetic navigation. Remote magnetic navigation may be used to map and ablate PVCs and ventricular tachycardia originating from the coronary cusps.
Experiments in autonomous robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, W.R.
1987-01-01
The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.
Small Body Exploration Technologies as Precursors for Interstellar Robotics
NASA Astrophysics Data System (ADS)
Noble, R. J.; Sykes, M. V.
The scientific activities undertaken to explore our Solar System will be very similar to those required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution, as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.
Small Body Exploration Technologies as Precursors for Interstellar Robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Robert (SLAC); Sykes, Mark V.
The scientific activities undertaken to explore our Solar System will be the same as those required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.
A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring
Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio
2016-01-01
This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State-of-the-art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regard to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505
Mini AERCam: A Free-Flying Robot for Space Inspection
NASA Technical Reports Server (NTRS)
Fredrickson, Steven
2001-01-01
The NASA Johnson Space Center Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a free-flying camera system for remote viewing and inspection of human spacecraft. The AERCam project team is currently developing a miniaturized version of AERCam known as Mini AERCam, a spherical nanosatellite 7.5 inches in diameter. Mini AERCam development builds on the success of AERCam Sprint, a 1997 Space Shuttle flight experiment, by integrating new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving these productivity-enhancing capabilities in a smaller package depends on aggressive component miniaturization. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion, rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for laboratory demonstration on an air-bearing table. A pilot-in-the-loop, hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides on-orbit views of the Space Shuttle and International Space Station unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by space-walking crewmembers.
Memristive device based learning for navigation in robots.
Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A
2017-11-08
Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities in robots are difficult to achieve, especially if done in real time with ultra-low energy consumption. Here, we present a novel memristive device based learning architecture for robots. Two-terminal memristive devices with resistive switching of an oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities in a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.
Miniature Robotic Spacecraft for Inspecting Other Spacecraft
NASA Technical Reports Server (NTRS)
Fredrickson, Steven; Abbott, Larry; Duran, Steve; Goode, Robert; Howard, Nathan; Jochim, David; Rickman, Steve; Straube, Tim; Studak, Bill; Wagenknecht, Jennifer;
2004-01-01
A report discusses the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), a compact robotic spacecraft intended to be released from a larger spacecraft for exterior visual inspection of the larger spacecraft. The Mini AERCam is a successor to the AERCam Sprint, a prior miniature robotic inspection spacecraft that was demonstrated in a space-shuttle flight experiment in 1997. The prototype of the Mini AERCam is a demonstration unit having approximately the form and function of a flight system. The Mini AERCam is approximately spherical with a diameter of about 7.5 in. (19 cm) and a weight of about 10 lb (4.5 kg), yet it has significant additional capabilities, relative to the 14-in. (36-cm), 35-lb (16-kg) AERCam Sprint. The Mini AERCam includes miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including two digital video cameras and a high-resolution still camera. The Mini AERCam is designed for either remote piloting or supervised autonomous operations, including station keeping and point-to-point maneuvering. The prototype has been tested on an air-bearing table and in a hardware-in-the-loop orbital simulation of the dynamics of maneuvering in proximity to the International Space Station.
NASA Technical Reports Server (NTRS)
Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.
2001-01-01
This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.
An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry
NASA Astrophysics Data System (ADS)
Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro
This paper proposes a lightweight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning by dead reckoning alone. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive, utilizing a particle filter on a 2D grid map generated by projecting 3D points onto the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
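The map-based localization described above can be sketched as a minimal particle filter. The 1-D corridor, landmark map, sensor model and particle count below are invented assumptions for illustration, not the paper's implementation.

```python
import math
import random

# Minimal particle-filter localization sketch: particles are weighted by how
# well they explain range measurements to known landmarks, then resampled.
random.seed(0)
LANDMARKS = [2.0, 7.0]   # known map: landmark positions along a 10 m corridor
TRUE_POS = 5.0           # the robot's actual (hidden) position

def ranges_from(x):
    """Ranges from position x to every landmark in the map."""
    return [abs(x - lm) for lm in LANDMARKS]

def likelihood(z, expected, sigma=0.5):
    """Unnormalized Gaussian likelihood of a range measurement vector."""
    err = sum((zi - ei) ** 2 for zi, ei in zip(z, expected))
    return math.exp(-err / (2 * sigma ** 2))

# 1) Initialize particles uniformly along the corridor.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]

# 2) Weight each particle by the measurement likelihood (noiseless z here).
z = ranges_from(TRUE_POS)
weights = [likelihood(z, ranges_from(p)) for p in particles]

# 3) Resample in proportion to weight and estimate by the particle mean.
particles = random.choices(particles, weights=weights, k=500)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))
```

The paper's version runs the same predict-weight-resample cycle on a 2D grid map with laser scans in place of the toy range sensor.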
Motion Trajectories for Wide-area Surveying with a Rover-based Distributed Spectrometer
NASA Technical Reports Server (NTRS)
Tunstel, Edward; Anderson, Gary; Wilson, Edmond
2006-01-01
A mobile ground survey application that employs remote sensing as a primary means of area coverage is highlighted. It is distinguished from mobile robotic area coverage problems that employ contact or proximity-based sensing. The focus is on a specific concept for performing mobile surveys in search of biogenic gases on planetary surfaces using a distributed spectrometer, a rover-based instrument designed for wide measurement coverage of promising search areas. Navigation algorithms for executing circular and spiral survey trajectories are presented for wide-area distributed spectroscopy and evaluated based on area covered and distance traveled.
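An outward spiral survey trajectory of the kind evaluated above can be generated, for example, as an Archimedean spiral whose successive loops are a fixed pitch apart; the function and parameter names below are illustrative, not taken from the paper.

```python
import math

# Archimedean spiral waypoints, r = pitch * theta / (2*pi): after each full
# turn the path moves one `pitch` farther from the origin, giving uniform
# coverage spacing for a survey instrument.

def spiral_waypoints(pitch, turns, points_per_turn=12):
    """Waypoints (x, y) of a spiral whose loops are `pitch` meters apart."""
    pts = []
    for i in range(turns * points_per_turn + 1):
        theta = 2 * math.pi * i / points_per_turn
        r = pitch * theta / (2 * math.pi)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

wps = spiral_waypoints(pitch=2.0, turns=3)
print(wps[0], wps[12], wps[24])   # start, after one turn, after two turns
```

Choosing the pitch equal to the instrument's measurement swath keeps adjacent loops just touching, covering the area with minimal overlap.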
A biologically inspired meta-control navigation system for the Psikharpax rat robot.
Caluwaerts, K; Staffa, M; N'Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M
2012-06-01
A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations. But the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which was the optimal strategy in each situation, by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics.
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies, including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
Modular Countermine Payload for Small Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman Herman; Doug Few; Roelof Versteeg
2010-04-01
Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
Modular countermine payload for small robots
NASA Astrophysics Data System (ADS)
Herman, Herman; Few, Doug; Versteeg, Roelof; Valois, Jean-Sebastien; McMahill, Jeff; Licitra, Michael; Henciak, Edward
2010-04-01
Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
Song, Shuang; Zhang, Changchun; Liu, Li; Meng, Max Q-H
2018-02-01
A flexible surgical robot can work in confined and complex environments, which makes it a good option for minimally invasive surgery. In order to utilize flexible manipulators in complicated and constrained surgical environments, it is of great significance to monitor the position and shape of the curvilinear manipulator in real time during the procedures. In this paper, we propose a magnetic tracking-based planar shape sensing and navigation system for flexible surgical robots in transoral surgery. The system can provide the real-time tip position and shape information of the robot during the operation. We use a wire-driven flexible robot to serve as the manipulator. It has three degrees of freedom. A permanent magnet is mounted at the distal end of the robot. Its magnetic field can be sensed with a magnetic sensor array. Therefore, position and orientation of the tip can be estimated utilizing a tracking method. A shape sensing algorithm is then carried out to estimate the real-time shape based on the tip pose. With the tip pose and shape displayed in the 3D reconstructed CT model, navigation can be achieved. Using the proposed system, we carried out planar navigation experiments on a skull phantom to touch three different target positions under the navigation of the skull display interface. During the experiments, the real-time shape has been well monitored and distance errors between the robot tip and the targets in the skull have been recorded. The mean navigation error is [Formula: see text] mm, while the maximum error is 3.2 mm. The proposed method provides the advantages that no sensors need to be mounted on the robot and there is no line-of-sight problem. Experimental results verified the feasibility of the proposed method.
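Permanent-magnet tracking of the kind described above typically rests on a point-dipole field model relating the magnet's pose to the field seen by each sensor. The sketch below evaluates that standard model; the abstract does not state the authors' exact formulation, so this is an illustrative assumption.

```python
import math

# Point-dipole model commonly used in permanent-magnet tracking:
#   B(r) = (mu0 / (4*pi*|r|^3)) * (3 (m . r_hat) r_hat - m)
# A tracker inverts this relation from multiple sensor readings to recover
# the magnet's position and orientation (and hence the robot tip pose).

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Flux density (T) at displacement r (m) from a dipole moment m (A*m^2)."""
    rx, ry, rz = r
    rn = math.sqrt(rx * rx + ry * ry + rz * rz)
    rhat = (rx / rn, ry / rn, rz / rn)
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    k = MU0 / (4 * math.pi * rn ** 3)
    return tuple(k * (3 * mdotr * rh - mi) for rh, mi in zip(rhat, m))

# Axial field of a 1 A*m^2 magnet at 10 cm: B = mu0 * 2m / (4*pi*r^3).
B = dipole_field((0.0, 0.0, 1.0), (0.0, 0.0, 0.1))
print(B)
```

Because the field falls off as 1/r^3, sensor-array geometry and magnet strength jointly bound the usable tracking workspace.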
Electromagnetic navigational bronchoscopy and robotic-assisted thoracic surgery.
Christie, Sara
2014-06-01
With the use of electromagnetic navigational bronchoscopy and robotics, lung lesions can be diagnosed and resected during one surgical procedure. Global positioning system technology allows surgeons to identify and mark a thoracic tumor, and then robotics technology allows them to perform minimally invasive resection and cancer staging procedures. Nurses on the perioperative robotics team must consider the logistics of providing safe and competent care when performing combined procedures during one surgical encounter. Instrumentation, OR organization and room setup, and patient positioning are important factors to consider to complete the procedure systematically and efficiently. This revolutionary concept of combining navigational bronchoscopy with robotics requires a team of dedicated nurses to facilitate the sequence of events essential for providing optimal patient outcomes in highly advanced surgical procedures. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Embedded mobile farm robot for identification of diseased plants
NASA Astrophysics Data System (ADS)
Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh
2013-07-01
This paper presents the development of a mobile robot used in farms for identification of diseased plants. It puts forth two of the major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, robot mechanical assembly, camera and infrared sensors has been used. A Mini2440 board running an embedded Linux operating system has been used as the controller.
Remote Learning for the Manipulation and Control of Robotic Cells
ERIC Educational Resources Information Center
Goldstain, Ofir; Ben-Gal, Irad; Bukchin, Yossi
2007-01-01
This work proposes an approach to remote learning of robotic cells based on internet and simulation tools. The proposed approach, which integrates remote-learning and tele-operation into a generic scheme, is designed to enable students and developers to set up and manipulate a robotic cell remotely. Its implementation is based on a dedicated…
Mamdani Fuzzy System for Indoor Autonomous Mobile Robot
NASA Astrophysics Data System (ADS)
Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.
2011-06-01
Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
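A minimal single-input Mamdani controller can illustrate the inference-and-defuzzification cycle such a system performs. The membership functions, rules, and ranges below are invented for illustration, not taken from the paper.

```python
# Minimal Mamdani fuzzy sketch: speed command from obstacle distance.
# Rule 1: IF distance is NEAR THEN speed is SLOW
# Rule 2: IF distance is FAR  THEN speed is FAST

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(distance):
    near = tri(distance, -0.1, 0.0, 1.0)    # NEAR peaks at 0 m
    far = tri(distance, 0.0, 1.0, 1.1)      # FAR peaks at 1 m

    # Mamdani inference: clip each output set by its rule strength,
    # aggregate with max, then defuzzify by centroid over a grid.
    num = den = 0.0
    for i in range(101):
        v = i / 100.0                        # candidate speed (m/s)
        slow = min(near, tri(v, -0.1, 0.0, 1.0))
        fast = min(far, tri(v, 0.0, 1.0, 1.1))
        mu = max(slow, fast)
        num += mu * v
        den += mu
    return num / den if den else 0.0

print(round(fuzzy_speed(0.1), 2), round(fuzzy_speed(0.9), 2))
```

A real navigation controller adds more inputs (e.g. heading error) and a rule for each input combination, but the clip-aggregate-centroid pipeline is the same.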
INL Autonomous Navigation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.
Minimally invasive abdominal surgery: lux et veritas past, present, and future.
Harrell, Andrew G; Heniford, B Todd
2005-08-01
Laparoscopic surgery has developed out of multiple technology innovations and the desire to see beyond the confines of the human body. As the instrumentation became more advanced, the application of this technique followed. By revisiting the historical developments that now define laparoscopic surgery, we can possibly foresee its future. A Medline search was performed of all the English-language literature. Further references were obtained through cross-referencing the bibliography cited in each work and using books from the authors' collection. Minimally invasive surgery is becoming important in almost every facet of abdominal surgery. Optical improvements, miniaturization, and robotic technology continue to define the frontier of minimally invasive surgery. Endoluminal resection surgery, image-guided surgical navigation, and remotely controlled robotics are not far from becoming reality. These and advances yet to be described will change laparoscopic surgery just as the electric light bulb did over 100 years ago.
Survivability design for a hybrid underwater vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Biao; Wu, Chao; Li, Xiang
A novel hybrid underwater robotic vehicle (HROV) capable of working to the full ocean depth has been developed. The battery-powered vehicle operates in two modes: as an untethered autonomous vehicle in autonomous underwater vehicle (AUV) mode, and under remote control, connected to the surface vessel by a lightweight fiber-optic tether, in remotely operated vehicle (ROV) mode. Considering the hazardous underwater environment at the limiting depth and the hybrid operating modes, survivability has been placed on an equal level with the other design attributes of the HROV since the beginning of the project. This paper reports the survivability design elements for the HROV, including the basic vehicle design of integrated navigation and integrated communication, emergency recovery strategy, distributed architecture, redundant bus, dual battery package, emergency jettison system and self-repairing control system.
Navigable points estimation for mobile robots using binary image skeletonization
NASA Astrophysics Data System (ADS)
Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman
2017-02-01
This paper describes the use of image skeletonization for the estimation of all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path with standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking into account the robot size, and create a map of navigable points, in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. It is also shown how some of the algorithm's parameters can be changed to vary the final number of resultant key points. The results shown here were obtained by applying different kinds of digital image processing algorithms on static scenes.
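A simplified 1-D analogue of the navigable-point idea (the paper itself uses 2-D skeletonization) is to keep the midpoint of every free gap wide enough for the robot along one row of the binary obstacle image. The map and robot width below are invented for illustration.

```python
# 1-D analogue of navigable-point extraction: along one row of a binary
# obstacle image (1 = obstacle, 0 = free), keep the midpoint of every free
# gap at least `robot_width` cells wide. The 2-D skeleton generalizes this
# "middle of the free space" notion to arbitrary obstacle shapes.

def navigable_midpoints(row, robot_width):
    points, start = [], None
    for i, cell in enumerate(row + [1]):     # sentinel closes the last gap
        if cell == 0 and start is None:
            start = i                        # a free gap begins
        elif cell == 1 and start is not None:
            if i - start >= robot_width:     # gap is wide enough
                points.append((start + i - 1) // 2)
            start = None
    return points

row = [1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1]
print(navigable_midpoints(row, robot_width=2))
```

Note how the single-cell gap is discarded because the robot cannot fit through it; the same robot-size filter is what prunes unusable skeleton branches in 2-D.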
NASA Technical Reports Server (NTRS)
Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.
2012-01-01
A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package "Argon" is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, N.S.V.; Kareti, S.; Shi, Weimin
A formal framework for navigating a robot in a geometric terrain populated by an unknown set of obstacles is considered. Here the terrain model is not a priori known, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to the non-heuristic algorithms which can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider the algorithms that are shown to navigate correctly without much consideration given to performance parameters such as distance traversed, etc. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or the ratio of the distance traversed to the shortest path length (computed as if the terrain model were known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata, etc.
Navigation of robotic system using cricket motes
NASA Astrophysics Data System (ADS)
Patil, Yogendra J.; Baine, Nicholas A.; Rattan, Kuldip S.
2011-06-01
This paper presents a novel algorithm for self-mapping of the cricket motes that can be used for indoor navigation of autonomous robotic systems. The cricket system is a wireless sensor network that can provide indoor localization service to its user via acoustic ranging techniques. The behavior of the ultrasonic transducer on the cricket mote is studied and the regions where satisfactory distance measurements can be obtained are recorded. Placing the motes in these regions results in fine-grained mapping of the cricket motes. Trilateration is used to obtain a rigid coordinate system, but is insufficient if the network is to be used for navigation. A modified SLAM algorithm is applied to overcome the shortcomings of trilateration. Finally, the self-mapped cricket motes can be used for navigation of autonomous robotic systems in an indoor location.
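The trilateration step mentioned above can be sketched by linearizing the three circle equations against the first anchor, which reduces position recovery to a 2x2 linear solve. The anchor layout and ranges below are invented for illustration.

```python
import math

# 2-D trilateration: recover a position from ranges to three known anchors
# (as cricket motes provide). Subtracting the first circle equation from the
# other two cancels the quadratic terms, leaving a linear system.

def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21        # nonzero iff anchors not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 2.0)
dists = [math.dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, dists))
```

With noisy acoustic ranges the same linear system is solved in a least-squares sense over more than three anchors, which is one reason mote placement (and hence self-mapping quality) matters.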
An Analysis of Navigation Algorithms for Smartphones Using J2ME
NASA Astrophysics Data System (ADS)
Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.
Embedded systems are considered one of the areas with the greatest potential for future innovation. Two embedded fields that will almost certainly take a primary role in future innovations are mobile robotics and mobile computing. Mobile robots and smartphones are growing in number and functionality, becoming a presence in our daily life. In this paper, we study the current feasibility of using a smartphone to execute navigation algorithms. As a test case, we use a smartphone to control an autonomous mobile robot. We tested three navigation problems: mapping, localization and path planning. For each of these problems, an algorithm was chosen, developed in J2ME, and tested in the field. The results show the current capacity of mobile Java for executing computationally demanding algorithms and reveal the real possibility of using smartphones for autonomous navigation.
Cloud-based robot remote control system for smart factory
NASA Astrophysics Data System (ADS)
Wu, Zhiming; Li, Lianzhong; Xu, Yang; Zhai, Jingmei
2015-12-01
With the development of internet technologies and the wide application of robots, there is a growing trend toward integration between networks and robots. A cloud-based robot remote control system over networks for the smart factory is proposed, which enables remote users to control robots and thereby realize intelligent production. To achieve this, a three-layer system architecture is designed, including a user layer, a service layer and a physical layer. Remote control applications running on the cloud server are developed on Microsoft Azure. Moreover, DIV+CSS technologies are used to design the human-machine interface to lower maintenance costs and improve development efficiency. Finally, an experiment is implemented to verify the feasibility of the approach.
A remote assessment system with a vision robot and wearable sensors.
Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun
2004-01-01
This paper describes a remote rehabilitation assessment system, under ongoing research, that uses a 6-degree-of-freedom binocular vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer is mounted on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet over a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. Preliminary results show that the smart devices, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.
Navigation of a care and welfare robot
NASA Astrophysics Data System (ADS)
Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu
2005-12-01
In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions, so it needs high controllability and the capability to make precise movements. Our robot can recognize a line by using cameras and can be guided in the reference directions by comparison with the original cell map information; furthermore, it moves safely along an original center-line established permanently in the building. Communication between the robot and a centralized control center enables the robot's autonomous movement through the hospital. Through a navigation system using cell map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.
Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †
Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
2017-01-01
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624
Telepresence in neurosurgery: the integrated remote neurosurgical system.
Kassell, N F; Downs, J H; Graves, B S
1997-01-01
This paper describes the Integrated Remote Neurosurgical System (IRNS), a remotely operated neurosurgical microscope with high-speed communications and a surgeon-accessible user interface. The IRNS will allow high-quality bidirectional mentoring in the neurosurgical suite. The research goals of this effort are twofold: to develop a clinical system allowing a remote neurosurgeon to lend expertise to the OR-based neurosurgical team, and to provide an integrated training environment. The IRNS incorporates a generic microscope/transport model, called SuMIT (Surgical Manipulator Interface Translator). Our system is currently under test using the Zeiss MKM surgical transport. A SuMIT interface is also being constructed for the Robotics Research 1607. The IRNS Remote Planning and Navigation Workstation incorporates surgical planning capabilities and real-time, 30 fps video from the microscope and an overhead video camera. The remote workstation includes a force-reflecting handcontroller which gives the remote surgeon an intuitive way to position the microscope head. Bidirectional audio, video whiteboarding, and image archiving are also supported by the remote workstation. A simulation mode permits pre-surgical simulation, post-surgical critique, and training for surgeons without access to an actual microscope transport system. The components of the IRNS are integrated using ATM switching to provide low-latency data transfer. This research, along with the more sophisticated systems that will follow, will serve as a foundation and test-bed for extending the surgeon's skills without regard to time zone or geographic boundaries.
Dynamic multisensor fusion for mobile robot navigation in an indoor environment
NASA Astrophysics Data System (ADS)
Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.
2001-10-01
This study is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining sonar, a CCD camera and IR sensors, for map building and navigation, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We describe the robot system architecture designed and implemented in this study and give a short review of existing techniques, focusing on the main results relevant to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We first address the general principles of the navigation and guidance architecture, then the detailed functions of environment recognition and map updating, obstacle detection and motion assessment, together with the first results from the simulation runs. We conclude by discussing possible future extensions of the project.
NASA Astrophysics Data System (ADS)
Hsu, Roy CHaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting
2013-01-01
The main purpose of this paper is to use a machine learning method and the Kinect body-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot via Bluetooth. The experimental results show that the designed mobile robot remote control system achieves, on average, more than 96% accurate identification of 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
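The gesture-classification step can be illustrated with a minimal back-propagation network; the synthetic "gesture feature" clusters, network size and hyperparameters below are illustrative assumptions, not the paper's Kinect pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for skeleton-derived gesture features: 3 gesture classes,
# each a noisy cluster in a 4-D feature space (assumed data, not the paper's).
centers = 3.0 * rng.normal(size=(3, 4))
X = np.vstack([c + 0.3 * rng.normal(size=(50, 4)) for c in centers])
y = np.repeat(np.arange(3), 50)

# One hidden layer trained with plain back-propagation (gradient descent
# on the softmax cross-entropy loss).
W1 = 0.1 * rng.normal(size=(4, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.normal(size=(16, 3)); b2 = np.zeros(3)
lr, onehot = 0.5, np.eye(3)[y]

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

for _ in range(500):
    h, p = forward(X)
    g = (p - onehot) / len(X)            # dL/dlogits for cross-entropy
    gh = (g @ W2.T) * (1.0 - h * h)      # back-propagate through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(axis=0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

acc = (forward(X)[1].argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A real system would extract the feature vectors from Kinect joint coordinates and hold out a test set; the sketch only shows the back-propagation mechanics.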
Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration
NASA Astrophysics Data System (ADS)
Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.
2017-09-01
One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling, also known as the mobile robot path planning problem. In this paper, we examine the performance of a robust searching algorithm that relies on the harmonic potentials of the environment to generate smooth and safe paths for mobile robot navigation in a static, known indoor environment. The harmonic potentials are discretized using the Laplacian operator to form a system of algebraic approximation equations. This algebraic linear system is computed via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is then compared and analyzed against existing algorithms in terms of the number of iterations and execution time. The results show that the proposed algorithm performs better than the existing methods.
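The pipeline this abstract describes (discretize Laplace's equation, relax iteratively, then descend the harmonic potential) can be sketched as follows; plain point-wise SOR stands in for the 4-EGAOR solver, and the grid and obstacle layout are illustrative.

```python
import numpy as np

# Harmonic potential field on a small grid: walls and obstacles are held at
# potential 1, the goal at 0; interior values relax toward the discrete
# Laplace solution. Point-wise SOR is used here in place of the paper's
# 4-point EGAOR variant.
N = 20
U = np.ones((N, N))
fixed = np.zeros((N, N), dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # walls
fixed[8:12, 5:15] = True                                        # obstacle block
goal = (17, 17)
U[goal] = 0.0
fixed[goal] = True

omega = 1.8  # over-relaxation factor
for _ in range(2000):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            if fixed[i, j]:
                continue
            gs = 0.25 * (U[i-1, j] + U[i+1, j] + U[i, j-1] + U[i, j+1])
            U[i, j] += omega * (gs - U[i, j])

# Harmonic potentials have no spurious local minima, so greedy descent
# yields a smooth, collision-free path to the goal.
pos, path = (2, 2), [(2, 2)]
for _ in range(300):
    if pos == goal:
        break
    i, j = pos
    pos = min([(i-1, j), (i+1, j), (i, j-1), (i, j+1)], key=lambda q: U[q])
    path.append(pos)
print("reached:", path[-1])
```

The absence-of-local-minima property is what makes the quality of the iterative solver (here SOR, in the paper 4-EGAOR) the main cost driver.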
Neural Network Based Sensory Fusion for Landmark Detection
NASA Technical Reports Server (NTRS)
Kumbla, Kishan -K.; Akbarzadeh, Mohammad R.
1997-01-01
NASA is planning to send numerous unmanned planetary missions to explore space. This requires autonomous robotic vehicles that can navigate in an unstructured, unknown, and uncertain environment. Landmark-based navigation is a new area of research which differs from traditional goal-oriented navigation, where a mobile robot starts from an initial point and reaches a destination in accordance with a pre-planned path. Landmark-based navigation has the advantage of allowing the robot to find its way without communication with the mission control station and without exact knowledge of its coordinates. Current algorithms based on landmark navigation, however, pose several constraints. First, they require large memories to store the images. Second, the task of comparing the images using traditional methods is computationally intensive, and consequently real-time implementation is difficult. The method proposed here consists of three stages. The first stage utilizes a heuristic-based algorithm to identify significant objects. The second stage utilizes a neural network (NN) to efficiently classify images of the identified objects. The third stage combines distance information with the classification results of the neural networks for efficient and intelligent navigation.
Assessment of Navigation Using a Hybrid Cognitive/Metric World Model
2015-01-01
One goal of the US Army Research Laboratory's Robotic Collaborative Technology Alliance is to develop a cognitive architecture that would allow a robot to operate on both the semantic and metric levels. As such, both symbolic and metric information would be interpreted within
DEMONSTRATION OF AUTONOMOUS AIR MONITORING THROUGH ROBOTICS
This project included modifying an existing teleoperated robot to include autonomous navigation, large object avoidance, and air monitoring and demonstrating that prototype robot system in indoor and outdoor environments. An existing teleoperated "Surveyor" robot developed by ARD...
Object Detection Techniques Applied on Mobile Robot Semantic Navigation
Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto
2014-01-01
The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a great need for algorithms that provide robots with these sorts of skills, from locating the objects needed to accomplish a task to treating these objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. It aims to use current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both are combined to overcome their respective limitations. Finally, the code is tested on a real robot to prove its accuracy and efficiency. PMID:24732101
Smith, James Andrew; Jivraj, Jamil; Wong, Ronnie; Yang, Victor
2016-04-01
This review provides an examination of contemporary neurosurgical robots and the developments that led to them. Improvements in localization, microsurgery and minimally invasive surgery have made robotic neurosurgery viable, as seen by the success of platforms such as the CyberKnife and neuromate. Neurosurgical robots can now perform specific surgical tasks such as skull-base drilling and craniotomies, as well as pedicle screw and cochlear electrode insertions. Growth trends in neurosurgical robotics are likely to continue but may be tempered by concerns over recent surgical robot recalls, commercially-driven surgeon training, and studies that show operational costs for surgical robotic procedures are often higher than traditional surgical methods. We point out that addressing performance issues related to navigation-related registration is an active area of research and will aid in improving overall robot neurosurgery performance and associated costs.
On learning navigation behaviors for small mobile robots with reservoir computing architectures.
Antonelo, Eric Aislan; Schrauwen, Benjamin
2015-04-01
This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.
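The fixed-reservoir/trained-readout idea described in this abstract can be illustrated with a minimal echo state network sketch; the delayed-input task, reservoir size and hyperparameters below are illustrative assumptions, not the authors' robot setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Echo state network sketch: the recurrent reservoir is random and fixed;
# only the linear readout is trained (here by ridge regression). A toy
# task -- reproducing a delayed copy of a 1-D input -- stands in for the
# sensory-motor sequences of the paper.
n_res, T, delay = 100, 1000, 3
u = rng.uniform(-1, 1, size=T)
target = np.roll(u, delay)  # target[t] = u[t - delay] (wraps at the start)

W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

washout = 100  # discard transient (and the wrapped target samples)
S, ytr = states[washout:], target[washout:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ ytr)

err = np.sqrt(np.mean((S @ W_out - ytr) ** 2))
print(f"readout RMSE: {err:.3f}")
```

Because only `W_out` is trained, adding another behavior amounts to fitting another linear readout on the same reservoir states, which is the economy the RC framework exploits.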
A Behavior-Based Strategy for Single and Multi-Robot Autonomous Exploration
Cepeda, Jesus S.; Chaimowicz, Luiz; Soto, Rogelio; Gordillo, José L.; Alanís-Reyes, Edén A.; Carrillo-Arce, Luis C.
2012-01-01
In this paper, we consider the problem of autonomous exploration of unknown environments with single and multiple robots. This is a challenging task, with several potential applications. We propose a simple yet effective approach that combines a behavior-based navigation with an efficient data structure to store previously visited regions. This allows robots to safely navigate, disperse and efficiently explore the environment. A series of experiments performed using a realistic robotic simulator and a real testbed scenario demonstrate that our technique effectively distributes the robots over the environment and allows them to quickly accomplish their mission in large open spaces, narrow cluttered environments, dead-end corridors, as well as rooms with minimum exits.
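A minimal sketch of this abstract's pairing of a reactive behavior with a store of previously visited regions; the grid world, obstacle layout and behavior arbitration below are illustrative assumptions, not the authors' implementation.

```python
import random

# Behavior-based exploration: a "disperse into unexplored cells" behavior
# backed by a simple visited-cell store, with random wandering as fallback.
random.seed(7)
W = H = 10
obstacles = {(4, y) for y in range(2, 8)}   # a wall to navigate around
free = {(x, y) for x in range(W) for y in range(H)} - obstacles

def neighbors(c):
    x, y = c
    return [n for n in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)] if n in free]

pos, visited = (0, 0), {(0, 0)}
for _ in range(5000):                       # bounded number of steps
    fresh = [n for n in neighbors(pos) if n not in visited]
    # Behavior 1: move into an unexplored neighbor when one exists;
    # Behavior 2: otherwise wander, which lets the robot escape dead ends.
    pos = random.choice(fresh) if fresh else random.choice(neighbors(pos))
    visited.add(pos)
    if visited == free:
        break

print(f"visited {len(visited)} of {len(free)} free cells")
```

With multiple robots, each would keep (or share) such a visited store, which is what drives the dispersion reported in the abstract.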
High-frequency imaging radar for robotic navigation and situational awareness
NASA Astrophysics Data System (ADS)
Thomas, David J.; Luo, Changan; Knox, Robert
2011-05-01
With high-frequency radar components increasingly available, imaging radar for mobile robotic applications is now practical. Navigation, obstacle detection and avoidance (ODOA), situational awareness and safety applications can be supported in small, lightweight packages. Radar has the additional advantage of being able to sense through aerosols, smoke and dust that can be difficult for many optical systems. The ability to directly measure the range rate of an object is also an advantage of radar. This paper explores the applicability of high-frequency imaging radar for mobile robotics and examines a W-band 360-degree imaging radar prototype. Indoor and outdoor performance data are analyzed and evaluated for applicability to navigation and situational awareness.
IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.
Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter
2017-01-01
IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table and the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a robotic platform, programmed via a smartphone, that can navigate around a typical home or care environment, avoiding obstacles and positioning itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance and object-based sonar navigation.
Telepresence system development for application to the control of remote robotic systems
NASA Technical Reports Server (NTRS)
Crane, Carl D., III; Duffy, Joseph; Vora, Rajul; Chiang, Shih-Chien
1989-01-01
Recent developments of techniques which assist an operator in the control of remote robotic systems are described. In particular, applications are aimed at two specific scenarios: the control of remote robot manipulators, and motion planning for remote transporter vehicles. Common to both applications is the use of realistic computer graphics images which provide the operator with pertinent information. The specific system developments for several recently completed and ongoing telepresence research projects are described.
Multiple-Agent Air/Ground Autonomous Exploration Systems
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.
2007-01-01
Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.
A Human Factors Analysis of Proactive Support in Human-Robot Teaming
2015-09-28
A human teammate is remotely controlling a robot while working with an intelligent robot teammate 'Mary'. Our main result shows that the subjects generally... Presented at the IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, September 28, 2015.
NASA Technical Reports Server (NTRS)
Cepollina, Frank J. (Inventor); Corbo, James E. (Inventor); Burns, Richard D. (Inventor); Jedhrich, Nicholas M. (Inventor); Holz, Jill M. (Inventor)
2009-01-01
This invention is a method and supporting apparatus for autonomously capturing, servicing and de-orbiting a free-flying spacecraft, such as a satellite, using robotics. The capture of the spacecraft includes the steps of optically seeking and ranging the satellite using LIDAR, matching tumble rates, and rendezvousing and berthing with the satellite. Servicing of the spacecraft may be done using supervised autonomy, which allows a robot to execute a sequence of instructions without intervention from a remote human-occupied location. These instructions may be packaged at the remote station in a script and uplinked to the robot for execution upon a remote command giving authority to proceed. Alternately, the instructions may be generated by Artificial Intelligence (AI) logic onboard the robot. In either case, the remote operator maintains the ability to abort an instruction or script at any time, as well as the ability to intervene using manual override to teleoperate the robot.
BatSLAM: Simultaneous localization and mapping using biomimetic sonar.
Steckel, Jan; Peremans, Herbert
2013-01-01
We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building.
Tandem robot control system and method for controlling mobile robots in tandem
Hayward, David R.; Buttz, James H.; Shirey, David L.
2002-01-01
A control system for controlling mobile robots provides a way to control mobile robots, connected in tandem with coupling devices, to navigate across difficult terrain or in closed spaces. The mobile robots can be controlled cooperatively as a coupled system in linked mode or controlled individually as separate robots.
A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment
Gao, Junyao; Zhao, Fangzhou; Liu, Yi
2017-01-01
This paper introduces a search-and-rescue robot system used for remote sensing of the underground coal mine environment, composed of an operating control unit and two mobile robots with explosion-proof and waterproof functions. This robot system is designed to observe and collect information about the coal mine environment through remote control; thus, the system can be regarded as a multifunction sensor that realizes remote sensing. When the robot system detects danger, it sends out signals to warn rescuers to keep away. Each robot carries two gas sensors, two cameras, a two-way audio link, a 1 km-long fiber-optic cable for communication and a mechanical explosion-proof manipulator. In particular, the manipulator is a novel explosion-proof manipulator for clearing obstacles, which has 3 degrees of freedom but is driven by only two motors. Furthermore, the two robots can communicate in series over 2 km with the operating control unit. The development of this robot system may provide a reference for developing future search-and-rescue systems. PMID:29065560
Visual Navigation Constructing and Utilizing Simple Maps of an Indoor Environment
1989-03-01
places are connected to each other, so that the robot may plan routes. On a more advanced level, navigation may require an understanding of the meaning...two vertical lines, suitably separated from each other, through which it tries to lead the robot...the observer will have no trouble in determining where the wall is. A robot, with far less processing power than humans have, might be able to determine
Conference on Space and Military Applications of Automation and Robotics
NASA Technical Reports Server (NTRS)
1988-01-01
Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for assisting a robotic wheelchair's navigation is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface offers two navigation modes: non-autonomous and autonomous. Non-autonomous driving of the robotic wheelchair is performed by means of a hand joystick, which directs the motion of the vehicle within the environment. Autonomous driving is performed when the user of the wheelchair has to turn (90, -90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results of the interface are also shown in this work.
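The in-place turns described above can be illustrated with generic differential-drive kinematics; the track width and wheel speed below are assumed values, not those of the actual wheelchair, and this is not the authors' maneuverability algorithm.

```python
import math

# Turn-in-place for a differential-drive wheelchair: opposite wheel speeds
# rotate the chassis about its center. TRACK and WHEEL_SPEED are
# illustrative values.
TRACK = 0.56        # distance between drive wheels [m]
WHEEL_SPEED = 0.20  # commanded wheel rim speed [m/s]

def turn_in_place(angle_deg):
    """Return (v_left, v_right, duration) for an in-place turn.

    Angular rate: omega = (v_right - v_left) / TRACK; with v_right = -v_left
    the chassis rotates about its midpoint.
    """
    omega = 2.0 * WHEEL_SPEED / TRACK            # [rad/s]
    duration = math.radians(abs(angle_deg)) / omega
    sign = 1.0 if angle_deg > 0 else -1.0        # + is counter-clockwise
    return -sign * WHEEL_SPEED, sign * WHEEL_SPEED, duration

vl, vr, t = turn_in_place(90)
print(round(t, 2))  # → 2.2 (seconds for a quarter turn)
```

In the confined spaces the abstract targets, the SLAM pose estimate would close the loop on such open-loop commands, correcting for wheel slip.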
Dynamic analysis of space robot remote control system
NASA Astrophysics Data System (ADS)
Kulakov, Felix; Alferov, Gennady; Sokolov, Boris; Gorovenko, Polina; Sharlay, Artem
2018-05-01
The article presents an analysis of the construction of a two-stage remote control for space robots. This control ensures the efficiency of the robot control system under the large delays incurred in transmitting control signals from the ground control center to the local control system of the space robot. The conditions for control stability and high transparency are found.
Non-destructive inspection in industrial equipment using robotic mobile manipulation
NASA Astrophysics Data System (ADS)
Maurtua, Iñaki; Susperregi, Loreto; Ansuategui, Ander; Fernández, Ane; Ibarguren, Aitor; Molina, Jorge; Tubio, Carlos; Villasante, Cristobal; Felsch, Torsten; Pérez, Carmen; Rodriguez, Jorge R.; Ghrissi, Meftah
2016-05-01
The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (using climbing robots). The industrial objective has been to provide a means of measuring several physical parameters at multiple points by autonomous robots able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extension (e.g. a 50 MW thermal solar plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes, and a 140 m high tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper, two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, localization and planning algorithms to manage navigation over huge extensions, and (2) non-destructive inspection operations: thermography-based detection algorithms that provide automatic inspection abilities to the robots.
Schwein, Adeline; Kramer, Benjamin; Chinnadurai, Ponraj; Virmani, Neha; Walker, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean
2018-04-01
Combining three-dimensional (3D) catheter control with electromagnetic (EM) tracking-based navigation significantly reduced fluoroscopy time and improved robotic catheter movement quality in a previous in vitro pilot study. The aim of this study was to expound on previous results and to expand the value of EM tracking with a novel feature, assisted navigation, allowing automatic catheter orientation and semiautomatic vessel cannulation. Eighteen users navigated a robotic catheter in an aortic aneurysm phantom using an EM guidewire and a modified 9F robotic catheter with EM sensors at the tip of both leader and sheath. All users cannulated two targets, the left renal artery and the posterior gate, using four visualization modes: (1) standard fluoroscopy (control); (2) 2D biplane fluoroscopy showing real-time virtual catheter localization and orientation from EM tracking; (3) 2D biplane fluoroscopy with novel EM-assisted navigation allowing the user to define the target vessel, in which the robotic catheter orients itself automatically toward the target so that the user only needs to advance the guidewire along this predefined optimized path to catheterize the vessel, and, while the catheter is advanced over the wire, the assisted navigation automatically modifies catheter bending and rotation to ensure smooth progression and avoid loss of wire access; and (4) virtual 3D representation of the phantom showing real-time virtual catheter localization and orientation. Standard fluoroscopy was always available; cannulation and fluoroscopy times were noted for every mode and target cannulation. Quality of catheter movement was assessed by measuring the number of submovements of the catheter using the 3D coordinates of the EM sensors. A t-test was used to compare the standard fluoroscopy mode against the EM tracking modes. EM tracking significantly reduced the mean fluoroscopy time (P < .001) and the number of submovements (P < .02) for both cannulation tasks.
For the posterior gate, mean cannulation time was also significantly reduced when using EM tracking (P < .001). The use of the novel EM-assisted navigation feature (mode 3) further reduced cannulation time for the posterior gate (P = .002) and improved quality of catheter movement for the left renal artery cannulation (P = .021). These results confirmed the findings of a prior study that highlighted the value of combining 3D robotic catheter control and 3D navigation to improve the safety and efficiency of endovascular procedures. The novel EM-assisted navigation feature augments the robotic master/slave concept with automated catheter orientation toward the target and shows promising results in reducing procedure time and improving catheter motion quality. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Bilateral Impedance Control For Telemanipulators
NASA Technical Reports Server (NTRS)
Moore, Christopher L.
1993-01-01
Telemanipulator system includes master robot manipulated by human operator, and slave robot performing tasks at remote location. Two robots electronically coupled so slave robot moves in response to commands from master robot. Teleoperation greatly enhanced if forces acting on slave robot fed back to operator, giving operator feeling he or she manipulates remote environment directly. Main advantage of bilateral impedance control: enables arbitrary specification of desired performance characteristics for telemanipulator system. Relationship between force and position modulated at both ends of system to suit requirements of task.
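The force/position relationship the abstract describes can be sketched with a one-axis target-impedance law; the mass, damping, and stiffness values below are illustrative assumptions, not parameters from the article:

```python
def impedance_force(m, b, k, x, x_dot, x_ddot):
    """Force consistent with a target impedance: F = m*x'' + b*x' + k*x.

    In bilateral impedance control, modulating (m, b, k) at both the master
    and slave ends shapes how the telemanipulator 'feels' to the operator
    for a given task.
    """
    return m * x_ddot + b * x_dot + k * x
```

Tuning these three parameters is what allows "arbitrary specification of desired performance characteristics": a stiff, heavily damped setting suits precise contact tasks, while a light, compliant setting suits free motion.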
[Principles of MR-guided interventions, surgery, navigation, and robotics].
Melzer, A
2010-08-01
The application of magnetic resonance imaging (MRI) as an imaging technique in interventional and surgical techniques provides a new dimension of soft tissue-oriented precise procedures without exposure to ionizing radiation and nephrotoxic allergenic, iodine-containing contrast agents. The technical capabilities of MRI in combination with interventional devices and systems, navigation, and robotics are discussed.
Development of a force-reflecting robotic platform for cardiac catheter navigation.
Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung
2010-11-01
Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration having a 3-degree-of-freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback, using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions (forward and backward movements, rolling, and catheter tip bending) are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements the master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implemented the motion control function and will undergo safety and performance evaluation by means of animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Motorization of a surgical microscope for intra-operative navigation and intuitive control.
Finke, M; Schweikard, A
2010-09-01
During surgical procedures, various medical systems, e.g. microscope or C-arm, are used. Their precise and repeatable manual positioning can be very cumbersome and interrupts the surgeon's work flow. Robotized systems can assist the surgeon, but they require suitable kinematics and control. However, positioning must be fast, flexible and intuitive. We describe a fully motorized surgical microscope. Hardware components as well as implemented applications are specified. The kinematic equations are described and a novel control concept is proposed. Our microscope combines fast manual handling with accurate, automatic positioning. Intuitive control is provided by a small remote control mounted to one of the surgical instruments. Positioning accuracy and repeatability are < 1 mm, and vibrations caused by automatic movements fade away in about 1 s. The robotic system assists the surgeon, so that he can position the microscope precisely and repeatedly without interrupting the clinical workflow. The combination of manual and automatic control guarantees fast and flexible positioning during surgical procedures. Copyright 2010 John Wiley & Sons, Ltd.
Schwein, Adeline; Kramer, Ben; Chinnadurai, Ponraj; Walker, Sean; O'Malley, Marcia; Lumsden, Alan; Bismuth, Jean
2017-02-01
One limitation of the use of robotic catheters is the lack of real-time three-dimensional (3D) localization and position updating: they are still navigated based on two-dimensional (2D) X-ray fluoroscopic projection images. Our goal was to evaluate whether incorporating an electromagnetic (EM) sensor on a robotic catheter tip could improve endovascular navigation. Six users were tasked to navigate using a robotic catheter with incorporated EM sensors in an aortic aneurysm phantom. All users cannulated two anatomic targets (left renal artery and posterior "gate") using four visualization modes: (1) standard fluoroscopy mode (control), (2) 2D fluoroscopy mode showing real-time virtual catheter orientation from EM tracking, (3) 3D model of the phantom with anteroposterior and endoluminal view, and (4) 3D model with anteroposterior and lateral view. Standard X-ray fluoroscopy was always available. Cannulation and fluoroscopy times were noted for every mode. 3D positions of the EM tip sensor were recorded at 4 Hz to establish kinematic metrics. The EM sensor-incorporated catheter navigated as expected according to all users. The success rate for cannulation was 100%. For the posterior gate target, mean cannulation times in minutes:seconds were 8:12, 4:19, 4:29, and 3:09, respectively, for modes 1, 2, 3 and 4 (P = .013), and mean fluoroscopy times were 274, 20, 29, and 2 seconds, respectively (P = .001). 3D path lengths, spectral arc length, root mean dimensionless jerk, and number of submovements were significantly improved when EM tracking was used (P < .05), showing higher quality of catheter movement with EM navigation. The EM tracked robotic catheter allowed better real-time 3D orientation, facilitating navigation, with a reduction in cannulation and fluoroscopy times and improvement of motion consistency and efficiency. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Supervisory autonomous local-remote control system design: Near-term and far-term applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul
1993-01-01
The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.
Evaluation of the ROSA™ Spine robot for minimally invasive surgical procedures.
Lefranc, M; Peltier, J
2016-10-01
The ROSA® robot (Medtech, Montpellier, France) is a new medical device designed to assist the surgeon during minimally invasive spine procedures. The device comprises a patient-side cart (bearing the robotic arm and a workstation) and an optical navigation camera. The ROSA® Spine robot enables accurate pedicle screw placement. Thanks to its robotic arm and navigation abilities, the robot monitors movements of the spine throughout the entire surgical procedure and thus enables accurate, safe arthrodesis for the treatment of degenerative lumbar disc diseases, exactly as planned by the surgeon. Development perspectives include (i) assistance at all levels of the spine, (ii) improved planning abilities (virtualization of the entire surgical procedure) and (iii) use in almost any percutaneous spinal procedure not limited to screw positioning, such as percutaneous endoscopic lumbar discectomy, intracorporeal implant positioning, over-the-top laminectomy or radiofrequency ablation.
Trajectory and navigation system design for robotic and piloted missions to Mars
NASA Technical Reports Server (NTRS)
Thurman, S. W.; Matousek, S. E.
1991-01-01
Future Mars exploration missions, both robotic and piloted, may utilize Earth to Mars transfer trajectories that are significantly different from one another, depending upon the type of mission being flown and the time period during which the flight takes place. The use of new or emerging technologies for future missions to Mars, such as aerobraking and nuclear rocket propulsion, may yield navigation requirements that are much more stringent than those of past robotic missions, and are very difficult to meet for some trajectories. This article explores the interdependencies between the properties of direct Earth to Mars trajectories and the Mars approach navigation accuracy that can be achieved using different radio metric data types, such as ranging measurements between an approaching spacecraft and Mars orbiting relay satellites, or Earth based measurements such as coherent Doppler and very long baseline interferometry. The trajectory characteristics affecting navigation performance are identified, and the variations in accuracy that might be experienced over the range of different Mars approach trajectories are discussed. The results predict that three sigma periapsis altitude navigation uncertainties of 2 to 10 km can be achieved when a Mars orbiting satellite is used as a navigation aid.
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles the objects to understand them, including their physical characteristics.
3min. poster presentations of B01
NASA Astrophysics Data System (ADS)
Foing, Bernard H.
We give a report on recommendations from ILEWG International conferences held at Cape Canaveral in 2008 (ICEUM10), and in Beijing in May 2010 with IAF (GLUC-ICEUM11). We discuss the different rationales for Moon exploration. Priorities for scientific investigations include: clues on the formation and evolution of rocky planets, accretion and bombardment in the inner solar system, comparative planetology processes (tectonic, volcanic, impact cratering, volatile delivery), historical records, astrobiology, survival of organics, and past, present and future life. The ILEWG technology task group set priorities for the advancement of instrumentation: remote sensing miniaturised instruments; surface geophysical and geochemistry packages; instrument deployment and robotic arms, nano-rovers, sampling, drilling; sample finders and collectors; regional mobility rovers; autonomy and navigation; artificially intelligent robots and complex systems. The ILEWG ExogeoLab pilot project was developed as support for instruments, landers, rovers, and preparation for a cooperative robotic village. The ILEWG lunar base task group looked at minimal design concepts and technologies in robotic and human exploration, with tele-control, telepresence, virtual reality, and man-machine interfaces and performances. The ILEWG ExoHab pilot project has been started with support from agencies and partners. We discuss ILEWG terrestrial Moon-Mars campaigns for validation of technologies, research and human operations. We indicate how Moon-Mars exploration can inspire solutions to global Earth sustained development: in-situ utilisation of resources; establishment of permanent robotic infrastructures; environmental protection aspects; life sciences laboratories; and support to human exploration. Co-authors: ILEWG task groups on Science, Technology, Robotic Village, Lunar Bases, Commercial and Societal Aspects, Roadmap Synergies with Other Programmes, Public Engagement and Outreach, and Young Lunar Explorers.
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.
2006-01-01
The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an air-bearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supply views of EVA operations to IVA and/or ground crews monitoring the EVA, and carry out independent visual inspections of areas of interest around the spacecraft.
To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest, and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.
Determining Locations by Use of Networks of Passive Beacons
NASA Technical Reports Server (NTRS)
Okino, Clayton; Gray, Andrew; Jennings, Esther
2009-01-01
Networks of passive radio beacons spanning moderate-sized terrain areas have been proposed to aid navigation of small robotic aircraft that would be used to explore Saturn's moon Titan. Such networks could also be used on Earth to aid navigation of robotic aircraft, land vehicles, or vessels engaged in exploration or reconnaissance in situations or locations (e.g., underwater locations) in which Global Positioning System (GPS) signals are unreliable or unavailable. Prior to use, it would be necessary to pre-position the beacons at known locations that would be determined by use of one or more precise independent global navigation system(s). Thereafter, while navigating over the area spanned by a given network of passive beacons, an exploratory robot would use the beacons to determine its position precisely relative to the known beacon positions (see figure). If it were necessary for the robot to explore multiple, separated terrain areas spanned by different networks of beacons, the robot could use a long-haul, relatively coarse global navigation system for the lower-precision position determination needed during transit between such areas. The proposed method of precise determination of position of an exploratory robot relative to the positions of passive radio beacons is based partly on the principles of radar and partly on the principles of radio-frequency identification (RFID) tags. The robot would transmit radar-like signals that would be modified and reflected by the passive beacons. The distance to each beacon would be determined from the round-trip propagation time and/or round-trip phase shift of the signal returning from that beacon. Signals returned from different beacons could be distinguished by means of their RFID characteristics. Alternatively or in addition, the antenna of each beacon could be designed to radiate in a unique pattern that could be identified by the navigation system.
Also, alternatively or in addition, sets of identical beacons could be deployed in unique configurations such that the navigation system could identify their unique combinations of radio-frequency reflections as an alternative to leveraging the uniqueness of the RFID tags. The degree of dimensional accuracy would depend not only on the locations of the beacons but also on the number of beacon signals received, the number of samples of each signal, the motion of the robot, and the time intervals between samples. At one extreme, a single sample of the return signal from a single beacon could be used to determine the distance from that beacon and hence to determine that the robot is located somewhere on a sphere, the radius of which equals that distance and the center of which lies at the beacon. In a less extreme example, the three-dimensional position of the robot could be determined with fair precision from a single sample of the signal from each of three beacons. In intermediate cases, position estimates could be refined and/or position ambiguities could be resolved by use of supplementary readings of an altimeter and other instruments aboard the robot.
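The ranging and position-fixing steps described above can be sketched as follows: one-way distance from round-trip propagation time, then a standard linearized least-squares trilateration from three or more beacons. This is a 2D sketch under illustrative assumptions; the article's system is not limited to this formulation.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def range_from_round_trip(dt_seconds):
    # One-way distance from a round-trip radar-like propagation time.
    return C * dt_seconds / 2.0


def trilaterate_2d(beacons, ranges):
    """Estimate (x, y) from >= 3 beacon positions and measured ranges.

    Subtracting the first beacon's range equation from the others removes
    the quadratic terms, leaving a linear least-squares problem.
    """
    (x0, y0), r0 = beacons[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return solution
```

With a single beacon this reduces to the "sphere of possible positions" case in the text; each additional beacon range constrains the fix further, and extra beacons over-determine the system, letting least squares average out measurement noise.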
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-navigation concepts, coupled with the latest achievements in virtual and augmented reality, will enhance the overall experience, leading to new developments and innovations. This leads to a multiprocessing setting that relies on advanced technologies and distributed applications for remote-ship scenarios and the automation of ship operations.
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Sanderson, A. C.
1994-01-01
Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems; and a performance analysis method, based on scheduling theory, which can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
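The scheduling-theory analysis this abstract refers to can be illustrated with the classic Liu-Layland rate-monotonic utilization bound, a simple sufficient test for periodic hard-real-time task sets (the paper's method is more general; this is only a representative example):

```python
def rm_schedulable(tasks):
    """Sufficient rate-monotonic schedulability test.

    tasks: list of (compute_time, period) pairs for periodic tasks with
    deadlines equal to periods. The set is guaranteed schedulable under
    rate-monotonic priorities if total utilization <= n*(2^(1/n) - 1).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2.0 ** (1.0 / n) - 1.0)
```

A test like this answers the hardware-sizing question the paper raises: given the control functions allocated to a processor and their communication-delay-driven periods, does the processor meet all hard-real-time response specifications? Note the test is sufficient but not necessary; a set that fails the bound may still be schedulable under exact response-time analysis.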
Indoor Navigation using Direction Sensor and Beacons
NASA Technical Reports Server (NTRS)
Shields, Joel; Jeganathan, Muthu
2004-01-01
A system for indoor navigation of a mobile robot includes (1) modulated infrared beacons at known positions on the walls and ceiling of a room and (2) a cameralike sensor, comprising a wide-angle lens with a position-sensitive photodetector at the focal plane, mounted in a known position and orientation on the robot. The system also includes a computer running special-purpose software that processes the sensor readings to obtain the position and orientation of the robot in all six degrees of freedom in a coordinate system embedded in the room.
Prototype crawling robotics system for remote visual inspection of high-mast light poles.
DOT National Transportation Integrated Search
1997-01-01
This report presents the results of a project to develop a crawling robotics system for the remote visual inspection of high-mast light poles in Virginia. The first priority of this study was to develop a simple robotics application that would reduce...
Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-02-21
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise on the real world experiments. This laboratory allows the user to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented.
Remote magnetic navigation for mapping and ablating right ventricular outflow tract tachycardia.
Thornton, Andrew S; Jordaens, Luc J
2006-06-01
Navigation, mapping, and ablation in the right ventricular outflow tract (RVOT) can be difficult. Catheter navigation using external magnetic fields may allow more accurate mapping and ablation. The purpose of this study was to assess the feasibility of RVOT tachycardia ablation using remote magnetic navigation. Mapping and ablation were performed in eight patients with outflow tract ventricular arrhythmias. Tachycardia mapping was undertaken with a 64-polar basket catheter, followed by remote activation and pace-mapping using a magnetically enabled catheter. The area of interest was localized on the basket catheter in seven patients in whom an RVOT arrhythmia was identified. Remote navigation of the magnetic catheter to this area was followed by pace-mapping. Ablation was performed at the site of perfect pace-mapping, with earliest activation if possible. Acute success was achieved in all patients (median four applications). Median procedural time was 144 minutes, with 13.4 minutes of patient fluoroscopy time and 3.8 minutes of physician fluoroscopy time. No complications occurred. One recurrence occurred during follow-up (mean 366 days). RVOT tachycardias can be mapped and ablated using remote magnetic navigation, initially guided by a basket catheter. Precise activation and pace-mapping are possible. Remote magnetic navigation permitted low fluoroscopy exposure for the physician. Long-term results are promising.
Navigation system for autonomous mapper robots
NASA Astrophysics Data System (ADS)
Halbach, Marc; Baudoin, Yvan
1993-05-01
This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is treated. The ultrasonic sensors are controlled by the path planning module. The third part concerns path planning, and a simulation of a wheeled robot is also presented.
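The obstacle/free-space probabilistic grid can be sketched with the standard additive log-odds update for a single cell; the inverse-sensor probabilities below are illustrative assumptions, not the paper's sensor model:

```python
import math


def logit(p):
    # Log-odds of a probability p in (0, 1).
    return math.log(p / (1.0 - p))


def update_cell(log_odds, p_occupied_given_reading):
    # Fuse one ultrasonic reading into a cell: Bayesian updates become
    # simple addition in log-odds form.
    return log_odds + logit(p_occupied_given_reading)


def occupancy(log_odds):
    # Convert accumulated log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))
```

Repeated "hit" readings drive a cell's belief toward occupied, "miss" readings toward free, and the additive form makes fusing readings taken at different times (the temporal fusion mentioned above) cheap per cell.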
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach to environment recognition for robot navigation is proposed. This work includes a template-matched filtering technique to detect obstacles and feasible paths using a single camera sensing a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path among multiple possible ways. To generate an efficient and safe path for mobile robot navigation, the proposed method employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
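The pseudo-bacterial algorithm evolves the potential field functions themselves; the underlying attractive/repulsive step it tunes can be sketched as follows. The gains `k_att`, `k_rep`, the influence distance `d0`, and the step size are illustrative placeholders, not the evolved values from the paper.

```python
import math

def pf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One gradient step on an artificial potential field.

    The attractive force pulls toward the goal; each obstacle closer
    than the influence distance d0 adds a repulsive force. Returns the
    next robot position after a fixed-length step along the net force.
    """
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    n = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)
```

Iterating this step from start to goal traces a candidate path; the evolutionary layer then scores and refines the field parameters.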
Using sensor habituation in mobile robots to reduce oscillatory movements in narrow corridors.
Chang, Carolina
2005-11-01
Habituation is a form of nonassociative learning observed in a variety of species of animals. Arguably, it is the simplest form of learning. Nonetheless, the ability to habituate to certain stimuli implies plastic neural systems and adaptive behaviors. This paper describes how computational models of habituation can be applied to real robots. In particular, we discuss the problem of the oscillatory movements observed when a Khepera robot navigates through narrow hallways using a biologically inspired neurocontroller. Results show that habituation to the proximity of the walls can lead to smoother navigation. Habituation to sensory stimulation to the sides of the robot does not interfere with the robot's ability to turn at dead ends and to avoid obstacles outside the hallway. This paper shows that simple biological mechanisms of learning can be adapted to achieve better performance in real mobile robots.
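The habituation dynamics such neurocontrollers build on (Stanley's classic model) fit in a few lines: a sustained stimulus drives a synaptic efficacy term down, and removing the stimulus lets it recover. The parameters below are illustrative, not those of the Khepera controller.

```python
def habituate(stimuli, y0=1.0, alpha=0.7, tau=5.0, dt=1.0):
    """Discrete-time habituation (after Stanley's model; parameters illustrative).

    y is the efficacy gating a sensor's influence on the controller:
    a constant stimulus S drives y down (habituation); when S drops,
    y recovers toward its resting value y0 (dishabituation).
    """
    y = y0
    trace = []
    for s in stimuli:
        y += (dt / tau) * (alpha * (y0 - y) - s)
        trace.append(y)
    return trace
```

Gating the lateral proximity sensors through such a trace is what lets the robot stop overreacting to constantly near corridor walls while still responding to novel obstacles.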
Adaptive Control Of Remote Manipulator
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
Robotic control system causes remote manipulator to follow closely reference trajectory in Cartesian reference frame in work space, without resort to computationally intensive mathematical model of robot dynamics and without knowledge of robot and load parameters. System, derived from linear multivariable theory, uses relatively simple feedforward and feedback controllers with model-reference adaptive control.
Virtual Reality Robotic Operation Simulations Using MEMICA Haptic System
NASA Technical Reports Server (NTRS)
Bar-Cohen, Y.; Mavroidis, C.; Bouzit, M.; Dolgin, B.; Harm, D. L.; Kopchok, G. E.; White, R.
2000-01-01
There is an increasing realization that some tasks can be performed significantly better by humans than robots but, due to associated hazards, distance, etc., only a robot can be employed. Telemedicine is one area where remotely controlled robots can have a major impact by providing urgent care at remote sites. In recent years, remotely controlled robotics has been greatly advanced. The robotic astronaut, "Robonaut," at NASA Johnson Space Center is one such example. Unfortunately, due to the unavailability of force and tactile feedback capability, the operator must determine the required action using only visual feedback from the remote site, which limits the tasks that Robonaut can perform. There is a great need for dexterous, fast, accurate teleoperated robots that give the operator the ability to "feel" the environment at the robot's site. Recently, we conceived a haptic mechanism called MEMICA (Remote MEchanical MIrroring using Controlled stiffness and Actuators) that can enable the design of high-dexterity, rapid-response, and large-workspace systems. Our team is developing novel MEMICA gloves and virtual reality models to allow the simulation of telesurgery and other applications. The MEMICA gloves are designed to have high dexterity, rapid response, and a large workspace, and to intuitively mirror the conditions at a virtual site where a robot simulates the presence of the human operator. The key components of MEMICA are miniature electrically controlled stiffness (ECS) elements and Electrically Controlled Force and Stiffness (ECFS) actuators that are based on the use of Electro-Rheological Fluids (ERF). In this paper the design of the MEMICA system and initial experimental results are presented.
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
Toward perception-based navigation using EgoSphere
NASA Astrophysics Data System (ADS)
Kawamura, Kazuhiko; Peters, R. Alan; Wilkes, Don M.; Koku, Ahmet B.; Sekman, Ali
2002-02-01
A method for perception-based egocentric navigation of mobile robots is described. Each robot has a local short-term memory structure called the Sensory EgoSphere (SES), which is indexed by azimuth, elevation, and time. Directional sensory processing modules write information on the SES at the location corresponding to the source direction. Each robot has a partial map of its operational area that it has received a priori. The map is populated with landmarks and is not necessarily metrically accurate. Each robot is given a goal location and a route plan. The route plan is a set of via-points that are not used directly. Instead, a robot uses each point to construct a Landmark EgoSphere (LES), a circular projection of the landmarks from the map onto an EgoSphere centered at the via-point. Under normal circumstances, the LES will be mostly unaffected by slight variations in the via-point location. Thus, the route plan is transformed into a set of via-regions, each described by an LES. A robot navigates by comparing the next LES in its route plan to the current contents of its SES. It heads toward the indicated landmarks until its SES matches the LES sufficiently to indicate that the robot is near the suggested via-point. The proposed method is particularly useful for enabling the exchange of robust route information between robots under low-data-rate communications constraints. An example of such an exchange is given.
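In its simplest form, the SES-to-LES comparison reduces to checking how many expected landmarks appear at a compatible bearing. The scoring rule and angular tolerance below are illustrative assumptions, not the authors' matching metric.

```python
def les_match(ses, les, tol_deg=15.0):
    """Fraction of expected landmarks (LES) found at a compatible azimuth
    in the robot's current Sensory EgoSphere (SES).

    Both arguments map landmark id -> azimuth in degrees. Azimuth
    differences are wrapped into [-180, 180]. A score near 1 suggests
    the robot is near the via-point the LES was built for.
    """
    if not les:
        return 0.0
    hits = 0
    for lm, az in les.items():
        if lm in ses:
            diff = abs((ses[lm] - az + 180.0) % 360.0 - 180.0)
            if diff <= tol_deg:
                hits += 1
    return hits / len(les)
```

A robot would head toward the mismatched landmarks and declare arrival at the via-region once the score exceeds a threshold.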
Seelye, Adriana M; Wild, Katherine V; Larimer, Nicole; Maxwell, Shoshana; Kearns, Peter; Kaye, Jeffrey A
2012-12-01
Remote telepresence provided by tele-operated robotics represents a new means for obtaining important health information, improving older adults' social and daily functioning and providing peace of mind to family members and caregivers who live remotely. In this study we tested the feasibility of use and acceptance of a remotely controlled robot with video-communication capability in independently living, cognitively intact older adults. A mobile remotely controlled robot with video-communication ability was placed in the homes of eight seniors. The attitudes and preferences of these volunteers and those of family or friends who communicated with them remotely via the device were assessed through survey instruments. Overall experiences were consistently positive, with the exception of one user who subsequently progressed to a diagnosis of mild cognitive impairment. Responses from our participants indicated that in general they appreciated the potential of this technology to enhance their physical health and well-being, social connectedness, and ability to live independently at home. Remote users, who were friends or adult children of the participants, were more likely to test the mobility features and had several suggestions for additional useful applications. Results from the present study showed that a small sample of independently living, cognitively intact older adults and their remote collaterals responded positively to a remote controlled robot with video-communication capabilities. Research is needed to further explore the feasibility and acceptance of this type of technology with a variety of patients and their care contacts.
A novel interface for the telementoring of robotic surgery.
Shin, Daniel H; Dalag, Leonard; Azhar, Raed A; Santomauro, Michael; Satkunasivam, Raj; Metcalfe, Charles; Dunn, Matthew; Berger, Andre; Djaladat, Hooman; Nguyen, Mike; Desai, Mihir M; Aron, Monish; Gill, Inderbir S; Hung, Andrew J
2015-08-01
To prospectively evaluate the feasibility and safety of a novel, second-generation telementoring interface (Connect(™) ; Intuitive Surgical Inc., Sunnyvale, CA, USA) for the da Vinci robot. Robotic surgery trainees were mentored during portions of robot-assisted prostatectomy and renal surgery cases. Cases were assigned as traditional in-room mentoring or remote mentoring using Connect. While viewing two-dimensional, real-time video of the surgical field, remote mentors delivered verbal and visual counsel, using two-way audio and telestration (drawing) capabilities. Perioperative and technical data were recorded. Trainee robotic performance was rated using a validated assessment tool by both mentors and trainees. The mentoring interface was rated using a multi-factorial Likert-based survey. The Mann-Whitney and t-tests were used to determine statistical differences. We enrolled 55 mentored surgical cases (29 in-room, 26 remote). Perioperative variables of operative time and blood loss were similar between in-room and remote mentored cases. Robotic skills assessment showed no significant difference (P > 0.05). Mentors preferred remote over in-room telestration (P = 0.05); otherwise no significant difference existed in evaluation of the interfaces. Remote cases using wired (vs wireless) connections had lower latency and better data transfer (P = 0.005). Three of 18 (17%) wireless sessions were disrupted; one was converted to wired, one continued after restarting Connect, and the third was aborted. A bipolar injury to the colon occurred during one (3%) in-room mentored case; no intraoperative injuries were reported during remote sessions. In a tightly controlled environment, the Connect interface allows trainee robotic surgeons to be telementored in a safe and effective manner while performing basic surgical techniques. Significant steps remain prior to widespread use of this technology. 
NASA Technical Reports Server (NTRS)
Meintel, A. J., Jr.; Will, R. W.
1985-01-01
This presentation consists of four sections. The first section is a brief introduction to the NASA Space Program. The second portion summarizes the results of a congressionally mandated study of automation and robotics for the space station. The third portion presents a number of concepts for space teleoperator systems. The remainder of the presentation describes Langley Research Center's teleoperator/robotic research to support remote space operations.
Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.
Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun
2018-01-01
In view of the high risk and high accuracy requirements of cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationships between the subsystems are established based on the quaternion and iterative closest point registration algorithms. A hand-eye coordination model based on optical navigation is established to control the end effector of the robot, moving it to the target position along the planned path. A closed-loop "kinematics + optics" hybrid motion control method is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested in model experiments, and the feasibility of the closed-loop control method was verified by comparing positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.
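Registration of this kind rests on a closed-form rigid alignment of matched point pairs, the inner step of an ICP iteration. The paper uses a 3D quaternion formulation; the dependency-free planar version below is only an analogous illustration of the same least-squares alignment.

```python
import math

def rigid_register_2d(P, Q):
    """Closed-form 2D rigid alignment of matched point pairs.

    Returns a rotation angle theta and translation (tx, ty) such that
    rotating P by theta and then translating best matches Q in the
    least-squares sense. This is the core of one ICP iteration; the
    3D quaternion solution used in surgical navigation is analogous.
    """
    n = len(P)
    cpx = sum(x for x, _ in P) / n
    cpy = sum(y for _, y in P) / n
    cqx = sum(x for x, _ in Q) / n
    cqy = sum(y for _, y in Q) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cpx, py - cpy          # P centered on its centroid
        bx, by = qx - cqx, qy - cqy          # Q centered on its centroid
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    tx = cqx - (cpx * math.cos(theta) - cpy * math.sin(theta))
    ty = cqy - (cpx * math.sin(theta) + cpy * math.cos(theta))
    return theta, (tx, ty)
```

In a full ICP loop, correspondences are re-estimated by nearest neighbor and this alignment is repeated until convergence.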
A positional estimation technique for an autonomous land vehicle in an unstructured environment
NASA Technical Reports Server (NTRS)
Talluri, Raj; Aggarwal, J. K.
1990-01-01
This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits
NASA Astrophysics Data System (ADS)
Snider, Ross K.; Arathorn, David W.
2006-05-01
A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.
Robotic Astrobiology: Searching for Life with Rovers
NASA Astrophysics Data System (ADS)
Cabrol, N. A.; Wettergreen, D. S.; Team, L.
2006-05-01
The Life In The Atacama (LITA) project has developed and field-tested a long-range, solar-powered, automated rover platform (Zoe) and a science payload assembled to search for microbial life in the Atacama desert. Life is hardly detectable over most of the extent of the driest desert on Earth. Its geological, climatic, and biological evolution provides a unique training ground for designing and testing exploration strategies and life detection methods for the robotic search for life on Mars. LITA opens the path to a new generation of rover missions that will transition from the current study of habitability (MER) to the upcoming search for, and study of, habitats and life on Mars. Zoe's science payload reflects this transition by combining complementary elements, some directed towards the remote sensing of the environment (geology, morphology, mineralogy, weather/climate) for the detection of conditions favorable to microbial habitats and oases along survey traverses, others directed toward the in situ detection of life's signatures (biological and physical, such as biological constructs and patterns). New exploration strategies specifically adapted to the search for microbial life were designed and successfully tested in the Atacama between 2003 and 2005. They required the development and field implementation of new technological capabilities, including navigation beyond the horizon, obstacle avoidance, and "science-on-the-fly" (automated detection of targets of science value), as well as new rover planning tools in the remote science operation center.
Wireless Cortical Brain-Machine Interface for Whole-Body Navigation in Primates
NASA Astrophysics Data System (ADS)
Rajangam, Sankaranarayani; Tseng, Po-He; Yin, Allen; Lehew, Gary; Schwarz, David; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.
2016-03-01
Several groups have developed brain-machine-interfaces (BMIs) that allow primates to use cortical activity to control artificial limbs. Yet, it remains unknown whether cortical ensembles could represent the kinematics of whole-body navigation and be used to operate a BMI that moves a wheelchair continuously in space. Here we show that rhesus monkeys can learn to navigate a robotic wheelchair, using their cortical activity as the main control signal. Two monkeys were chronically implanted with multichannel microelectrode arrays that allowed wireless recordings from ensembles of premotor and sensorimotor cortical neurons. Initially, while monkeys remained seated in the robotic wheelchair, passive navigation was employed to train a linear decoder to extract 2D wheelchair kinematics from cortical activity. Next, monkeys employed the wireless BMI to translate their cortical activity into the robotic wheelchair’s translational and rotational velocities. Over time, monkeys improved their ability to navigate the wheelchair toward the location of a grape reward. The navigation was enacted by populations of cortical neurons tuned to whole-body displacement. During practice with the apparatus, we also noticed the presence of a cortical representation of the distance to reward location. These results demonstrate that intracranial BMIs could restore whole-body mobility to severely paralyzed patients in the future.
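A linear decoder of the kind trained during the passive-navigation phase maps a firing-rate vector to the two wheelchair kinematic outputs through a weight matrix plus bias. The sketch below fits such a decoder by least squares via plain gradient descent so it stays dependency-free; the paper's exact fitting procedure is not reproduced, and all parameters are illustrative.

```python
def fit_linear_decoder(rates, kin, lr=0.05, epochs=2000):
    """Fit a linear decoder mapping firing-rate vectors to 2D kinematics
    (translational and rotational velocity) by least squares, using
    cyclic stochastic gradient descent. Returns W as a list of rows,
    one per output, each holding the unit weights followed by a bias.
    """
    n_in, n_out = len(rates[0]), len(kin[0])
    W = [[0.0] * (n_in + 1) for _ in range(n_out)]
    for _ in range(epochs):
        for x, y in zip(rates, kin):
            xb = list(x) + [1.0]                     # append bias input
            for o in range(n_out):
                pred = sum(w * v for w, v in zip(W[o], xb))
                err = pred - y[o]
                for j in range(n_in + 1):
                    W[o][j] -= lr * err * xb[j]
    return W

def decode(W, x):
    """Apply the decoder to one firing-rate vector."""
    xb = list(x) + [1.0]
    return [sum(w * v for w, v in zip(row, xb)) for row in W]
```

In the BMI setting, `decode` would run on each new bin of wireless recordings to produce the wheelchair's velocity command.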
Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems.
Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika
2017-06-01
This work proposes principled strategies for self-adaptations in EEG-based brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidence, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to be able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs compared to the keyboard interface. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for individual users.
The proposed methods can be easily integrated in devising more advanced SC schemes and/or strategies for automatic BCI self-adaptations.
Design, development and evaluation of a compact telerobotic catheter navigation system.
Tavallaei, Mohammad Ali; Gelman, Daniel; Lavdas, Michael Konstantine; Skanes, Allan C; Jones, Douglas L; Bax, Jeffrey S; Drangova, Maria
2016-09-01
Remote catheter navigation systems protect interventionalists from scattered ionizing radiation. However, these systems typically require specialized catheters and extensive operator training. A new compact and sterilizable telerobotic system is described, which allows remote navigation of conventional tip-steerable catheters, with three degrees of freedom, using an interface that takes advantage of the interventionalist's existing dexterity skills. The performance of the system is evaluated ex vivo and in vivo for remote catheter navigation and ablation delivery. The system has absolute errors of 0.1 ± 0.1 mm and 7 ± 6° over 100 mm of axial motion and 360° of catheter rotation, respectively. In vivo experiments proved the safety of the proposed telerobotic system and demonstrated the feasibility of remote navigation and delivery of ablation. The proposed telerobotic system allows the interventionalist to use conventional steerable catheters; while maintaining a safe distance from the radiation source, he/she can remotely navigate the catheter and deliver ablation lesions. Copyright © 2015 John Wiley & Sons, Ltd.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task in a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object, after which the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.
33 CFR 117.42 - Remotely operated and automated drawbridges.
Code of Federal Regulations, 2010 CFR
2010-07-01
... SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS General Requirements § 117.42 Remotely operated and ... authorize a drawbridge to operate under an automated system or from a remote location. (b) If the request is ...
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
Reactive, Safe Navigation for Lunar and Planetary Robots
NASA Technical Reports Server (NTRS)
Utz, Hans; Ruland, Thomas
2008-01-01
When humans return to the moon, astronauts will be accompanied by robotic helpers. Enabling robots to safely operate near astronauts on the lunar surface has the potential to significantly improve the efficiency of crew surface operations. Safely operating robots in close proximity to astronauts on the lunar surface requires reactive obstacle avoidance capabilities not available on existing planetary robots. In this paper we present work on safe, reactive navigation using a stereo-based high-speed terrain analysis and obstacle avoidance system. Advances in the design of the algorithms allow it to run terrain analysis and obstacle avoidance at full frame rate (30 Hz) on off-the-shelf hardware. The results of this analysis are fed into a fast, reactive path selection module, enforcing the safety of the chosen actions. The key components of the system are discussed and test results are presented.
Autonomous navigation system and method
Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID
2009-09-08
A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
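One pass of the event-timing loop described above can be sketched in a few lines. The patent's exact update law and gains are not reproduced; the horizon growth with velocity, the proportional slow-down, and the speed factor below are illustrative paraphrases of the behavior it describes.

```python
def navigate_step(trans_v, rot_v, ranges,
                  horizon_gain=0.5, max_speed=1.0, speed_factor=0.8):
    """One iteration of an event-horizon navigation loop (illustrative).

    The event horizon grows with the current translational velocity.
    If any obstacle range falls inside it (an intrusion), both
    velocities are scaled down in proportion to the nearest obstacle's
    range; otherwise the robot cruises at a fixed ratio of max speed.
    Returns the adjusted (translational, rotational) velocities.
    """
    horizon = horizon_gain * (1.0 + trans_v)   # larger horizon when moving fast
    nearest = min(ranges)
    if nearest < horizon:                      # event-horizon intrusion
        scale = nearest / horizon              # 0 (touching) .. 1 (at edge)
        trans_v *= scale
        rot_v *= scale
    else:                                      # clear: set speed as a ratio
        trans_v = speed_factor * max_speed
    return trans_v, rot_v
```

Run inside the timing loop, this keeps the stopping envelope tied to how fast the robot is currently moving.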
Markovian robots: Minimal navigation strategies for active particles
NASA Astrophysics Data System (ADS)
Nava, Luis Gómez; Großmann, Robert; Peruani, Fernando
2018-04-01
We explore minimal navigation strategies for active particles in complex, dynamical, external fields, introducing a class of autonomous, self-propelled particles which we call Markovian robots (MR). These machines are equipped with a navigation control system (NCS) that triggers random changes in the direction of self-propulsion of the robots. The internal state of the NCS is described by a Boolean variable that adopts two values. The temporal dynamics of this Boolean variable is dictated by a closed Markov chain—ensuring the absence of fixed points in the dynamics—with transition rates that may depend exclusively on the instantaneous, local value of the external field. Importantly, the NCS does not store past measurements of this value in continuous, internal variables. We show that despite the strong constraints, it is possible to conceive closed Markov chain motifs that lead to nontrivial motility behaviors of the MR in one, two, and three dimensions. By analytically reducing the complexity of the NCS dynamics, we obtain an effective description of the long-time motility behavior of the MR that allows us to identify the minimum requirements in the design of NCS motifs and transition rates to perform complex navigation tasks such as adaptive gradient following, detection of minima or maxima, or selection of a desired value in a dynamical, external field. We put these ideas in practice by assembling a robot that operates by the proposed minimalistic NCS to evaluate the robustness of MR, providing a proof of concept that is possible to navigate through complex information landscapes with such a simple NCS whose internal state can be stored in one bit. These ideas may prove useful for the engineering of miniaturized robots.
Highly dexterous 2-module soft robot for intra-organ navigation in minimally invasive surgery.
Abidi, Haider; Gerboni, Giada; Brancadoro, Margherita; Fras, Jan; Diodato, Alessandro; Cianchetti, Matteo; Wurdemann, Helge; Althoefer, Kaspar; Menciassi, Arianna
2018-02-01
For some surgical interventions, like the Total Mesorectal Excision (TME), traditional laparoscopes lack the flexibility to safely maneuver and reach difficult surgical targets. This paper answers this need through designing, fabricating and modelling a highly dexterous 2-module soft robot for minimally invasive surgery (MIS). A soft robotic approach is proposed that uses flexible fluidic actuators (FFAs) allowing highly dexterous and inherently safe navigation. Dexterity is provided by an optimized design of fluid chambers within the robot modules. Safe physical interaction is ensured by fabricating the entire structure by soft and compliant elastomers, resulting in a squeezable 2-module robot. An inner free lumen/chamber along the central axis serves as a guide of flexible endoscopic tools. A constant curvature based inverse kinematics model is also proposed, providing insight into the robot capabilities. Experimental tests in a surgical scenario using a cadaver model are reported, demonstrating the robot advantages over standard systems in a realistic MIS environment. Simulations and experiments show the efficacy of the proposed soft robot. Copyright © 2017 John Wiley & Sons, Ltd.
A Sustained Proximity Network for Multi-Mission Lunar Exploration
NASA Technical Reports Server (NTRS)
Soloff, Jason A.; Noreen, Gary; Deutsch, Leslie; Israel, David
2005-01-01
Tbe Vision for Space Exploration calls for an aggressive sequence of robotic missions beginning in 2008 to prepare for a human return to the Moon by 2020, with the goal of establishing a sustained human presence beyond low Earth orbit. A key enabler of exploration is reliable, available communication and navigation capabilities to support both human and robotic missions. An adaptable, sustainable communication and navigation architecture has been developed by Goddard Space Flight Center and the Jet Propulsion Laboratory to support human and robotic lunar exploration through the next two decades. A key component of the architecture is scalable deployment, with the infrastructure evolving as needs emerge, allowing NASA and its partner agencies to deploy an interoperable communication and navigation system in an evolutionary way, enabling cost effective, highly adaptable systems throughout the lunar exploration program.
A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots.
Sherwin, Tyrone; Easte, Mikala; Chen, Andrew Tzer-Yeu; Wang, Kevin I-Kai; Dai, Wenbin
2018-02-14
Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.
A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots
Sherwin, Tyrone; Easte, Mikala; Wang, Kevin I-Kai; Dai, Wenbin
2018-01-01
Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system. PMID:29443906
Seelye, Adriana M.; Larimer, Nicole; Maxwell, Shoshana; Kearns, Peter; Kaye, Jeffrey A.
2012-01-01
Abstract Objective: Remote telepresence provided by tele-operated robotics represents a new means for obtaining important health information, improving older adults' social and daily functioning and providing peace of mind to family members and caregivers who live remotely. In this study we tested the feasibility of use and acceptance of a remotely controlled robot with video-communication capability in independently living, cognitively intact older adults. Materials and Methods: A mobile remotely controlled robot with video-communication ability was placed in the homes of eight seniors. The attitudes and preferences of these volunteers and those of family or friends who communicated with them remotely via the device were assessed through survey instruments. Results: Overall experiences were consistently positive, with the exception of one user who subsequently progressed to a diagnosis of mild cognitive impairment. Responses from our participants indicated that in general they appreciated the potential of this technology to enhance their physical health and well-being, social connectedness, and ability to live independently at home. Remote users, who were friends or adult children of the participants, were more likely to test the mobility features and had several suggestions for additional useful applications. Conclusions: Results from the present study showed that a small sample of independently living, cognitively intact older adults and their remote collaterals responded positively to a remote controlled robot with video-communication capabilities. Research is needed to further explore the feasibility and acceptance of this type of technology with a variety of patients and their care contacts. PMID:23082794
Behavioral Mapless Navigation Using Rings
NASA Technical Reports Server (NTRS)
Monroe, Randall P.; Miller, Samuel A.; Bradley, Arthur T.
2012-01-01
This paper presents work on the development and implementation of a novel approach to robotic navigation. In this system, map-building and localization for obstacle avoidance are discarded in favor of moment-by-moment behavioral processing of the sonar sensor data. To accomplish this, we developed a network of behaviors that communicate through the passing of rings, data structures that are similar in form to the sonar data itself and express the decisions of each behavior. Through the use of these rings, behaviors can moderate each other, conflicting impulses can be mediated, and designers can easily connect modules to create complex emergent navigational techniques. We discuss the development of a number of these modules and their successful use as a navigation system in the Trinity omnidirectional robot.
NASA Technical Reports Server (NTRS)
Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.; Wilson, E.
1993-01-01
Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modeling and control of extremely flexible space structures.
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver, left, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver, right, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
NASA Astrophysics Data System (ADS)
Kaya, N.; Iwashita, M.; Nakasuka, S.; Summerer, L.; Mankins, J.
2004-12-01
Construction technology of huge structures is essential for the future space development as well as the Solar Power Satellite (SPS). The SPS needs huge antennas to transmit the generated electric power toward the ground, while the huge antenna have many useful applications in space as well as on the ground, for example, telecommunication for cellular phones, radars for remote sensing, navigation and observation, and so on. A parabola antenna was mostly used for the space antenna. However, it is very difficult for the larger parabola antenna to keep accuracy of the reflectors and the beam control, because the surfaces of the reflectors are mechanically supported and controlled. The huge space antenna with flexible and ultra-light structures is essential and necessary for the future applications. An active phased array antenna is more suitable and promising for the huge flexible antenna than the parabola antenna. We are proposing to apply the Furoshiki satellite [1] with robots for construction of the huge structures. While a web is deployed using the Furoshiki satellite in the same size of the huge antenna, all of the antenna elements crawl on the web with their own legs toward their allocated locations. We are verifying the deployment concept of the Furoshiki satellite using a sounding rocket with robots crawling on the deployed web. The robots are internationally being developed by NASA, ESA and Kobe University. The paper describes the concept of the crawling robot developed by Kobe University as well as the plan of the rocket experiment.
Robots, systems, and methods for hazard evaluation and visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.
A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximatemore » the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.« less
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.
ERIC Educational Resources Information Center
Guo, Yi; Zhang, Shubo; Ritter, Arthur; Man, Hong
2014-01-01
Despite the increasing importance of robotics, there is a significant challenge involved in teaching this to undergraduate students in biomedical engineering (BME) and other related disciplines in which robotics techniques could be readily applied. This paper addresses this challenge through the development and pilot testing of a bio-microrobotics…
Utilization of robotic "remote presence" technology within North American intensive care units.
Reynolds, Eliza M; Grujovski, Andre; Wright, Tim; Foster, Michael; Reynolds, H Neal
2012-09-01
To describe remote presence robotic utilization and examine perceived physician impact upon care in the intensive care unit (ICU). Data were obtained from academic, university, community, and rural medical facilities in North America with remote presence robots used in ICUs. Objective utilization data were extracted from a continuous monitoring system. Physician data were obtained via an Internet-based survey. As of 2010, 56 remote presence robots were deployed in 25 North American ICUs. Of 10,872 robot activations recorded, 10,065 were evaluated. Three distinct utilization patterns were discovered. Combining all programs revealed a pattern that closely reflects diurnal ICU activity. The physician survey revealed staff are senior (75% >40 years old, 60% with >16 years of clinical practice), trained in and dedicated to critical care. Programs are mature (70% >3 years old) and operate in a decentralized system, originating from cities with >50,000 population and provided to cities >50,000 (80%). Of the robots, 46.6% are in academic facilities. Most physicians (80%) provide on-site and remote ICU care, with 60% and 73% providing routine or scheduled rounds, respectively. All respondents (100%) believed patient care and patient/family satisfaction were improved. Sixty-six percent perceived the technology was a "blessing," while 100% intend to continue using the technology. Remote presence robotic technology is deployed in ICUs with various patterns of utilization that, in toto, simulate normal ICU work flow. There is a high rate of deployment in academic ICUs, suggesting the intensivists shortage also affects large facilities. Physicians using the technology are generally senior, experienced, and dedicated to critical care and highly support the technology.
Robot-assisted home hazard assessment for fall prevention: a feasibility study.
Sadasivam, Rajani S; Luger, Tana M; Coley, Heather L; Taylor, Benjamin B; Padir, Taskin; Ritchie, Christine S; Houston, Thomas K
2014-01-01
We examined the feasibility of using a remotely manoeuverable robot to make home hazard assessments for fall prevention. We employed use-case simulations to compare robot assessments with in-person assessments. We screened the homes of nine elderly patients (aged 65 years or more) for fall risks using the HEROS screening assessment. We also assessed the participants' perspectives of the remotely-operated robot in a survey. The nine patients had a median Short Blessed Test score of 8 (interquartile range, IQR 2-20) and a median Life-Space Assessment score of 46 (IQR 27-75). Compared to the in-person assessment (mean = 4.2 hazards identified per participant), significantly more home hazards were perceived in the robot video assessment (mean = 7.0). Only two checklist items (adequate bedroom lighting and a clear path from bed to bathroom) had more than 60% agreement between in-person and robot video assessment. Participants were enthusiastic about the robot and did not think it violated their privacy. The study found little agreement between the in-person and robot video hazard assessments. However, it identified several research questions about how to best use remotely-operated robots.
Maravall, Darío; de Lope, Javier; Fuentes, Juan P
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
Maravall, Darío; de Lope, Javier; Fuentes, Juan P.
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
Primate-inspired vehicle navigation using optic flow and mental rotations
NASA Astrophysics Data System (ADS)
Arkin, Ronald C.; Dellaert, Frank; Srinivasan, Natesh; Kerwin, Ryan
2013-05-01
Robot navigation already has many relatively efficient solutions: reactive control, simultaneous localization and mapping (SLAM), Rapidly-Exploring Random Trees (RRTs), etc. But many primates possess an additional inherent spatial reasoning capability: mental rotation. Our research addresses the question of what role, if any, mental rotations can play in enhancing existing robot navigational capabilities. To answer this question we explore the use of optical flow as a basis for extracting abstract representations of the world, comparing these representations with a goal state of similar format and then iteratively providing a control signal to a robot to allow it to move in a direction consistent with achieving that goal state. We study a range of transformation methods to implement the mental rotation component of the architecture, including correlation and matching based on cognitive studies. We also include a discussion of how mental rotations may play a key role in understanding spatial advice giving, particularly from other members of the species, whether in map-based format, gestures, or other means of communication. Results to date are presented on our robotic platform.
Benefits of Using Remotely Operated Vehicles to Inspect USACE Navigation Structures
2007-03-01
ER D C/ CR R EL T R -0 7 -4 Benefits of Using Remotely Operated Vehicles to Inspect USACE Navigation Structures James H. Lever, Gary E...release; distribution is unlimited. ERDC/CRREL TR-07-4 March 2007 Benefits of Using Remotely Operated Vehicles to Inspect USACE Navigation...with inspections using divers or dewatering. In each case, benefits from reduced labor costs, shipping delays, and lost power production far exceed
NASA Technical Reports Server (NTRS)
Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph
2000-01-01
For many years, the robotic community sought to develop robots that can eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that there are some tasks that human can perform significantly better but, due to associated hazards, distance, physical limitations and other causes, only robot can be employed to perform these tasks. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they present great difficulties in emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy, cumbersome and usually they only allow limited operator workspace. In this paper a novel haptic interface is presented to enable human-operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites enabling control of robots as human-surrogates. This haptic interface is intended to provide human operators intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications are referred to the control of actual robots whereas virtual applications are referred to simulated operations. The developed haptic interface will be applicable to IVA operated robotic EVA tasks to enhance human performance, extend crew capability and assure crew safety. The electrically controlled stiffness is obtained using constrained ElectroRheological Fluids (ERF), which changes its viscosity under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user using this ERF device where a change in the system viscosity will occur proportionally to the force to be transmitted. 
In this paper, we will present the results of our modeling, simulation, and initial testing of such an electrorheological fluid (ERF) based haptic device.
Research state-of-the-art of mobile robots in China
NASA Astrophysics Data System (ADS)
Wu, Lin; Zhao, Jinglun; Zhang, Peng; Li, Shiqing
1991-03-01
Several newly developed mobile robots in china are described in the paper. It includes masterslave telerobot sixleged robot biped walking robot remote inspection robot crawler moving robot and autonomous mobi le vehicle . Some relevant technology are also described.
2018-02-12
usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the...assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference...in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4
Goal-oriented robot navigation learning using a multi-scale space representation.
Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A
2015-12-01
There has been extensive research in recent years on the multi-scale nature of hippocampal place cells and entorhinal grid cells encoding which led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the place cells multi-scale architecture. The model is evaluated in both simulation and physical robots. We find that larger scale and combined multi-scale representations favor goal-oriented navigation task learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tachi, Susumu; Kawakami, Naoki; Nii, Hideaki; Watanabe, Kouichi; Minamizawa, Kouta
TELEsarPHONE is a conceptual prototype of a mutual telexistence system, designed for face-to-face telecommunication via robots. Because of the development of telexistence technology, we can acquire a feeling that we are present in several actual remote places using remote robots as our surrogates and can work and act freely there. However, people in the place where someone telexists using a robot see only the robot, and they cannot feel the existence of the telexisting person. Mutual telexistence aims to solve this problem so that the existence of a telexisting person (visitor) is apparent to the people in the remote environment by providing mutual sensations of presence. On the basis of the concept of mutual telexistence, we have designed and developed a prototype of a telexistence master-slave system for remote communication by applying retroreflective projection technology. In the TELEsarPHONE system, the face and chest of the slave robot TELESAR II are covered by retroreflective material. To provide the feeling of existence, the real-time image of the visitor is projected onto the robot so that people can see the visitor in real time.
Tele-existence and/or cybernetic interface studies in Japan
NASA Technical Reports Server (NTRS)
Tachi, Susumu
1991-01-01
Tele-existence aims at a natural and efficient remote control of robots by providing the operator with a real time sensation of presence. It is an advaced type of teleoperation system which enables a human operator at the controls to perform remote manipulation tasks dexterously with the feeling that he or she exists in one of the remote anthropomorphic robots in the remote environment, e.g., in a hostile environment such as those of nuclear radiation, high temperature, and deep space. In order to study the use of the tele-existence system in the artificially constructed environment, the visual tele-existence simulator has been designed, a pseudo-real-time binocular solid model robot simulator has been made, and its feasibility has been experimentally evaluated. An anthropomorphic robot mechanism with an arm having seven degrees of freedom has been designed and developed as a slave robot for feasibility experiments of teleoperation using the tele-existence method. An impedance controlled active display mechanism and a head mounted display have also been designed and developed as the display subsystem for the master. The robot's structural dimensions are set very close to those of humans.
Volonté, Francesco; Pugin, François; Bucher, Pascal; Sugimoto, Maki; Ratib, Osman; Morel, Philippe
2011-07-01
New technologies can considerably improve preoperative planning, enhance the surgeon's skill and simplify the approach to complex procedures. Augmented reality techniques, robot-assisted operations and computer-assisted navigation tools will become increasingly important in surgery and in residents' education. We obtained 3D reconstructions from simple spiral computed tomography (CT) slices using OsiriX, an open-source processing software package dedicated to DICOM images. These images were then projected onto the patient's body with a beamer fixed to the operating table to enhance spatial perception during surgical intervention (augmented reality). Changing a window's depth level allowed the surgeon to navigate through the patient's anatomy, highlighting regions of interest and marked pathologies. We used image-overlay navigation for laparoscopic operations such as cholecystectomy, abdominal exploration, distal pancreas resection and robotic liver resection. Augmented reality techniques will transform the behaviour of surgeons, making surgical interventions easier, faster and probably safer. These new techniques will also renew methods of surgical teaching, facilitating the transmission of knowledge and skill to young surgeons.
Virtual modeling of robot-assisted manipulations in abdominal surgery.
Berelavichus, Stanislav V; Karmazanovsky, Grigory G; Shirokov, Vadim S; Kubyshkin, Valeriy A; Kriger, Andrey G; Kondratyev, Evgeny V; Zakharova, Olga P
2012-06-27
To determine the effectiveness of using multidetector computed tomography (MDCT) data in preoperative planning of robot-assisted surgery. Fourteen patients indicated for surgery underwent MDCT using 64 and 256-slice MDCT. Before the examination, a specially constructed navigation net was placed on the patient's anterior abdominal wall. Processing of MDCT data was performed on a Brilliance Workspace 4 (Philips). Virtual vectors that imitate robotic and assistant ports were placed on the anterior abdominal wall of the 3D model of the patient, considering the individual anatomy of the patient and the technical capabilities of robotic arms. Sites for location of the ports were directed by projection on the roentgen-positive tags of the navigation net. There were no complications observed during surgery or in the post-operative period. We were able to reduce robotic arm interference during surgery. The surgical area was optimal for robotic and assistant manipulators without any need for reinstallation of the trocars. This method allows modeling of the main steps in robot-assisted intervention, optimizing operation of the manipulator and lowering the risk of injuries to internal organs.
Obstacle Avoidance On Roadways Using Range Data
NASA Astrophysics Data System (ADS)
Dunlay, R. Terry; Morgenthaler, David G.
1987-02-01
This report describes range data based obstacle avoidance techniques developed for use on an autonomous road-following robot vehicle. The purpose of these techniques is to detect and locate obstacles present in a road environment for navigation of a robot vehicle equipped with an active laser-based range sensor. Techniques are presented for obstacle detection, obstacle location, and coordinate transformations needed in the construction of Scene Models (symbolic structures representing the 3-D obstacle boundaries used by the vehicle's Navigator for path planning). These techniques have been successfully tested on an outdoor robotic vehicle, the Autonomous Land Vehicle (ALV), at speeds up to 3.5 km/hour.
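The detection-and-location step the report describes, converting polar laser range returns into vehicle-frame Cartesian coordinates and flagging returns that stand above the road plane, can be sketched as follows. This is a hedged illustration, not the ALV's actual Scene Model code; `scan_to_obstacles`, the 2 m sensor height and the 0.2 m clearance are all assumptions made for the example.

```python
import math

def scan_to_obstacles(scan, sensor_height=2.0, clearance=0.2):
    """Convert polar range returns to vehicle-frame points and keep obstacles.

    scan: list of (range_m, azimuth_rad, elevation_rad) tuples, with
    elevation measured from horizontal (negative = looking down at the road).
    Returns the (x, y, z) points that rise above the assumed flat road plane
    by more than `clearance`.
    """
    obstacles = []
    for r, az, el in scan:
        x = r * math.cos(el) * math.cos(az)   # forward
        y = r * math.cos(el) * math.sin(az)   # left
        z = sensor_height + r * math.sin(el)  # height above the road plane
        if z > clearance:                     # road returns land near z = 0
            obstacles.append((x, y, z))
    return obstacles
```

A flat-road return at the expected range projects to z near zero and is discarded; a shorter return along the same beam projects above the clearance and is kept as an obstacle boundary point.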
Interactive intelligent remote operations: application to space robotics
NASA Astrophysics Data System (ADS)
Dupuis, Erick; Gillett, G. R.; Boulanger, Pierre; Edwards, Eric; Lipsett, Michael G.
1999-11-01
A set of tools addressing the problems specific to the control and monitoring of remote robotic systems over extreme distances has been developed. The tools include the capability to model and visualize the remote environment, to generate and edit complex task scripts, to execute the scripts in supervisory control mode, and to monitor and diagnose equipment from multiple remote locations. Two prototype systems were implemented for demonstration. The first demonstration, using a prototype joint design called Dexter, shows the applicability of the approach to space robotic operations in low Earth orbit. The second demonstration uses a remotely controlled excavator in an operational open-pit tar sand mine, showing that the tools developed can be used for planetary exploration operations as well as for terrestrial mining applications.
Virtual collaborative environments: programming and controlling robotic devices remotely
NASA Astrophysics Data System (ADS)
Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.
1995-12-01
This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (such as robots and machine tools). Using appropriate geometric models, integrated sensors, video systems, and computing hardware, computer-controlled resources owned and operated by different entities (in a geographic as well as a legal sense) can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. With this technology, researchers developing new robot control algorithms will be able to validate models and algorithms from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototype parts during product design validation. A prototype architecture and system has been developed and proven: programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from remote locations including Washington, D.C., Washington State, and Southern California.
A New Paradigm for Robotic Rovers
NASA Astrophysics Data System (ADS)
Clark, P. E.; Curtis, S. A.; Rilee, M. L.
We are in the process of developing rovers with the extreme mobility needed to explore remote, rugged terrain. We call these systems Tetrahedral Explorer Technologies (TETs). The architecture is based on conformable tetrahedra, the simplest space-filling form, as building blocks, single or networked, where apices act as nodes from which struts reversibly deploy. The tetrahedral framework acts as a simple skeletal muscular structure. We have already prototyped a simple robotic walker from a single reconfigurable tetrahedron capable of tumbling, and a more evolved 12-tetrahedral walker, the Autonomous Landed Investigator (ALI), which has interior nodes for payload, more continuous motion, and is commandable through a user-friendly interface. ALI is an EMS-level mission concept which would allow autonomous in situ exploration of the lunar poles within the next decade. ALI would consist of one or more 12-tetrahedral walkers capable of rapid locomotion with many degrees of freedom and equipped for navigation in the unilluminated, inaccessible and thus largely unexplored rugged terrain where lunar resources are likely to be found: the polar regions. ALI walkers would act as roving reconnaissance teams for unexplored regions, analyzing samples along the way.
Automating CapCom Using Mobile Agents and Robotic Assistants
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Alena, Richard L.; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail; Shum, Simon J. Buckingham; Shadbolt, Nigel;
2007-01-01
Mobile Agents (MA) is an advanced Extra-Vehicular Activity (EVA) communications and computing system designed to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. MA is voice controlled and provides information verbally to the astronauts through programs called "personal agents." The system partly automates the role of CapCom in Apollo, including monitoring and managing navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. Data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in authentic work contexts, including six years of ethnographic observation of field geology. Analog field experiments in Utah enabled us to discover requirements empirically and to test alternative technologies and protocols. We report on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to serve as a relay on the wireless network.
Real-time adaptive off-road vehicle navigation and terrain classification
NASA Astrophysics Data System (ADS)
Muller, Urs A.; Jackel, Lawrence D.; LeCun, Yann; Flepp, Beat
2013-05-01
We are developing a complete, self-contained autonomous navigation system for mobile robots that learns quickly, uses commodity components, and has the added benefit of emitting no radiation signature. It builds on the autonomous navigation technology developed by Net-Scale and New York University during the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program and takes advantage of recent scientific advancements achieved during the DARPA Deep Learning program. In this paper we will present our approach and algorithms, show results from our vision system, discuss lessons learned from the past, and present our plans for further advancing vehicle autonomy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
EISLER, G. RICHARD
This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled ''Robust Planning for Autonomous Navigation of Mobile Robots In Unstructured, Dynamic Environments (AutoNav)''. The project goal was to develop an algorithmic-driven, multi-spectral approach to point-to-point navigation characterized by: segmented on-board trajectory planning, self-contained operation without human support for mission duration, and the development of appropriate sensors and algorithms to navigate unattended. The project was partially successful in achieving gains in sensing, path planning, navigation, and guidance. One of three experimental platforms, the Minimalist Autonomous Testbed, used a repetitive sense-and-re-plan combination to demonstrate the majority of elements necessary for autonomous navigation. However, a critical goal for overall success in arbitrary terrain, that of developing a sensor that is able to distinguish true obstacles that need to be avoided as a function of vehicle scale, still needs substantial research to bring to fruition.
Vanguard: a Mars exobiology mission proposal using robotic elements
NASA Astrophysics Data System (ADS)
Ellery, A.; Richter, L.; Kolb, C.; Lammer, H.; Parnell, J.; Bertrand, R.; Ball, A.; Patel, M.; Coste, P.; McKee, G.
2003-04-01
We present a new proposal for a European exobiology-focussed robotic Mars mission. This mission is presented as a low-cost successor to the Mars Express/Beagle2 mission. The Mars surface segment is designed within the payload constraints of the current Mars Express bus spacecraft with a mass of 126 kg including the Entry, Descent and Landing System (EDLS). EDLS will be similar to that employed for Beagle2 and Mars Pathfinder. The surface segment will have a total mass of 66 kg including a 34 kg lander, a 26 kg micro-rover and three 1.6 kg moles. The exobiology focus requires that investigation of the Martian sub-surface, below the oxidised layer, be undertaken in search of biomolecular species. The currently favoured site for deployment is the Gusev palaeolake crater. The moles are mounted vertically to the rear of the micro-rover, which will enable a surface traverse of 1-5 km. Each mole will be deployed sequentially at different sites selected during the mission operation. Each mole will penetrate below the projected depth of the oxidised layer (estimated at 2-3 m depth) to a total depth of 5 m. The micro-rover will carry the main scientific instrument pack of a combined confocal imager, Raman spectrometer, infrared spectrometer and laser plasma spectrometer. Each of these instruments enables remote sensing of mineralogy, elemental abundance, biomolecules and water signatures with depth. A dedicated tether from the micro-rover to each mole provides power and optical fibre links from the instruments to the sub-surface targets. As the instruments operate by remote sensing, there is no requirement for the recovery of physical samples, eliminating much of the complexity inherent in recovering the moles. Each mole is thus deployed on a single one-way trajectory to maximum depth, at which point the tether is severed. A minimum of three moles is considered essential in providing replicated depth-profile data sets.
Furthermore, the mission has a specific technology demonstration component in providing a basic demonstration of water-mining as part of an in-situ resource utilisation validation programme - this will be achieved using zeolite caps deployed at the top of each borehole. There are a number of robotics issues inherent in this proposal. First, the micro-rover traverse requires extensive onboard navigation capabilities - we are investigating the use of the elastic loop mobility system for surface negotiation and potential fields as the mode of near-autonomous navigation. Second, the one-way mole trajectory will require a sophisticated onboard expert system to perform quick-look analysis of depth-profile data and make decisions on the control of the mole. The Vanguard mission represents a low-cost robotic Mars mission with a high scientific return and a significant demonstration of the robotic technologies required for future Mars missions. We are currently proposing Vanguard as an Aurora Arrow mission to complement the Aurora ExoMars flagship mission.
Evaluation of a novel flexible snake robot for endoluminal surgery.
Patel, Nisha; Seneci, Carlo A; Shang, Jianzhong; Leibrandt, Konrad; Yang, Guang-Zhong; Darzi, Ara; Teare, Julian
2015-11-01
Endoluminal therapeutic procedures such as endoscopic submucosal dissection are increasingly attractive given the shift in surgical paradigm towards minimally invasive surgery. This novel three-channel articulated robot was developed to overcome the limitations of the flexible endoscope which poses a number of challenges to endoluminal surgery. The device enables enhanced movement in a restricted workspace, with improved range of motion and with the accuracy required for endoluminal surgery. To evaluate a novel flexible robot for therapeutic endoluminal surgery. Bench-top studies. Research laboratory. Targeting and navigation tasks of the robot were performed to explore the range of motion and retroflexion capabilities. Complex endoluminal tasks such as endoscopic mucosal resection were also simulated. Successful completion, accuracy and time to perform the bench-top tasks were the main outcome measures. The robot ranges of movement, retroflexion and navigation capabilities were demonstrated. The device showed significantly greater accuracy of targeting in a retroflexed position compared to a conventional endoscope. Bench-top study and small study sample. We were able to demonstrate a number of simulated endoscopy tasks such as navigation, targeting, snaring and retroflexion. The improved accuracy of targeting whilst in a difficult configuration is extremely promising and may facilitate endoluminal surgery which has been notoriously challenging with a conventional endoscope.
McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T
2018-02-01
Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Benavides, Jose; Provencher, Chris; Bualat, Maria; Smith, Marion F.; Mora Vargas, Andres
2017-01-01
At the end of 2017, Astrobee will launch three free-flying robots that will navigate the entire US segment of the ISS (International Space Station) and serve as a payload facility. These robots will provide guest science payloads with processor resources, space within the robot for physical attachment, power, communication, propulsion, and human interfaces.
Robotic Technology Development at Ames: The Intelligent Robotics Group and Surface Telerobotics
NASA Technical Reports Server (NTRS)
Bualat, Maria; Fong, Terrence
2013-01-01
Future human missions to the Moon, Mars, and other destinations offer many new opportunities for exploration. But, astronaut time will always be limited and some work will not be feasible for humans to do manually. Robots, however, can complement human explorers, performing work autonomously or under remote supervision from Earth. Since 2004, the Intelligent Robotics Group has been working to make human-robot interaction efficient and effective for space exploration. A central focus of our research has been to develop and field test robots that benefit human exploration. Our approach is inspired by lessons learned from the Mars Exploration Rovers, as well as human spaceflight programs, including Apollo, the Space Shuttle, and the International Space Station. We conduct applied research in computer vision, geospatial data systems, human-robot interaction, planetary mapping and robot software. In planning for future exploration missions, architecture and study teams have made numerous assumptions about how crew can be telepresent on a planetary surface by remotely operating surface robots from space (i.e. from a flight vehicle or deep space habitat). These assumptions include estimates of technology maturity, existing technology gaps, and likely operational and functional risks. These assumptions, however, are not grounded by actual experimental data. Moreover, no crew-controlled surface telerobotic system has yet been fully tested, or rigorously validated, through flight testing. During Summer 2013, we conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover across short time delays. The tests simulated portions of a proposed human-robotic Lunar Waypoint mission, in which astronauts in lunar orbit remotely operate a planetary rover on the lunar Farside to deploy a radio telescope array. We used these tests to obtain baseline-engineering data.
A Mobile Robot for Remote Response to Incidents Involving Hazardous Materials
NASA Technical Reports Server (NTRS)
Welch, Richard V.
1994-01-01
This paper will describe a teleoperated mobile robot system being developed at JPL for use by the JPL Fire Department/HAZMAT Team. The project, which began in October 1990, is focused on prototyping a robotic vehicle which can be quickly deployed and easily operated by HAZMAT Team personnel allowing remote entry and exploration of a hazardous material incident site. The close involvement of JPL Fire Department personnel has been critical in establishing system requirements as well as evaluating the system. The current robot, called HAZBOT III, has been especially designed for operation in environments that may contain combustible gases. Testing of the system with the Fire Department has shown that teleoperated robots can successfully gain access to incident sites allowing hazardous material spills to be remotely located and identified. Work is continuing to enable more complex missions through enhancement of the operator interface and by allowing tetherless operation.
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Tso, Kam S. (Inventor)
1993-01-01
This invention relates to an operator interface for controlling a telerobot to perform tasks in a poorly modeled environment and/or within unplanned scenarios. The telerobot control system includes a remote robot manipulator linked to an operator interface. The operator interface includes a setup terminal, simulation terminal, and execution terminal for the control of the graphics simulator and local robot actuator as well as the remote robot actuator. These terminals may be combined in a single terminal. Complex tasks are developed from sequential combinations of parameterized task primitives and recorded teleoperations, and are tested by execution on a graphics simulator and/or local robot actuator, together with adjustable time delays. The novel features of this invention include the shared and supervisory control of the remote robot manipulator via operator interface by pretested complex tasks sequences based on sequences of parameterized task primitives combined with further teleoperation and run-time binding of parameters based on task context.
Designing speech-based interfaces for telepresence robots for people with disabilities.
Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David
2013-06-01
People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.
Plinkert, P K; Federspil, P A; Plinkert, B; Henrich, D
2002-03-01
Excellent precision, absence of fatigue, and reproducibility are the main characteristics of robots in the operating theatre. Because of these qualities, their use for surgery at the lateral skull base is of great interest. In earlier experiments we determined process parameters for robot-assisted reaming of a cochlear implant bed and for a mastoidectomy. These results suggested that the drilling parameters for the robot needed to be optimized. We therefore implemented a suitable reaming curve, derived from the geometric data of the implant, and a force-controlled process control for robot-assisted reaming at the lateral skull base. Experiments were performed with an industrial robot on animal and human skull base specimens. Online force detection and feedback of the sensor data allowed the reaming process to be controlled: when force values rose above a defined limit, the feed rate was automatically reduced. Furthermore, we were able to detect contact of the drill with the dura mater by analyzing the force values. With the new computer program the desired implant bed was prepared exactly. Our experiments demonstrated successful robotic reaming of an implant bed in the lateral skull base. Force-controlled reaming permits local navigation and enables careful drilling with a robot.
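The force-controlled process described above, in which the feed rate is reduced when the measured force exceeds a limit and dura contact is inferred from a sudden force drop, can be sketched as a simple control rule. This is a hedged illustration only; the function names, the 5 N limit, the gain and the drop ratio are invented for the example and are not the paper's values.

```python
def regulate_feed(force, feed, force_limit=5.0, gain=0.5, min_feed=0.1):
    """Return the new feed rate (mm/s) given the measured axial force (N).

    When the force exceeds the limit, the feed is backed off in proportion
    to the overshoot, never dropping below a minimum creep feed.
    """
    if force > force_limit:
        feed = max(min_feed, feed - gain * (force - force_limit))
    return feed

def dura_contact(forces, drop_ratio=0.3):
    """Flag breakthrough when the latest force reading drops to a small
    fraction of the recent peak (resistance vanishes past the bone)."""
    peak = max(forces)
    return forces[-1] < drop_ratio * peak
```

In a real controller both rules would run inside the servo loop on filtered force data; this sketch only shows the decision logic.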
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-12-26
Autonomous mobile robots have become a very popular and interesting topic in the last decade. They are equipped with various types of sensors, such as GPS, cameras, and infrared and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Sensor fusion helps to solve this problem and enhance overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line (path) following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which combines the fuzzy-logic-based collision avoidance model with a line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented, with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
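A minimal sketch of the fuzzy-style fusion idea, reduced to two distance sensors driving the two wheel velocities (the actual system uses nine inputs and 24 rules). The membership function and all numeric values here are illustrative assumptions, not the paper's.

```python
def near(d, threshold=0.25):
    """Degree (0..1) to which distance d is 'near', a triangular
    membership function over an assumed 0.25 m threshold."""
    if d >= threshold:
        return 0.0
    return 1.0 - d / threshold

def wheel_velocities(d_left, d_right, cruise=0.5):
    """Map left/right obstacle distances to left/right wheel speeds.

    An obstacle on the left (near(d_left) high) slows the right wheel,
    so the left wheel outruns it and the robot turns right, away from
    the obstacle; symmetric for the right side.
    """
    n_l, n_r = near(d_left), near(d_right)
    v_left = cruise * (1.0 - n_r)
    v_right = cruise * (1.0 - n_l)
    return v_left, v_right
```

With both distances clear the robot cruises straight; as one side closes in, the differential grows smoothly rather than switching abruptly, which is the practical benefit of the fuzzy formulation.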
Basic Operational Robotics Instructional System
NASA Technical Reports Server (NTRS)
Todd, Brian Keith; Fischer, James; Falgout, Jane; Schweers, John
2013-01-01
The Basic Operational Robotics Instructional System (BORIS) is a six-degree-of-freedom rotational robotic manipulator system simulation used for training of fundamental robotics concepts, with in-line shoulder, offset elbow, and offset wrist. BORIS is used to provide generic robotics training to aerospace professionals including flight crews, flight controllers, and robotics instructors. It uses forward kinematic and inverse kinematic algorithms to simulate joint and end-effector motion, combined with a multibody dynamics model, moving-object contact model, and X-Windows based graphical user interfaces, coordinated in the Trick Simulation modeling environment. The motivation for development of BORIS was the need for a generic system for basic robotics training. Before BORIS, introductory robotics training was done with either the SRMS (Shuttle Remote Manipulator System) or SSRMS (Space Station Remote Manipulator System) simulations. The unique construction of each of these systems required some specialized training that distracted students from the ideas and goals of the basic robotics instruction.
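The forward and inverse kinematic algorithms a BORIS-style trainer exercises can be illustrated on a reduced two-link planar arm (the real simulation has six rotational joints). The link lengths and the elbow-down branch choice below are assumptions made for the sketch.

```python
import math

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector (x, y) for joint angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y, l1=1.0, l2=1.0):
    """One (elbow-down) joint solution; raises ValueError if the
    target lies outside the reachable workspace."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target outside workspace")
    theta2 = math.acos(c2)  # law of cosines gives the elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A round trip (inverse of forward) recovers the original joint angles, which is the basic consistency check such a simulator must pass.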
NASA Astrophysics Data System (ADS)
Popov, E. P.; Iurevich, E. I.
The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.
Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development.
Bostelman, Roger; Hong, Tsai; Legowik, Steven
2016-01-01
Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems.
Autonomous navigation method for substation inspection robot based on travelling deviation
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting
2017-06-01
A new edge-detection method is proposed for the substation environment, which enables autonomous navigation of the substation inspection robot. First, the road image and related information are obtained with an image acquisition device. Second, noise in a region of interest selected from the road image is removed with a digital image processing algorithm, road edges are extracted with the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated to obtain the travelling deviation. The robot's route is then controlled according to the travelling deviation and a preset threshold. Experimental results show that the proposed method detects the road area in real time, with high accuracy and stable performance.
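The final step of the method, turning the two boundary distances into a travelling deviation checked against a preset threshold, can be sketched directly; the Canny/Hough boundary extraction itself is omitted. The 0.05 m threshold and the sign convention are illustrative assumptions.

```python
def travel_deviation(dist_left, dist_right):
    """Signed offset of the robot from the road centreline, in the same
    units as the inputs (positive = robot sits right of centre)."""
    return (dist_left - dist_right) / 2.0

def steering_command(dist_left, dist_right, threshold=0.05):
    """Compare the deviation against a preset threshold and decide."""
    dev = travel_deviation(dist_left, dist_right)
    if dev > threshold:
        return "steer_left"    # drifted right of centre: correct left
    if dev < -threshold:
        return "steer_right"
    return "straight"
```

The dead band around zero keeps the robot from oscillating on small deviations, which is why the method compares against a threshold rather than steering continuously.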
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
Solving Navigational Uncertainty Using Grid Cells on Robots
Milford, Michael J.; Wiles, Janet; Wyeth, Gordon F.
2010-01-01
To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. 
We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments. PMID:21085643
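The core idea — carrying multiple pose hypotheses through perceptually ambiguous observations until they resolve — can be illustrated with a toy histogram Bayes filter (the corridor, door positions and likelihood values are invented for illustration; RatSLAM itself uses a continuous attractor network of conjunctive grid cells, not this filter):

```python
import numpy as np

# Corridor of 10 cells with identical doors at cells 2 and 5: a single door
# sighting is perceptually ambiguous, as in the paper's aliasing paradigm.
n, doors = 10, {2, 5}
belief = np.full(n, 1.0 / n)                   # uniform prior over pose

def sense(belief, saw_door):
    """Bayesian correction: weight each cell by the observation likelihood."""
    like = np.array([0.9 if ((i in doors) == saw_door) else 0.1 for i in range(n)])
    post = belief * like
    return post / post.sum()

def move(belief, step):
    """Path integration: shift the belief; mass that leaves the corridor is lost."""
    post = np.zeros(n)
    for i in range(n):
        if 0 <= i + step < n:
            post[i + step] += belief[i]
    return post / post.sum()

belief = sense(belief, True)                         # see a door: two hypotheses survive
hypotheses = set(np.argsort(belief)[-2:].tolist())   # the two door cells
belief = move(belief, 3)                             # advance three cells
belief = sense(belief, True)                         # a second door resolves the ambiguity
winner = int(np.argmax(belief))                      # only the 2 -> 5 path fits both sightings
```

After the second observation only the hypothesis that started at cell 2 is consistent with both door sightings, so the belief collapses onto cell 5 — the same resolution-over-time behavior the paper reports for conjunctive grid cells.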
NASA Technical Reports Server (NTRS)
Welch, Richard V.; Edmonds, Gary O.
1994-01-01
The use of robotics in situations involving hazardous materials can significantly reduce the risk of human injuries. The Emergency Response Robotics Project, which began in October 1990 at the Jet Propulsion Laboratory, is developing a teleoperated mobile robot allowing HAZMAT (hazardous materials) teams to respond remotely to incidents involving hazardous materials. The current robot, called HAZBOT III, can assist in locating, characterizing, identifying, and mitigating hazardous material incidents without risking entry team personnel. The active involvement of the JPL Fire Department HAZMAT team has been vital in developing a robotic system that enables them to perform remote reconnaissance of a HAZMAT incident site. This paper provides a brief review of the history of the project, discusses the current system in detail, and presents other areas in which robotics can be applied to remove people from hazardous environments and operations.
The Evolution of Computer-Assisted Total Hip Arthroplasty and Relevant Applications.
Chang, Jun-Dong; Kim, In-Sung; Bhardwaj, Atul M; Badami, Ramachandra N
2017-03-01
In total hip arthroplasty (THA), accurate positioning of implants is the key to achieving a good clinical outcome. Computer-assisted orthopaedic surgery (CAOS) has been developed for more accurate positioning of implants during THA. There are passive, semi-active, and active CAOS systems for THA. Navigation is a passive system that only provides information and guidance to the surgeon. There are 3 types of navigation: imageless navigation, computed tomography (CT)-based navigation, and fluoroscopy-based navigation. In imageless navigation systems, a new method of registration has been introduced that avoids the need to register the anterior pelvic plane. CT-based navigation can efficiently use the functional pelvic plane in the supine position as a reference, adjusting the anterior pelvic plane's sagittal tilt when targeting cup orientation. Robot-assisted systems can be either active or semi-active. The active robotic system performs the preparation for implant positioning as programmed preoperatively. It has been used only for femoral implant cavity preparation; recently, a program for cup positioning was additionally developed. Alternatively, for ease of surgeon acceptance, semi-active robot systems have been developed. These were initially applied only to cup positioning, but with the development of enhanced femoral workflows they can now be used to position both cup and stem. Though there have been substantial advancements in computer-assisted THA, its use can still be controversial at present due to the steep learning curve, intraoperative technical issues, and high cost. However, in the future, CAOS will certainly enable surgeons to operate more accurately and lead to improved outcomes in THA as the technology continues to evolve rapidly.
Ultra wide-band localization and SLAM: a comparative study for mobile robot navigation.
Segura, Marcelo J; Auat Cheein, Fernando A; Toibero, Juan M; Mut, Vicente; Carelli, Ricardo
2011-01-01
In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel a UWB localization-based system and a SLAM algorithm on a mobile robot navigating within an environment. Real-time results as well as an error analysis are also shown in this work.
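The TOA-based position fix that UWB enables can be sketched as a linear least-squares multilateration (the anchor layout and noise-free ranges below are assumptions for illustration; a real UWB system must also handle clock offsets and multipath):

```python
import numpy as np

# Hypothetical UWB anchor layout (m) and a tag position to be recovered
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])
tag = np.array([3.0, 2.0])
c = 299_792_458.0                                   # propagation speed, m/s
toa = np.linalg.norm(anchors - tag, axis=1) / c     # ideal times of arrival
ranges = toa * c                                    # TOA converted back to ranges

def multilaterate(anchors, ranges):
    """Linear least-squares fix: subtracting the first range equation from
    the others cancels the quadratic |p|^2 term, leaving A p = b."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

est = multilaterate(anchors, ranges)                # recovers the tag position
```

With four anchors the system is overdetermined, so the same least-squares step also averages out small range noise.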
Ernst, Sabine; Chun, Julian K R; Koektuerk, Buelent; Kuck, Karl-Heinz
2009-01-01
We report on a 63-year-old female patient in whom an electrophysiologic study discovered a hemi-azygos continuation. Using the magnetic navigation system, remote-controlled ablation was performed in conjunction with the 3D electroanatomical mapping system. Failing the attempt to advance a diagnostic catheter from the femoral vein, a diagnostic catheter was advanced via the left subclavian vein into the coronary sinus. The soft magnetic catheter was positioned in the right atrium via the hemi-azygos vein, and 3D mapping demonstrated an ectopic atrial tachycardia. Successful ablation was performed entirely remote controlled. Fluoroscopy time was only 7.1 minutes, of which 45 seconds were required during remote navigation. Remote-controlled catheter ablation using magnetic navigation in conjunction with the electroanatomical mapping system proved to be a valuable tool to perform successful ablation in the presence of a hemi-azygos continuation.
NASA Technical Reports Server (NTRS)
Tachi, Susumu; Arai, Hirohiko; Maeda, Taro
1989-01-01
Tele-existence is an advanced type of teleoperation system that enables a human operator at the controls to perform remote manipulation tasks dexterously, with the feeling that he or she exists in the remote anthropomorphic robot in the remote environment. The concept of tele-existence is presented, the principle of the tele-existence display method is explained, some of the prototype systems are described, and its space application is discussed.
Closing the Loop: Control and Robot Navigation in Wireless Sensor Networks
2006-09-05
University of California at Berkeley Technical Report No. UCB/EECS-2006-112, September 5, 2006. http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-112.html
Learning Probabilistic Features for Robotic Navigation Using Laser Sensors
Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José
2014-01-01
SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N^2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377
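The O(N) scaling can be illustrated with independent per-feature log-odds updates — one Bayesian correction per map feature, so a sensor update is a single linear sweep (a generic sketch, not the authors' model; the match probabilities are invented):

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def fuse(log_odds, match_probs):
    """One Bayesian update per map feature: with independent features the
    whole pass is a single O(N) sweep, not an O(N^2) covariance update."""
    return [l + logit(p) for l, p in zip(log_odds, match_probs)]

# Three map features, all starting at probability 0.5 (log-odds 0)
log_odds = [0.0, 0.0, 0.0]
log_odds = fuse(log_odds, [0.9, 0.5, 0.2])            # one scan's evidence
probs = [1.0 / (1.0 + math.exp(-l)) for l in log_odds]
```

Repeated scans simply add their log-odds, so evidence accumulates without revisiting other features — the independence assumption is exactly what buys the linear cost.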
Perception system and functions for autonomous navigation in a natural environment
NASA Technical Reports Server (NTRS)
Chatila, Raja; Devy, Michel; Lacroix, Simon; Herrb, Matthieu
1994-01-01
This paper presents the approach, algorithms, and processes we developed for the perception system of a cross-country autonomous robot. After a presentation of the tele-programming context we favor for intervention robots, we introduce an adaptive navigation approach, well suited to the characteristics of complex natural environments. This approach led us to develop a heterogeneous perception system that manages several different terrain representations. The perception functionalities required during navigation are listed, along with the corresponding representations we consider. The main perception processes we developed are presented; they are integrated within an on-board control architecture. First results of an ambitious experiment currently underway at LAAS are then presented.
Towards Supervising Remote Dexterous Robots Across Time Delay
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Wheeler, Kevin; Rabe, Ken
2006-01-01
The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will be required to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars exploration, uploading the plan for a day seems excessive. An approach for controlling dexterous robots under intermediate time delay is presented, in which software running within a ground control cockpit predicts the intention of an immersed robot supervisor, and the remote robot then autonomously executes the supervisor's intended tasks. Initial results are presented.
High Speed Lunar Navigation for Crewed and Remotely Piloted Vehicles
NASA Technical Reports Server (NTRS)
Pedersen, L.; Allan, M.; To, V.; Utz, H.; Wojcikiewicz, W.; Chautems, C.
2010-01-01
Increased navigation speed is desirable for lunar rovers, whether autonomous, crewed or remotely operated, but is hampered by the low gravity, high-contrast lighting and rough terrain. We describe a lidar-based navigation system deployed on NASA's K10 autonomous rover to increase the terrain-hazard situational awareness of the Lunar Electric Rover crew.
Mobile robot navigation modulated by artificial emotions.
Lee-Johnson, C P; Carnegie, D A
2010-04-01
For artificial intelligence research to progress beyond the highly specialized task-dependent implementations achievable today, researchers may need to incorporate aspects of biological behavior that have not traditionally been associated with intelligence. Affective processes such as emotions may be crucial to the generalized intelligence possessed by humans and animals. A number of robots and autonomous agents have been created that can emulate human emotions, but the majority of this research focuses on the social domain. In contrast, we have developed a hybrid reactive/deliberative architecture that incorporates artificial emotions to improve the general adaptive performance of a mobile robot for a navigation task. Emotions are active on multiple architectural levels, modulating the robot's decisions and actions to suit the context of its situation. Reactive emotions interact with the robot's control system, altering its parameters in response to appraisals from short-term sensor data. Deliberative emotions are learned associations that bias path planning in response to eliciting objects or events. Quantitative results are presented that demonstrate situations in which each artificial emotion can be beneficial to performance.
Neural networks for satellite remote sensing and robotic sensor interpretation
NASA Astrophysics Data System (ADS)
Martens, Siegfried
Remote sensing of forests and robotic sensor fusion can be viewed, in part, as supervised learning problems, mapping from sensory input to perceptual output. This dissertation develops ARTMAP neural networks for real-time category learning, pattern recognition, and prediction tailored to remote sensing and robotics applications. Three studies are presented. The first two use ARTMAP to create maps from remotely sensed data, while the third uses an ARTMAP system for sensor fusion on a mobile robot. The first study uses ARTMAP to predict vegetation mixtures in the Plumas National Forest based on spectral data from the Landsat Thematic Mapper satellite. While most previous ARTMAP systems have predicted discrete output classes, this project develops new capabilities for multi-valued prediction. On the mixture prediction task, the new network is shown to perform better than maximum likelihood and linear mixture models. The second remote sensing study uses an ARTMAP classification system to evaluate the relative importance of spectral and terrain data for map-making. This project has produced a large-scale map of remotely sensed vegetation in the Sierra National Forest. Network predictions are validated with ground truth data, and maps produced using the ARTMAP system are compared to a map produced by human experts. The ARTMAP Sierra map was generated in an afternoon, while the labor intensive expert method required nearly a year to perform the same task. The robotics research uses an ARTMAP system to integrate visual information and ultrasonic sensory information on a B14 mobile robot. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. ARTMAP effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. 
The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation
NASA Technical Reports Server (NTRS)
Tunstel, Edward
2000-01-01
This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy-behaviors is emphasized, which is potentially useful for tuning navigation performance in real world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided based on computer simulations.
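Threshold activation of fuzzy behaviors can be sketched as gating a weighted-average command fusion: each behavior produces a steering command and an applicability degree, and degrees below the threshold are switched off before blending (the membership shapes, threshold and commands below are illustrative assumptions, not the rover's actual behavior set):

```python
def falling(x, full, zero):
    """Fuzzy membership: 1 at or below `full`, ramping down to 0 at `zero`."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def blend(behaviors, threshold=0.2):
    """Weighted-average command fusion with threshold activation: behaviors
    whose applicability weight falls below the threshold are gated out."""
    active = [(w, u) for w, u in behaviors if w >= threshold]
    return sum(w * u for w, u in active) / sum(w for w, _ in active)

# Hypothetical situation: obstacle 2.5 m away, goal 40 degrees to the left
avoid_w = falling(2.5, 0.5, 2.0)          # obstacle avoidance only applies when close
seek_w = falling(abs(40.0), 0.0, 180.0)   # goal seeking weakens with bearing error
steer = blend([(avoid_w, -90.0), (seek_w, 40.0)])
# the avoidance behavior falls below threshold here, so steering follows the goal
```

Without the threshold, a tiny residual avoidance weight would still tug the command away from the goal; gating removes that dilution, which is the tuning effect the paper examines.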
Lee, Kit-Hang; Fu, Denny K.C.; Leong, Martin C.W.; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong
2017-01-01
Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of their inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation is achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions on the soft manipulator, which can be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric, online, and local training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. A detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. An advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments. PMID:29251567
Optimizing Aerobot Exploration of Venus
NASA Astrophysics Data System (ADS)
Ford, Kevin S.
1997-03-01
Venus Flyer Robot (VFR) is an aerobot: an autonomous balloon probe designed for remote exploration of Earth's sister planet in 2003. VFR's simple navigation and control system permits travel to virtually any location on Venus, but it can survive for only a limited duration in the harsh Venusian environment. To help address this limitation, we develop: (1) a global circulation model that captures the most important characteristics of the Venusian atmosphere; (2) a simple aerobot model that captures the thermal restrictions faced by VFR at Venus; and (3) one exact and two heuristic algorithms that, using abstractions (1) and (2), construct routes making the best use of VFR's limited lifetime. We demonstrate this modeling by planning several small example missions and a prototypical mission that explores numerous interesting sites recently documented in the planetary geology literature.
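A heuristic of the budgeted-route flavor described above might look like the following greedy sketch (a toy one-dimensional stand-in; the paper's algorithms operate on the circulation and thermal models, which are not reproduced here):

```python
def greedy_route(start, sites, cost, budget):
    """Greedy lifetime-budget heuristic: repeatedly fly to the cheapest
    unvisited site that still fits within the remaining lifetime."""
    route, here, left = [start], start, budget
    todo = set(sites)
    while todo:
        nxt = min(todo, key=lambda s: cost(here, s))   # cheapest next site
        if cost(here, nxt) > left:
            break                                      # lifetime exhausted
        left -= cost(here, nxt)
        route.append(nxt)
        todo.remove(nxt)
        here = nxt
    return route

# Toy 1-D sites with travel cost equal to distance and a lifetime budget of 10
cost = lambda a, b: abs(a - b)
route = greedy_route(0, [3, 5, 9, 20], cost, 10)   # visits 3, 5, 9; 20 is out of reach
```

An exact algorithm would instead search over visit orders (this is an orienteering-style problem), which is what makes the heuristics attractive for longer site lists.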
NASA Johnson Space Center: Mini AERCam Testing with GSS6560
NASA Technical Reports Server (NTRS)
Cryant, Scott P.
2004-01-01
This slide presentation reviews testing of the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) with the GSS6560 GPS/SBAS simulation system. Several GPS-based programs at NASA Johnson are listed, including Shuttle testing of the GPS system and Space Integrated GPS/INS (SIGI) testing. Information about the standalone ISS SIGI test and testing of the SIGI for the Crew Return Vehicle is also included. The Mini AERCam is a small, free-flying camera for remote inspections of the ISS; it uses precise relative navigation with differential carrier-phase GPS to provide situational awareness to operators. Closed-loop orbital testing of the Mini AERCam system, with and without the GSS6560, is reviewed.
Bhatia, Parisha; Mohamed, Hossam Eldin; Kadi, Abida; Walvekar, Rohan R.
2015-01-01
Robot-assisted thyroid surgery has been the latest advance in the evolution of thyroid surgery after endoscopy-assisted procedures. The advantages of superior field vision and the technical advancements of robotic technology have permitted novel remote-access (trans-axillary and retro-auricular) surgical approaches. Interestingly, several remote-access surgical ports using the robotic surgical system and endoscopic techniques have been customized to avoid the social stigma of a visible scar. The current literature reports their various advantages in terms of post-operative outcomes; however, the associated financial burden and the additional training and expertise necessary hinder their widespread adoption into endocrine surgery practices. These approaches offer excellent cosmesis, a shorter learning curve, and reduced discomfort for surgeons operating ergonomically through a robotic console. This review aims to provide details of the various remote-access techniques being offered for thyroid resection. Though these have been reported to be safe and feasible approaches for thyroid surgery, their efficacy still remains to be evaluated. PMID:26425450
A spatial registration method for navigation system combining O-arm with spinal surgery robot
NASA Astrophysics Data System (ADS)
Bai, H.; Song, G. L.; Zhao, Y. W.; Liu, X. Z.; Jiang, Y. X.
2018-05-01
Minimally invasive spinal surgery has become increasingly popular in recent years, as it reduces the chance of post-operative complications. However, the procedure is complicated and the surgical field of view in minimally invasive surgery is limited. In order to increase the quality of percutaneous pedicle screw placement, the O-arm, a mobile intraoperative imaging system, is used to assist surgery. With the extensive use of the O-arm, robot navigation systems combined with it are also increasingly common. One of the major problems in a surgical navigation system is associating the patient space with the intra-operative image space. This study proposes a spatial registration method for a spinal surgical robot navigation system, which uses the O-arm to scan a calibration phantom containing metal calibration spheres. First, the metal artifacts in the CT slices are reduced, and the circles in the images are identified based on moment invariants. Further, the positions of the calibration spheres in the image space are obtained. Moreover, the registration matrix is obtained based on the ICP algorithm. Finally, the position error is calculated to verify the feasibility and accuracy of the registration method.
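With known sphere correspondences, the least-squares rigid transform at the heart of each ICP iteration has a closed form (the Kabsch/SVD solution). The sketch below uses synthetic sphere centers and a made-up transform, not the paper's data:

```python
import numpy as np

def rigid_register(P, Q):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping point
    set P onto Q; this is the alignment step inside each ICP iteration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Synthetic calibration-sphere centers in "patient" space (mm)
rng = np.random.default_rng(0)
P = rng.uniform(-50.0, 50.0, (6, 3))
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 30.0])
Q = P @ R_true.T + t_true                 # the same centers seen in image space
R, t = rigid_register(P, Q)
err = np.linalg.norm(P @ R.T + t - Q, axis=1).max()   # fiducial registration error
```

In a real phantom the sphere centers carry segmentation noise, so `err` (the fiducial registration error) is nonzero and serves as the accuracy check the study describes.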
Service Oriented Robotic Architecture for Space Robotics: Design, Testing, and Lessons Learned
NASA Technical Reports Server (NTRS)
Fluckiger, Lorenzo Jean Marc E; Utz, Hans Heinrich
2013-01-01
This paper presents the lessons learned from six years of experiments with planetary rover prototypes running the Service Oriented Robotic Architecture (SORA) developed by the Intelligent Robotics Group (IRG) at the NASA Ames Research Center. SORA relies on proven software engineering methods and technologies applied to space robotics. Based on a Service Oriented Architecture and robust middleware, SORA encompasses on-board robot control and a full suite of software tools necessary for remotely operated exploration missions. SORA has been field tested in numerous scenarios of robotic lunar and planetary exploration. The experiments conducted by IRG with SORA exercise a large set of the constraints encountered in space applications: remote robotic assets, flight-relevant science instruments, distributed operations, high network latencies and unreliable or intermittent communication links. In this paper, we present the results of these field tests in regard to the developed architecture, and discuss its benefits and limitations.
NASA Technical Reports Server (NTRS)
Burns, Richard D. (Inventor); Cepollina, Frank J. (Inventor); Jedhrich, Nicholas M. (Inventor); Holz, Jill M. (Inventor); Corbo, James E. (Inventor)
2008-01-01
This invention is a method and supporting apparatus for autonomously capturing, servicing and de-orbiting a free-flying spacecraft, such as a satellite, using robotics. The capture of the spacecraft includes the steps of optically seeking and ranging the satellite using LIDAR; and matching tumble rates, rendezvousing and berthing with the satellite. Servicing of the spacecraft may be done using supervised autonomy, which is allowing a robot to execute a sequence of instructions without intervention from a remote human-occupied location. These instructions may be packaged at the remote station in a script and uplinked to the robot for execution upon remote command giving authority to proceed. Alternately, the instructions may be generated by Artificial Intelligence (AI) logic onboard the robot. In either case, the remote operator maintains the ability to abort an instruction or script at any time, as well as the ability to intervene using manual override to teleoperate the robot. In one embodiment, a vehicle used for carrying out the method of this invention comprises an ejection module, which includes the robot, and a de-orbit module. Once servicing is completed by the robot, the ejection module separates from the de-orbit module, leaving the de-orbit module attached to the satellite for de-orbiting the same at a future time. Upon separation, the ejection module can either de-orbit itself or rendezvous with another satellite for servicing. The ability to de-orbit a spacecraft further allows the opportunity to direct the landing of the spent satellite in a safe location away from population centers, such as the ocean.
NASA Technical Reports Server (NTRS)
Burns, Richard D. (Inventor); Jedhrich, Nicholas M. (Inventor); Cepollina, Frank J. (Inventor); Holz, Jill M. (Inventor); Corbo, James E. (Inventor)
2007-01-01
This invention is a method and supporting apparatus for autonomously capturing, servicing and de-orbiting a free-flying spacecraft, such as a satellite, using robotics. The capture of the spacecraft includes the steps of optically seeking and ranging the satellite using LIDAR; and matching tumble rates, rendezvousing and berthing with the satellite. Servicing of the spacecraft may be done using supervised autonomy, which is allowing a robot to execute a sequence of instructions without intervention from a remote human-occupied location. These instructions may be packaged at the remote station in a script and uplinked to the robot for execution upon remote command giving authority to proceed. Alternately, the instructions may be generated by Artificial Intelligence (AI) logic onboard the robot. In either case, the remote operator maintains the ability to abort an instruction or script at any time, as well as the ability to intervene using manual override to teleoperate the robot.In one embodiment, a vehicle used for carrying out the method of this invention comprises an ejection module, which includes the robot, and a de-orbit module. Once servicing is completed by the robot, the ejection module separates from the de-orbit module, leaving the de-orbit module attached to the satellite for de-orbiting the same at a future time. Upon separation, the ejection module can either de-orbit itself or rendezvous with another satellite for servicing. The ability to de-orbit a spacecraft further allows the opportunity to direct the landing of the spent satellite in a safe location away from population centers, such as the ocean.
NASA Technical Reports Server (NTRS)
Holz, Jill M. (Inventor); Corbo, James E. (Inventor); Burns, Richard D. (Inventor); Cepollina, Frank J. (Inventor); Jedhrich, Nicholas M. (Inventor)
2009-01-01
NASA Technical Reports Server (NTRS)
Burns, Richard D. (Inventor); Cepollina, Frank J. (Inventor); Jedhrich, Nicholas M. (Inventor); Holz, Jill M. (Inventor); Corbo, James E. (Inventor)
2007-01-01
Reactive navigation in extremely dense and highly intricate environments
2017-01-01
Reactive navigation is a well-known paradigm for controlling an autonomous mobile robot, which suggests making all control decisions through some light processing of the current/recent sensor data. Among the many advantages of this paradigm are: 1) the possibility to apply it to robots with limited and low-priced hardware resources, and 2) the ability to safely navigate a robot in completely unknown environments containing unpredictable moving obstacles. As a major disadvantage, nevertheless, the reactive paradigm may occasionally cause robots to get trapped in certain areas of the environment—typically, these conflicting areas have a large concave shape and/or are full of closely-spaced obstacles. Over the last two decades, an enormous effort has been devoted to overcoming this serious drawback, and as a result a substantial number of new approaches for reactive navigation have been put forward. Some of these approaches have clearly improved how a reactively-controlled robot can move among densely cluttered obstacles; others have focused on increasing the variety of obstacle shapes and sizes that can be successfully circumnavigated. In this paper, as a starting point, we choose the best existing reactive approach for moving in densely cluttered environments, together with the existing reactive approach with the greatest ability to circumvent large intricate-shaped obstacles. We then combine these two approaches in a way that makes the most of each. From the experimental point of view, we use both simulated and real scenarios of challenging complexity for testing purposes. In such scenarios, we demonstrate that the combined approach proposed herein clearly outperforms the two individual approaches on which it is built. PMID:29287078
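As an illustration of the reactive paradigm described above, a minimal controller can make each steering decision from the current range scan alone, picking the admissible direction closest to the goal. The clearance threshold and scan format here are illustrative assumptions, not the paper's algorithm; note how the no-free-gap case reproduces the "trapped" failure mode the abstract discusses.

```python
import math

def reactive_heading(scan, goal_bearing, clearance=1.0):
    """Pick the free direction closest to the goal from the current scan.

    scan: list of (bearing_rad, range_m) pairs; returns None when trapped.
    """
    free = [b for b, r in scan if r > clearance]    # admissible directions
    if not free:
        return None                                 # no free gap: trapped
    return min(free, key=lambda b: abs(b - goal_bearing))

# An obstacle dead ahead blocks the goal direction (0 rad); steer around it.
scan = [(math.radians(d), 0.5 if -20 <= d <= 20 else 5.0)
        for d in range(-90, 91, 10)]
h = reactive_heading(scan, goal_bearing=0.0)
```

Because the decision uses only the latest scan, it runs on very modest hardware, which is the first advantage listed above.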
A Remote Lab for Experiments with a Team of Mobile Robots
Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio
2014-01-01
In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab. PMID:25192316
A remote lab for experiments with a team of mobile robots.
Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio
2014-09-04
A motorized ultrasound system for MRI-ultrasound fusion guided prostatectomy
NASA Astrophysics Data System (ADS)
Seifabadi, Reza; Xu, Sheng; Pinto, Peter; Wood, Bradford J.
2016-03-01
Purpose: This study presents MoTRUS, a motorized transrectal ultrasound system that enables remote navigation of a transrectal ultrasound (TRUS) probe during da Vinci assisted prostatectomy. MoTRUS not only provides a stable platform for the ultrasound probe, but also allows the physician to navigate it remotely while seated at the da Vinci console. The study also presents a phantom feasibility study of intraoperative MRI-US image fusion, whose goal is to bring preoperative MR images into the operating room for the best visualization of the gland, its boundaries, nerves, etc. Method: A two degree-of-freedom probe holder was developed to insert and rotate a bi-plane transrectal ultrasound transducer. A custom joystick enables remote navigation of MoTRUS. Safety features were included to avoid inadvertent risks to the patient. Custom software was developed to fuse preoperative MR images with intraoperative ultrasound images acquired by MoTRUS. Results: After obtaining the required consents, remote TRUS probe navigation with MoTRUS was evaluated on a patient during prostatectomy. Setting up the system in the OR took 10 min. MoTRUS provided comparable imaging capability while adding remote navigation and stable probe holding. No complications were observed. Image fusion was evaluated on a commercial prostate phantom, using electromagnetic tracking for the fusion. Conclusions: Motorized navigation of the TRUS probe during prostatectomy is safe and feasible. Remote navigation gives the physician more precise and easier control of the ultrasound image while removing the burden of manual probe manipulation. Image fusion improved visualization of the prostate and its boundaries in a phantom study.
AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation
Yuan, Xin; Martínez-Ortega, José-Fernán; Fernández, José Antonio Sánchez; Eckert, Martina
2017-01-01
In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is to enhance the accuracy and robustness of SLAM-based navigation for underwater robotics at low computational cost. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Hereby, the prediction and update stages (which exist as well in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. In the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, which avoids the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method reliably achieves map revisiting and consistent map updating on loop closure. PMID:28531135
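The augmentation stage that distinguishes the AEKF from a conventional EKF can be sketched as follows: when a new landmark is first observed, the state vector and covariance are grown in place. The range/bearing measurement model and the noise values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def augment(state, cov, r, b, meas_noise=np.diag([0.1, 0.01])):
    """Append a newly observed landmark to the SLAM state and covariance."""
    x, y, theta = state[0], state[1], state[2]
    # Landmark position in the world frame from the range/bearing measurement.
    lx = x + r * np.cos(theta + b)
    ly = y + r * np.sin(theta + b)
    new_state = np.concatenate([state, [lx, ly]])

    # Jacobians of the landmark position w.r.t. robot pose and measurement.
    Gp = np.array([[1.0, 0.0, -r * np.sin(theta + b)],
                   [0.0, 1.0,  r * np.cos(theta + b)]])
    Gm = np.array([[np.cos(theta + b), -r * np.sin(theta + b)],
                   [np.sin(theta + b),  r * np.cos(theta + b)]])

    n = len(state)
    new_cov = np.zeros((n + 2, n + 2))
    new_cov[:n, :n] = cov                       # existing map untouched
    Ppose = cov[:3, :3]
    Pcross = cov[:3, :n]
    new_cov[n:, :n] = Gp @ Pcross               # new cross-correlations
    new_cov[:n, n:] = (Gp @ Pcross).T
    new_cov[n:, n:] = Gp @ Ppose @ Gp.T + Gm @ meas_noise @ Gm.T
    return new_state, new_cov

state = np.array([0.0, 0.0, 0.0])               # robot at origin, heading +x
cov = np.eye(3) * 0.01
state, cov = augment(state, cov, r=2.0, b=0.0)  # landmark straight ahead
```

Keeping pose and landmarks in one jointly correlated state vector is what lets later observations of old landmarks correct the whole map on loop closure.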
Sun, Zhen-Jun; Ye, Bo; Sun, Yi; Zhang, Hong-Hai; Liu, Sheng
2014-07-01
This article describes a novel magnetically maneuverable capsule endoscope system with direction reference for image navigation. This direction reference was employed by utilizing a specific magnet configuration between a pair of external permanent magnets and a magnetic shell coated on the external capsule endoscope surface. A pair of customized Cartesian robots, each with only 4 degrees of freedom, was built to hold the external permanent magnets as their end-effectors. These robots, together with their external permanent magnets, were placed on two opposite sides of a "patient bed." Because of the optimized configuration based on magnetic analysis between the external permanent magnets and the magnetic shell, a simplified control strategy was proposed, and only two parameters, yaw step angle and moving step, were necessary for the employed robotic system. Step-by-step experiments demonstrated that the proposed system is capable of magnetically maneuvering the capsule endoscope while providing direction reference for image navigation. © IMechE 2014.
Passive mapping and intermittent exploration for mobile robots
NASA Technical Reports Server (NTRS)
Engleson, Sean P.
1994-01-01
An adaptive state space architecture is combined with a diktiometric representation to provide the framework for designing a robot mapping system with flexible navigation planning tasks. This involves indexing waypoints described as expectations, geometric indexing, and perceptual indexing. Matching and updating the robot's projected position and sensory inputs against indexed waypoints involves matchers, dynamic priorities, transients, and waypoint restructuring. The robot's map learning can be organized around the principles of passive mapping.
RTML: remote telescope markup language and you
NASA Astrophysics Data System (ADS)
Hessman, F. V.
2001-12-01
In order to coordinate the use of robotic and remotely operated telescopes in networks -- like Göttingen's MOnitoring NEtwork of Telescopes (MONET) -- a standard format for the exchange of observing requests and reports is needed. I describe the benefits of Remote Telescope Markup Language (RTML), an XML-based protocol originally developed by the Hands-On Universe Project, which is being used and further developed by several robotic telescope projects and firms.
Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
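A minimal sketch of the particle-model navigation idea described above, with a simple inverse-square repulsion standing in for the paper's interaction dynamics; the gains, safety radius, and obstacle placement are assumptions for illustration.

```python
import math

def step(pos, goal, others, dt=0.1, k_home=1.0, k_rep=0.5, r_safe=1.0):
    """One homing step with pairwise collision avoidance (assumed gains)."""
    fx = k_home * (goal[0] - pos[0])          # attraction toward home
    fy = k_home * (goal[1] - pos[1])
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < r_safe:                  # repel only inside safety radius
            f = k_rep / d**2                  # inverse-square repulsion
            fx += f * dx / d
            fy += f * dy / d
    return (pos[0] + dt * fx, pos[1] + dt * fy)

# Home toward (5, 0) past another robot parked near the straight-line path.
p = (0.0, 0.0)
for _ in range(200):
    p = step(p, goal=(5.0, 0.0), others=[(2.5, 0.5)])
```

The robot deflects around the effective spatial domain of the other robot and still converges to its home position, mirroring the paper's homing-with-avoidance simulations.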
Quantifying Traversability of Terrain for a Mobile Robot
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Seraji, Homayoun; Werger, Barry
2005-01-01
A document presents an updated discussion of a method of autonomous navigation for a robotic vehicle traversing rough terrain. The method involves, among other things, a measure of traversability, denoted the fuzzy traversability index, which embodies information about the slope and roughness of terrain obtained from analysis of images acquired by cameras mounted on the robot. The improvements presented in the report focus on the use of the fuzzy traversability index to generate a traversability map and a grid map for planning the safest path for the robot. Once grid traversability values have been computed, they are used to reject unsafe path segments and to compute a traversal-cost function for ranking candidate paths, selected by a search algorithm, from a specified initial position to a specified final position. The output of the algorithm is a set of waypoints designating a path with minimal traversal cost.
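A rough sketch of how a traversability index could drive path ranking as described above; the membership functions, thresholds, and cost form below are assumptions, since the report does not reproduce them.

```python
# Illustrative sketch only: thresholds and membership functions are assumed.

def traversability(slope_deg, roughness, max_slope=30.0, max_rough=1.0):
    """Map terrain slope and roughness to a [0, 1] traversability index."""
    s = max(0.0, 1.0 - slope_deg / max_slope)   # 1 = flat, 0 = too steep
    r = max(0.0, 1.0 - roughness / max_rough)   # 1 = smooth, 0 = too rough
    return min(s, r)                            # conservative (fuzzy AND)

def path_cost(cells, unsafe=0.2):
    """Traversal cost over grid traversability values; None = rejected."""
    cost = 0.0
    for tau in cells:
        if tau < unsafe:              # reject paths through unsafe segments
            return None
        cost += 1.0 / tau             # low traversability -> high cost
    return cost

# Rank two candidate paths; the one crossing an unsafe cell is rejected.
paths = {"ridge": [0.9, 0.8, 0.1], "valley": [0.6, 0.7, 0.8]}
costs = {name: path_cost(tau) for name, tau in paths.items()}
best = min((n for n in costs if costs[n] is not None), key=lambda n: costs[n])
```

The two-step structure (hard rejection of unsafe segments, then cost ranking of the survivors) mirrors the pipeline the abstract describes.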
López, Elena; García, Sergio; Barea, Rafael; Bergasa, Luis M.; Molinos, Eduardo J.; Arroyo, Roberto; Romera, Eduardo; Pardo, Samuel
2017-01-01
One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. This improves the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by resolving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimate of the robot position that improves the 6D pose estimation through the EKF. We present experimental results with two different commercial platforms, and validate the system by applying it to their position control. PMID:28397758
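One way the altimeter can resolve the monocular scale ambiguity mentioned above is a least-squares fit of the metric altitude against the unscaled SLAM altitude; this is a sketch of the idea, not the paper's EKF formulation, and the sample values are made up.

```python
def fit_scale(visual_z, altimeter_z):
    """Least-squares scale s minimizing sum((s * visual_z - altimeter_z)^2).

    Closed form: s = sum(v*a) / sum(v*v).
    """
    num = sum(v * a for v, a in zip(visual_z, altimeter_z))
    den = sum(v * v for v in visual_z)
    return num / den

visual_z = [0.1, 0.2, 0.3, 0.4]    # unscaled vertical positions (SLAM units)
altim_z = [0.25, 0.5, 0.75, 1.0]   # metric altitude from the altimeter (m)
s = fit_scale(visual_z, altim_z)   # -> 2.5
```

Multiplying the whole monocular trajectory and map by `s` puts them in metric units, which is what allows the EKF to fuse them with the IMU and altimeter.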
The JPL Serpentine Robot: A 12 DOF System for Inspection
NASA Technical Reports Server (NTRS)
Paljug, E.; Ohm, T.; Hayati, S.
1995-01-01
The Serpentine Robot is a prototype hyper-redundant (snake-like) manipulator system developed at the Jet Propulsion Laboratory. It is designed to navigate and perform tasks in obstructed and constrained environments in which conventional 6-DOF manipulators cannot function. Described are the robot's mechanical design, the joint assembly, a low-level inverse kinematics algorithm, control development, and applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czejdo, Bogdan; Bhattacharya, Sambit; Ferragut, Erik M
2012-01-01
This paper describes the syntax and semantics of multi-level state diagrams to support probabilistic behavior of cooperating robots. Techniques are presented to analyze these diagrams by querying combined robot behaviors. It is shown how to use state abstraction and transition abstraction to create, verify and process large probabilistic state diagrams.
Robotic vehicle with multiple tracked mobility platforms
Salton, Jonathan R [Albuquerque, NM; Buttz, James H [Albuquerque, NM; Garretson, Justin [Albuquerque, NM; Hayward, David R [Wetmore, CO; Hobart, Clinton G [Albuquerque, NM; Deuel, Jr., Jamieson K.
2012-07-24
A robotic vehicle having two or more tracked mobility platforms that are mechanically linked together with a two-dimensional coupling, thereby forming a composite vehicle of increased mobility. The robotic vehicle is operative in hazardous environments and can be capable of semi-submersible operation. The robotic vehicle is capable of remote controlled operation via radio frequency and/or fiber optic communication link to a remote operator control unit. The tracks have a plurality of track-edge scallop cut-outs that allow the tracks to easily grab onto and roll across railroad tracks, especially when crossing the railroad tracks at an oblique angle.
Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph
2018-06-01
Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point-reference tool because the projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive robotic BrightMatter Drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary but powerful capability of this system is the projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of the Medtronic and Synaptive platforms could permit visualization of reconstructed navigation and surgical-field images simultaneously. By utilizing navigated instruments, this configuration can support live image-guided surgery, or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation, MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive display. Surgical visualization was provided using a Storz Image S1 platform and camera mounted to the Synaptive robotic BrightMatter Drive. We successfully integrated the two platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN. A disc shaver and trial implants were used under RTN during the MIS TLIF. The synergy of the Synaptive HD videoexoscope robotic drive and Medtronic Stealth platforms allows for live image-guided surgery, or real-time navigation (RTN).
Off-axis projection also allows upright neutral cervical spine operative ergonomics for the surgeons and improved surgical team visualization and education compared to traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but will require long-term outcome measurements for efficacy.
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws. It is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed in Visual C++ was developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful considerations on mobility was designed to inspect 500 kV power transmission lines. Experimental results demonstrate that the robot can execute the navigation and inspection tasks.
Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Zornetzer, Steve; Gage, Douglas
2005-01-01
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
Surgical Robotics Research in Cardiovascular Disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pohost, Gerald M; Guthrie, Barton L; Steiner, Charles
This grant supports research in robotics at three major medical centers: the University of Southern California-USC-(Project 1); the University of Alabama at Birmingham-UAB-(Project 2); and the Cleveland Clinic Foundation-CCF-(Project 3). Project 1 is oriented toward cardiovascular applications, while Projects 2 and 3 are oriented toward neurosurgical applications. The main objective of Project 1 is to develop an approach to help patients maintain a constant level of stress while undergoing magnetic resonance imaging or spectroscopy. The specific project is to use handgrip to detect the changes in high-energy phosphate metabolism between rest and stress. The high-energy phosphates ATP and phosphocreatine (PCr) supply the energy that the heart muscle (myocardium) needs for its contractile function. If the blood supply to the myocardium is insufficient to support metabolism and contractility during stress, the high-energy phosphates, particularly PCr, will decrease in concentration. The high-energy phosphates can be tracked using phosphorus-31 magnetic resonance spectroscopy (31P MRS). In Project 2, the UAB Surgical Robotics project focuses on the use of virtual presence to assist with remote surgery and surgical training. The goal of this proposal was to assemble a pilot system for proof of concept. The pilot project was completed successfully and was judged to demonstrate that the concept of remote surgical assistance as applied to surgery and surgical training was feasible and warranted further development. The main objective of Project 3 is to develop a system to allow for the tele-robotic delivery of instrumentation during a functional neurosurgical procedure (Figure 3), instrumentation such as micro-electrical recording probes or deep brain stimulation leads. Current methods for the delivery of these instruments involve the integration of linear actuators with stereotactic navigation systems.
The control of these delivery devices uses an open-loop configuration involving a team consisting of a neurosurgeon, a neurologist and a neurophysiologist, all present and participating in the decision process of delivery. We propose the development of an integrated system which provides for distributed decision making and tele-manipulation of the instrument delivery system.
Ultra Wide-Band Localization and SLAM: A Comparative Study for Mobile Robot Navigation
Segura, Marcelo J.; Auat Cheein, Fernando A.; Toibero, Juan M.; Mut, Vicente; Carelli, Ricardo
2011-01-01
In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of running a UWB localization system and a SLAM algorithm in parallel on a mobile robot navigating within an environment. Real-time results as well as error analysis are also shown in this work. PMID:22319397
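A sketch of how TOA-based UWB ranges map to a position estimate: each time of arrival times the speed of light gives a range to a fixed anchor, and subtracting one range equation from the others linearizes the problem into a least-squares solve. The anchor layout and noise-free ranges are assumptions for illustration.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def trilaterate(anchors, ranges):
    """Least-squares 2D position from ranges to >= 3 known anchors."""
    (x0, y0), r0 = anchors[0], ranges[0]
    A, b = [], []
    # Subtracting the first range equation from the others cancels the
    # quadratic terms, leaving linear equations in (x, y).
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
toa = [np.hypot(*(true_pos - a)) / C for a in anchors]  # times of arrival
ranges = [t * C for t in toa]                           # TOA -> range
est = trilaterate(anchors, ranges)
```

With noisy TOA measurements the same least-squares solve simply returns the best-fit position, which is what makes UWB a useful ground truth to compare against SLAM.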
Robotically assisted velocity-sensitive triggered focused ultrasound surgery
NASA Astrophysics Data System (ADS)
Maier, Florian; Brunner, Alexander; Jenne, Jürgen W.; Krafft, Axel J.; Semmler, Wolfhard; Bock, Michael
2012-11-01
Magnetic Resonance (MR) guided Focused Ultrasound Surgery (FUS) of abdominal organs is challenging due to breathing motion and limited patient access in the MR environment. In this work, an experimental robotically assisted FUS setup was combined with an MR-based navigator technique to realize motion-compensated sonications and online temperature imaging. Experiments were carried out in a static phantom, during periodic manual motion of the phantom without triggering, and with triggering, in order to evaluate the triggering method. In contrast to the non-triggered sonication, the results of the triggered sonication show a confined, symmetric temperature distribution. In conclusion, the velocity-sensitive navigator can be employed for triggered FUS to compensate for periodic motion. Combined with the robotic FUS setup, flexible treatment of abdominal targets might be realized.
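The velocity-sensitive triggering idea can be sketched as simple threshold gating on the navigator's velocity estimate: sonications fire only in the quiescent phases of the periodic motion. The waveform, units, and threshold below are assumptions, not the paper's parameters.

```python
import math

def gate(velocities, v_max=2.0):
    """Indices at which motion is slow enough to fire a sonication."""
    return [i for i, v in enumerate(velocities) if abs(v) < v_max]

# Simulated breathing cycle (40 samples): velocity ~ cosine, peak 10 mm/s.
vel = [10.0 * math.cos(2.0 * math.pi * t / 40.0) for t in range(40)]
fire = gate(vel)   # quiescent phases near the turning points of the cycle
```

Gating to the low-velocity phases is what confines the heated region, matching the symmetric temperature distribution reported for the triggered sonication.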
NASA Astrophysics Data System (ADS)
Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar
2017-02-01
In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.
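A sketch of the landmark-similarity localization step, with plain cosine similarity standing in for the ART-2 network (an assumed simplification; ART-2 additionally learns and adapts its categories). The feature vectors and vigilance value are toy assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def localize(view, references, vigilance=0.9):
    """Best-matching reference landmark, or None below the vigilance threshold."""
    name, score = max(((n, cosine(view, r)) for n, r in references.items()),
                      key=lambda t: t[1])
    return name if score >= vigilance else None

# Reference landmark views (assumed toy feature vectors) and a current view.
refs = {"door": [1.0, 0.0, 1.0], "corner": [0.0, 1.0, 0.0]}
match = localize([0.9, 0.1, 1.1], refs)
```

The vigilance threshold plays the same role as in ART-2: a view that resembles no stored landmark is treated as unrecognized rather than force-matched.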
The Evolution of Computer-Assisted Total Hip Arthroplasty and Relevant Applications
Kim, In-Sung; Bhardwaj, Atul M.; Badami, Ramachandra N.
2017-01-01
In total hip arthroplasty (THA), accurate positioning of implants is the key to achieving a good clinical outcome. Computer-assisted orthopaedic surgery (CAOS) has been developed for more accurate positioning of implants during THA. CAOS systems for THA may be passive, semi-active, or active. Navigation is a passive system that only provides information and guidance to the surgeon. There are 3 types of navigation: imageless navigation, computed tomography (CT)-based navigation, and fluoroscopy-based navigation. In imageless navigation, a new method of registration that does not require registering the anterior pelvic plane was introduced. CT-based navigation can efficiently use the functional pelvic plane in the supine position as a reference, adjusting the sagittal tilt of the anterior pelvic plane when targeting cup orientation. Robot-assisted systems can be either active or semi-active. The active robotic system performs the preparation for implant positioning as programmed preoperatively. It had been used only for femoral implant cavity preparation; recently, a program for cup positioning was additionally developed. Alternatively, for ease of surgeon acceptance, semi-active robot systems were developed. These were initially applied only for cup positioning, but with the development of enhanced femoral workflows, they can now be used to position both cup and stem. Though there have been substantial advancements in computer-assisted THA, its use remains controversial at present due to the steep learning curve, intraoperative technical issues, high cost, and other factors. However, in the future, CAOS will certainly enable the surgeon to operate more accurately and lead to improved outcomes in THA as the technology continues to evolve rapidly. PMID:28316957
Teleoperator/robot technology can help solve biomedical problems
NASA Technical Reports Server (NTRS)
Heer, E.; Bejczy, A. K.
1975-01-01
Teleoperator and robot technology appears to offer the possibility of applying these techniques for the benefit of the severely handicapped, giving them greater self-reliance and independence. Major problem areas in the development of prostheses and remotely controlled devices for the handicapped are briefly discussed, and the parallels with problems in teleoperator/robot development are identified. A brief description of specific ongoing and projected developments in the area of remotely controlled devices (wheelchairs and manipulators) is provided.
Bildbasierte Navigation eines mobilen Roboters mittels omnidirektionaler und schwenkbarer Kamera [Image-Based Navigation of a Mobile Robot Using an Omnidirectional and Pannable Camera]
NASA Astrophysics Data System (ADS)
Nierobisch, Thomas; Hoffmann, Frank; Krettek, Johannes; Bertram, Torsten
This contribution presents a novel approach to decoupled control of camera gaze direction and the motion of a mobile robot in the context of image-based navigation. A pannable monocular camera keeps the features relevant for navigation in the field of view, independently of the robot's motion. The decoupling of the camera gaze direction from the actual robot motion is realized by projecting the features onto a virtual image plane. In the virtual image plane, the appearance of the visual features used for image-based control depends only on the robot's position and is invariant to the camera's actual gaze direction. Because the monocular camera can pan, the workspace over which a single reference image is suitable for image-based control is significantly enlarged compared with a static camera. This also enables navigation in low-texture environments that offer few usable texture and structure features.
Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie
2003-01-01
A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.
Mobile Robot Navigation and Obstacle Avoidance in Unstructured Outdoor Environments
2017-12-01
[Only fragments of this record survive.] The excerpts describe a ROS-style publish/subscribe mechanism ("to pull information from the network, [a node] subscribes to a specific topic and is able to receive the messages that are published to that topic") and an artificial potential field characterized "as the sum of an attractive potential pulling the robot toward the goal ... and a repulsive potential", along with MATLAB parameters such as laser_max = 20 (robot laser view horizon) and goaldist = 0.5 (distance metric for reaching the goal).
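The total artificial potential field quoted in this record's fragments, an attractive potential pulling the robot toward the goal plus a repulsive potential pushing it away from obstacles, can be sketched as follows. The goaldist threshold of 0.5 echoes the surviving MATLAB fragment; the function names and gains (k_att, k_rep, d0) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def potential_gradient(pos, goal, obstacles,
                       k_att=1.0, k_rep=10.0, d0=2.0):
    """Force on the robot: attractive term toward the goal plus
    repulsive terms from obstacles within an influence radius d0."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)          # attraction grows with distance
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0 < d < d0:                    # repulsion only acts when close
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return force

def navigate(start, goal, obstacles, step=0.05, goaldist=0.5, max_iter=5000):
    """Follow the field gradient until within goaldist of the goal."""
    pos = np.asarray(start, float)
    path = [pos.copy()]
    for _ in range(max_iter):
        if np.linalg.norm(np.asarray(goal, float) - pos) < goaldist:
            break                         # goal reached
        f = potential_gradient(pos, goal, obstacles)
        pos = pos + step * f / (np.linalg.norm(f) + 1e-9)
        path.append(pos.copy())
    return np.array(path)
```

Gradient descent on such a field is simple but can trap the robot in a local minimum when an obstacle lies exactly between it and the goal, which is why potential-field planners are usually combined with higher-level planning as the fragments suggest.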
A Concept of the Differentially Driven Three Wheeled Robot
NASA Astrophysics Data System (ADS)
Kelemen, M.; Colville, D. J.; Kelemenová, T.; Virgala, I.; Miková, L.
2013-08-01
The paper deals with the concept of a differentially driven three-wheeled robot. The robot's main task is to follow a black navigation line on white ground; it also carries anti-collision sensors for avoiding obstacles on the track. Students learn how to deal with signals from the sensors and how to control DC motors; they work with the controller, develop the locomotion algorithm, and can enter a competition.
Dual benefit robotics programs at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A.T.
Sandia National Laboratories has one of the largest integrated robotics laboratories in the United States. Projects include research, development, and application of one-of-a-kind systems, primarily for the Department of Energy (DOE) complex. This work has been underway for more than 10 years. It began with on-site activities that required remote operation, such as reactor and nuclear waste handling. Special purpose robot systems were developed using existing commercial manipulators and fixtures and programs designed in-house. These systems were used in applications such as servicing the Sandia pulsed reactor and inspecting remote roof bolts in an underground radioactive waste disposal facility. In the beginning, robotics was a small effort, but with increasing attention to the use of robots for hazardous operations, efforts now involve a staff of more than 100 people working in a broad robotics research, development, and applications program that has access to more than 30 robotics systems.
Terrain classification in navigation of an autonomous mobile robot
NASA Astrophysics Data System (ADS)
Dodds, David R.
1991-03-01
In this paper we describe a method of path planning that integrates terrain classification (by means of fractals), the certainty-grid method of spatial representation, Kehtarnavaz-Griswold collision zones, Dubois-Prade fuzzy temporal and spatial knowledge, and non-point-sized qualitative navigational planning. An initially planned ("end-to-end") path is piece-wise modified to accommodate known and inferred moving obstacles, and includes attention to time-varying multiple subgoals which may influence a section of path at a time after the robot has begun traversing that planned path.
NASA Astrophysics Data System (ADS)
Iwatsuki, Masami; Kato, Yoriyuki; Yonekawa, Akira
State-of-the-art Internet technologies allow us to provide advanced and interactive distance-education services. In engineering education, however, students have had to be gathered in one place for experiments and exercises because large-scale equipment and expensive software are required. On the other hand, teleoperation systems that control a robot manipulator or vehicle via the Internet have been developed in the field of robotics. By fusing these two techniques, we can realize remote experiment and exercise systems for engineering education based on the World Wide Web. This paper presents how to construct a remote environment that allows students to take experiment and exercise courses independently of their location. Using the proposed system, users can remotely practice control of a manipulator and a robot vehicle as well as image-processing programming.
The ACE multi-user web-based Robotic Observatory Control System
NASA Astrophysics Data System (ADS)
Mack, P.
2003-05-01
We have developed an observatory control system that can be operated in interactive, remote or robotic modes. In interactive and remote mode the observer typically acquires the first object then creates a script through a window interface to complete observations for the rest of the night. The system closes early in the event of bad weather. In robotic mode observations are submitted ahead of time through a web-based interface. We present observations made with a 1.0-m telescope using these methods.
A New Simulation Framework for Autonomy in Robotic Missions
NASA Technical Reports Server (NTRS)
Flueckiger, Lorenzo; Neukom, Christian
2003-01-01
Autonomy is a key factor in remote robotic exploration and there is significant activity addressing the application of autonomy to remote robots. It has become increasingly important to have simulation tools available to test autonomy algorithms. While industrial robotics benefits from a variety of high-quality simulation tools, researchers developing autonomous software are still dependent primarily on block-world simulations. The Mission Simulation Facility (MSF) project addresses this shortcoming with a simulation toolkit that will enable developers of autonomous control systems to test their system's performance against a set of integrated, standardized simulations of NASA mission scenarios. MSF provides a distributed architecture that connects the autonomous system to a set of simulated components replacing the robot hardware and its environment.
Robotic digital subtraction angiography systems within the hybrid operating room.
Murayama, Yuichi; Irie, Koreaki; Saguchi, Takayuki; Ishibashi, Toshihiro; Ebara, Masaki; Nagashima, Hiroyasu; Isoshima, Akira; Arakawa, Hideki; Takao, Hiroyuki; Ohashi, Hiroki; Joki, Tatsuhiro; Kato, Masataka; Tani, Satoshi; Ikeuchi, Satoshi; Abe, Toshiaki
2011-05-01
Fully equipped high-end digital subtraction angiography (DSA) within the operating room (OR) environment has emerged as a new trend in the fields of neurosurgery and vascular surgery. To describe initial clinical experience with a robotic DSA system in the hybrid OR. A newly designed robotic DSA system (Artis zeego; Siemens AG, Forchheim, Germany) was installed in the hybrid OR. The system consists of a multiaxis robotic C arm and surgical OR table. In addition to conventional neuroendovascular procedures, the system was used as an intraoperative imaging tool for various neurosurgical procedures such as aneurysm clipping and spine instrumentation. Five hundred one neurosurgical procedures were successfully conducted in the hybrid OR with the robotic DSA. During surgical procedures such as aneurysm clipping and arteriovenous fistula treatment, intraoperative 2-/3-dimensional angiography and C-arm-based computed tomographic images (DynaCT) were easily performed without moving the OR table. Newly developed virtual navigation software (syngo iGuide; Siemens AG) can be used in frameless navigation and in access to deep-seated intracranial lesions or needle placement. This newly developed robotic DSA system provides safe and precise treatment in the fields of endovascular treatment and neurosurgery.
Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots
Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro
2013-01-01
This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with performance similar to traditional methods. The event is defined to reflect the necessity of the global information: it fires when the estimation error covariance exceeds a predefined limit. The experimental platforms are based on the LEGO Mindstorms NXT and consist of a differential-wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann-steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built from the NXT motor encoders along with one gyroscope, one compass and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed confirm the correct performance and low execution time of the proposed framework, and its robustness and stability are observed during a long walking test in both indoor and outdoor environments. PMID:24152933
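The event-based update rule this abstract describes, correcting with the global sensor only when the estimation error covariance exceeds a predefined limit, can be sketched in one dimension as follows. The noise values, the threshold, and the function name are illustrative assumptions, not parameters from the paper.

```python
def event_based_kf(odometry, global_meas, cov_limit=0.5,
                   q_odo=0.05, r_glob=0.01):
    """1-D event-based Kalman filter: predict with odometry/IMU each
    step, and correct with the (costly) global sensor only when the
    error covariance P exceeds cov_limit."""
    x, P = 0.0, 1.0
    corrections = 0
    for u, z in zip(odometry, global_meas):
        # Predict: integrate odometry; covariance grows by process noise.
        x += u
        P += q_odo
        # Event: request the global measurement only when uncertainty is high.
        if P > cov_limit:
            K = P / (P + r_glob)      # Kalman gain
            x += K * (z - x)          # correct with the global sensor
            P *= (1.0 - K)
            corrections += 1
    return x, P, corrections
```

Run on a trajectory of 100 odometry steps, the filter requests only a handful of global corrections, which illustrates the bandwidth saving the abstract claims over correcting at every step.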
The Leipzig experience with robotic valve surgery.
Autschbach, R; Onnasch, J F; Falk, V; Walther, T; Krüger, M; Schilling, L O; Mohr, F W
2000-01-01
The study describes the single-center experience using robot-assisted videoscopic mitral valve surgery and the early results with a remote telemanipulator-assisted approach for mitral valve repair. Out of a series of 230 patients who underwent minimally invasive mitral valve surgery, in 167 patients surgery was performed with the use of robotic assistance. A voice-controlled robotic arm was used for videoscopic guidance in 152 cases. Most recently, a computer-enhanced telemanipulator was used in 15 patients to perform the operation remotely. The mitral valve was repaired in 117 and replaced in all other patients. The voice-controlled robotic arm (AESOP 3000) facilitated videoscopic-assisted mitral valve surgery. The procedure was completed without the need for an additional assistant as "solo surgery." Additional procedures like radiofrequency ablation and tricuspid valve repair were performed in 21 and 4 patients, respectively. Duration of bypass and clamp time was comparable to conventional procedures (107 ± 34 and 50 ± 16 min, respectively). Hospital mortality was 1.2%. Using the da Vinci telemanipulation system, remote mitral valve repair was successfully performed in 13 of 15 patients. Robotic-assisted less invasive mitral valve surgery has evolved to a reliable technique with reproducible results for primary operations and for reoperations. Robotic assistance has enabled a solo surgery approach. The combination with radiofrequency ablation (Mini Maze) in patients with chronic atrial fibrillation has proven to be beneficial. The use of telemanipulation systems for remote mitral valve surgery is promising, but a number of problems have to be solved before the introduction of a closed chest mitral valve procedure.
Crawling Robots on Large Web in Rocket Experiment on Furoshiki Deployment
NASA Astrophysics Data System (ADS)
Kaya, N.; Iwashita, M.; Nakasuka, S.; Summerer, L.; Mankins, J.
Developing the technology to construct huge space transmitting antennas, such as for the Solar Power Satellite, is one of the most important and critical issues. Such huge antennas have many useful applications in space, for example telecommunication antennas for cellular phones, radars for remote sensing, navigation and observation, and so on. We propose applying the Furoshiki satellite with robots to construct these huge structures. After a large web the size of the antenna is deployed by the Furoshiki satellite, the antenna elements crawl along the web on their own legs toward their allocated locations in order to realize a huge antenna. A micro-gravity experiment is planned using an ISAS sounding rocket to demonstrate the feasibility of deploying the large web and the phased-array performance. Three daughter satellites are separated from the mother satellite with weak springs and deploy the Furoshiki web into a triangular shape about 20-40 m across. The dynamics of the daughter satellites and the web are observed by several cameras installed on the mother and daughter satellites during the deployment, while the performance of the phased-array antenna using the retrodirective method is simultaneously measured at the ground station. Finally, two micro-robots crawl from the mother satellite to certain points on the web to demonstrate one promising way to construct RF transmitter panels. The robots are being developed internationally by NASA, ESTEC and Kobe University; there are various ideas for crawling on the web in micro-gravity, and each organization is independently developing a different type of robot. Kobe University is developing wheels that run on the web by pinching its strings; the wheels run on the web successfully, though they were found to tangle the strings.
Zygomalas, Apollon; Giokas, Konstantinos; Koutsouris, Dimitrios
2014-01-01
Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a high scale representation of the respective modular mini-robot, therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures. PMID:25295187
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-01-01
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, cameras, and infrared and ultrasonic sensors, used to observe the surrounding environment. However, these sensors sometimes fail or return inaccurate readings, and integrating sensor fusion helps solve this problem and enhances overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy-logic fusion model. Eight distance sensors and a range-finder camera are used for the collision-avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy-logic fusion model and a line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766
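A minimal sketch of the kind of fuzzy-logic steering the abstract describes, reduced to two distance inputs and two wheel-velocity outputs; the membership ranges and two-rule base here are toy assumptions, far simpler than the paper's nine-input, 24-rule system.

```python
def trimf(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_avoid(d_left, d_right, v_max=1.0):
    """Toy fuzzy controller for a differential-drive robot: an
    obstacle sensed near one side slows the opposite wheel so the
    robot turns away. Distances in metres; 'near' means under 1 m."""
    near_l = trimf(d_left, -0.01, 0.0, 1.0)   # degree 'obstacle near left'
    near_r = trimf(d_right, -0.01, 0.0, 1.0)  # degree 'obstacle near right'
    # Rule 1: near-left -> slow the right wheel (turn right, away).
    # Rule 2: near-right -> slow the left wheel (turn left, away).
    v_left = v_max * (1.0 - near_r)
    v_right = v_max * (1.0 - near_l)
    return v_left, v_right
```

With an obstacle 0.2 m away on the left and a clear right side, the left wheel stays near full speed while the right wheel slows, steering the robot away from the obstacle.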
Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning
NASA Astrophysics Data System (ADS)
Kawewong, Aram; Honda, Yutaro; Tsuboyama, Manabu; Hasegawa, Osamu
Robot path planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided into a set of path-fragments by junctions, and each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as associated path-fragments connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up exploration. Distinct from other methods, ours does not ignore the important information in the regions between junctions (the path-fragments), and the resulting number of path-fragments is smaller than in other methods. Evaluation is done via Webots physical 3D-simulated and real-robot experiments in which only distance sensors are available. Results show that the method represents the environment effectively; it enables the robot to solve the goal-oriented navigation problem in only one episode, which is less than most Reinforcement Learning (RL) based methods require. The running time is proved finite and scales well with the environment, and the resulting number of path-fragments matches the environment well.
UROLOGIC ROBOTS AND FUTURE DIRECTIONS
Mozer, Pierre; Troccaz, Jocelyne; Stoianovici, Dan
2009-01-01
Purpose of review Robot-assisted laparoscopic surgery in urology has gained immense popularity with the da Vinci system, but many research teams are working on new robots. The purpose of this paper is to review current urologic robots and present future development directions. Recent findings Future systems are expected to advance in two directions: improvements of remote manipulation robots and developments of image-guided robots. Summary The final goal of robots is to allow safer and more homogeneous outcomes with less variability of surgeon performance, as well as new tools to perform tasks based on medical transcutaneous imaging, in a less invasive way, at lower cost. Expected improvements for remote systems include augmented reality, haptic feedback, size reduction and the development of new tools for NOTES surgery. The paradigm of image-guided robots is close to clinical availability, and the most advanced robots are presented with end-user technical assessments. It is also notable that the potential of robots lies much further ahead than the accomplishments of the da Vinci system. The integration of imaging with robotics holds substantial promise, because it can accomplish tasks otherwise impossible. Image-guided robots have the potential to offer a paradigm shift. PMID:19057227
Cavallo, F; Aquilano, M; Bonaccorsi, M; Mannari, I; Carrozza, M C; Dario, P
2011-01-01
This paper aims to show the effectiveness of an inter/multidisciplinary team of technology developers, elderly-care organizations, and designers in developing the ASTRO robotic system for domiciliary assistance to elderly people. The main issues presented in this work concern the improvement of the robot's behavior by means of a smart sensor network able to share information with the robot for localization and navigation, and the design of the robot's appearance and functionalities through a substantial analysis of users' requirements and attitudes toward robotic technology, in order to improve acceptability and usability.
NASA Astrophysics Data System (ADS)
Murata, Naoya; Katsura, Seiichiro
Acquiring information about the environment around a mobile robot is important for purposes such as controlling the robot from a remote location and in situations such as when the robot is running autonomously. Many studies use audiovisual information, but the acquisition of force-sensation information, which is also part of the environmental information, has not been well researched. The mobile-hapto, a remote control system with force information, has been proposed, but the robot used in that system can acquire only the horizontal component of forces. For this reason, in this research a three-wheeled mobile robot consisting of seven actuators was developed and its control system constructed. It can obtain information on horizontal and vertical forces without using force sensors. With this robot, detailed information on environmental forces can be acquired, and the operability of the robot and its ability to adapt to the environment are expected to improve.
Robotics technology developments in the United States space telerobotics program
NASA Technical Reports Server (NTRS)
Lavery, David
1994-01-01
In the same way that the launch of Yuri Gagarin in April 1961 announced the beginning of human space flight, last year's flight of the German ROTEX robot flight experiment is heralding the start of a new era of space robotics. After a gap of twelve years since the introduction of a new capability in space remote manipulation, ROTEX is the first of at least ten new robotic systems and experiments which will fly before the year 2000. As a result of redefining the development approach for space robotic systems, and capitalizing on opportunities associated with the assembly and maintenance of the space station, the space robotics community is preparing a whole new generation of operational robotic capabilities. Expanding on the capabilities of earlier manipulation systems such as the Viking and Surveyor soil scoops, the Russian Lunakhods, and the Shuttle Remote Manipulator System (RMS), these new space robots will augment astronaut on-orbit capabilities and extend virtual human presence to lunar and planetary surfaces.
NASA Technical Reports Server (NTRS)
Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel
2015-01-01
The use of the cognitive capabilities of humans to help guide the autonomy of robotic platforms, in what is typically called "supervised autonomy," is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base; the mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses; the vision data are processed by a high-speed tracking device that communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems in the lab and on an outside test track has shown, with positive results, that at five mph the vehicle can follow a line while avoiding obstacles.
Intelligent Autonomy for Unmanned Surface and Underwater Vehicles
NASA Technical Reports Server (NTRS)
Huntsberger, Terry; Woodward, Gail
2011-01-01
As the Autonomous Underwater Vehicle (AUV) and Autonomous Surface Vehicle (ASV) platforms mature in endurance and reliability, a natural evolution will occur towards longer, more remote autonomous missions. This evolution will require the development of key capabilities that allow these robotic systems to perform a high level of on-board decision making, which would otherwise be performed by human operators. With more decision-making capability, less a priori knowledge of the area of operations would be required, as these systems would be able to sense and adapt to changing environmental conditions such as unknown topography, currents, obstructions, bays, harbors, islands, and river channels. Existing vehicle sensors would be dual-use; that is, they would be utilized for the primary mission, which may be mapping or hydrographic reconnaissance, as well as for autonomous hazard avoidance, route planning, and bathymetric-based navigation. This paper describes a tightly integrated instantiation of an autonomous agent called CARACaS (Control Architecture for Robotic Agent Command and Sensing), developed at JPL (Jet Propulsion Laboratory), that was designed to address many of the issues of survivable ASV/AUV control and to provide adaptive mission capabilities. The results of some on-water tests with US Navy technology test platforms are also presented.
Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments
2016-09-01
yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications.1,5–7 Annotation of images is...Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G
Perception-action map learning in controlled multiscroll systems applied to robot navigation.
Arena, Paolo; De Fiore, Sebastiano; Fortuna, Luigi; Patané, Luca
2008-12-01
In this paper a new technique for action-oriented perception in robots is presented. The paper starts from exploiting the successful implementation of the basic idea that perceptual states can be embedded into chaotic attractors whose dynamical evolution can be associated with sensorial stimuli. In this way, environment-dependent patterns can be encoded into the chaotic dynamics. These have to be suitably linked to an action, executed by the robot, to fulfill an assigned mission. This task is addressed here: the action-oriented perception loop is closed by introducing a simple unsupervised learning stage, implemented via a bio-inspired structure based on the motor map paradigm. In this way, perceptual meanings, useful for solving a given task, can be autonomously learned, based on the environment-dependent patterns embedded into the controlled chaotic dynamics. The presented framework has been tested on a simulated robot and its performance has been successfully compared with other traditional navigation control paradigms. Moreover, an implementation of the proposed architecture on a Field Programmable Gate Array is briefly outlined and preliminary experimental results on a roving robot are also reported.
UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.
Chen, Jessie Y C
2010-08-01
A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robots to the locations of the targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with the other UAV conditions (baseline with no UAV, micro air vehicle and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Those individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.
Open core control software for surgical robots.
Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo
2010-05-01
Today, patients and doctors in the operating room are surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure in neurosurgery while checking the tumor location). Several surgical robots have been commercialized and are becoming common, but these robots remain closed to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would become possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. At the same time, most academic control software for surgical robots is "home-made" within individual research institutions and not open to the public. Open source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from its actuators, sensors and various internal devices, and therefore cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation in the presence of asynchronous data transactions over the network.
In the Open Core Control software, several techniques were introduced for this purpose. The virtual fixture is a well-known technique that acts as a "force guide", supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not itself part of the Open Core Control system; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and this area information can be transferred to the robot. The surgical console then generates a reflection force when the operator tries to leave the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. The system also showed stable performance in a duration test with network disturbance. In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture are described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaborative tasks. The Open Core Control software is being developed to become a widely used platform for surgical robots. Safety is essential for the control software of such complex medical devices.
It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" and IEC 62304. To comply with these regulations, a self-test environment is important; a test environment is therefore under development to test various kinds of interference in the operating room, such as noise from an electric knife, while taking into account safety and test-environment standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to contribute to the field.
The navigation system of the JPL robot
NASA Technical Reports Server (NTRS)
Thompson, A. M.
1977-01-01
The control structure of the JPL research robot and the operations of the navigation subsystem are discussed. The robot functions as a network of interacting concurrent processes distributed among several computers and coordinated by a central executive. The results of scene analysis are used to create a segmented terrain model in which surface regions are classified by traversability. The model is used by a path planning algorithm, PATH, which uses tree search methods to find the optimal path to a goal. In PATH, the search space is defined dynamically as a consequence of node testing. Maze solving and the use of an associative database for context-dependent node generation are also discussed. Execution of a planned path is accomplished by a feedback guidance process with automatic error recovery.
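The kind of tree search PATH performs, expanding successor nodes dynamically as each node is tested against a traversability-costed terrain model, can be sketched as an A*-style search. This is an illustrative reconstruction, not the JPL code; the grid representation and the name `plan_path` are assumptions:

```python
import heapq

def plan_path(cost, start, goal):
    """A*-style tree search over a traversability grid.
    cost[r][c] is the traversal cost of a cell; None marks untraversable."""
    rows, cols = len(cost), len(cost[0])
    heuristic = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        r, c = node
        # Successor nodes are generated dynamically as each node is tested,
        # rather than being read from a precomputed graph.
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                ng = g + cost[nr][nc]
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + heuristic((nr, nc)), ng, (nr, nc), path + [(nr, nc)]),
                    )
    return None  # no traversable route exists
```

With unit costs and a Manhattan heuristic this returns a shortest traversable route around untraversable regions.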
Wang, Chao; Savkin, Andrey V; Clout, Ray; Nguyen, Hung T
2015-09-01
We present a novel design of an intelligent robotic hospital bed, named Flexbed, with autonomous navigation ability. The robotic bed is developed for fast and safe transportation of critical neurosurgery patients without changing beds. Flexbed is more efficient and safe during the transportation process compared to conventional hospital beds. Flexbed is able to avoid en-route obstacles with an efficient, easy-to-implement collision avoidance strategy when an obstacle is nearby, and to move towards its destination at maximum speed when there is no threat of collision. We present extensive simulation results of navigation of Flexbed in crowded hospital corridor environments with moving obstacles. Moreover, results of experiments with Flexbed in real-world scenarios are also presented and discussed.
NASA Astrophysics Data System (ADS)
Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan
2016-05-01
With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies in facilitating Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team KuuKulgur waits to begin the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
NASA Astrophysics Data System (ADS)
Rhodes, Andrew P.; Christian, John A.; Evans, Thomas
2017-12-01
With the availability and popularity of 3D sensors, it is advantageous to re-examine the use of point cloud descriptors for the purpose of pose estimation and spacecraft relative navigation. One popular descriptor is the oriented unique repeatable clustered viewpoint feature histogram (OUR-CVFH).
Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm.
Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae; Kim, Tae Il; Yi, Byung Ju
2017-01-01
Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We propose a master-slave robotic colonoscopy system that is controllable from a remote site using a conventional colonoscope. The master and slave robots were developed to use a conventional flexible colonoscope. The robotic colonoscopic procedure was performed on a colonoscope training model by one expert endoscopist and two inexperienced engineers. To provide haptic sensation, the insertion force and rotating torque were measured and sent to the master robot. The slave robot was developed to hold the colonoscope and its knob, and to perform the insertion, rotation, and two tilting motions of the colonoscope. The master robot was designed to teach motions to the slave robot. The measured force and torque were scaled down by one tenth to provide the operator with a reflection force and torque at the haptic device. The haptic sensation and feedback system was successful and helped the operator feel the constraining force and torque in the colon. The insertion time using the robotic system decreased with repeated procedures. This work proposes a robotic approach to colonoscopy using a haptic feedback algorithm; this robotic device could effectively perform colonoscopy with reduced burden and comparable safety for patients at a remote site.
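The one-tenth force/torque scaling described in the abstract can be sketched as follows. This is illustrative only: the saturation limits `f_max` and `t_max` (added here as a safety measure) and the function name are assumptions, not from the paper:

```python
def haptic_reflection(force_n, torque_nm, scale=0.1, f_max=5.0, t_max=0.5):
    """Scale slave-side force (N) and torque (Nm) measurements down by
    `scale` (one tenth, per the paper) and saturate them to assumed
    device limits before rendering on the master haptic device."""
    clip = lambda v, lim: max(-lim, min(lim, v))
    return clip(scale * force_n, f_max), clip(scale * torque_nm, t_max)
```

At each control cycle the slave's force/torque sensor readings would pass through such a mapping before being commanded to the haptic device.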
Estimating time available for sensor fusion exception handling
NASA Astrophysics Data System (ADS)
Murphy, Robin R.; Rogers, Erika
1995-09-01
In previous work, we have developed a generate, test, and debug methodology for detecting, classifying, and responding to sensing failures in autonomous and semi-autonomous mobile robots. An important issue has arisen from these efforts: how much time is available to classify the cause of the failure and determine an alternative sensing strategy before the robot mission must be terminated? In this paper, we consider the impact of time for teleoperation applications where a remote robot attempts to autonomously maintain sensing in the presence of failures yet has the option of contacting the local site for further assistance. Time limits are determined by using evidential reasoning with a novel generalization of Dempster-Shafer theory. Generalized Dempster-Shafer theory is used to estimate the time remaining until the robot behavior must be suspended because of uncertainty; this becomes the time limit on autonomous exception handling at the remote site. If the remote cannot complete exception handling in this time or needs assistance, responsibility is passed to the local site, while the remote assumes a 'safe' state. An intelligent assistant then facilitates human intervention, either directing the remote without human assistance or coordinating data collection and presentation to the operator within time limits imposed by the mission. The impact of time on exception handling activities is demonstrated using video camera sensor data.
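The evidential-reasoning step rests on combining belief masses from multiple sensing sources. The paper uses a novel generalization of Dempster-Shafer theory; as background, the classical Dempster's rule of combination (not the paper's generalization) can be sketched as:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over a common
    frame of discernment; keys are frozensets of hypotheses, values are
    masses summing to 1. Mass on the full frame represents uncertainty."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    # Renormalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

In a scheme like the paper's, growth of the mass left on the full frame (the uncertainty) over successive combinations could drive the estimate of time remaining before the behavior must be suspended.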
Eye-in-Hand Manipulation for Remote Handling: Experimental Setup
NASA Astrophysics Data System (ADS)
Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador
2018-03-01
A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER) is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture and equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. This setup benefits from a stereovision system for accurate tracking of the target objects with irregular shapes. The overall environmental setup successfully demonstrates the required robustness and precision that remote handling tasks need.
Framing of grid cells within and beyond navigation boundaries
Savelli, Francesco; Luck, JD; Knierim, James J
2017-01-01
Grid cells represent an ideal candidate to investigate the allocentric determinants of the brain’s cognitive map. Most studies of grid cells emphasized the roles of geometric boundaries within the navigational range of the animal. Behaviors such as novel route-taking between local environments indicate the presence of additional inputs from remote cues beyond the navigational borders. To investigate these influences, we recorded grid cells as rats explored an open-field platform in a room with salient, remote cues. The platform was rotated or translated relative to the room frame of reference. Although the local, geometric frame of reference often exerted the strongest control over the grids, the remote cues demonstrated a consistent, sometimes dominant, countervailing influence. Thus, grid cells are controlled by both local geometric boundaries and remote spatial cues, consistent with prior studies of hippocampal place cells and providing a rich representational repertoire to support complex navigational (and perhaps mnemonic) processes. DOI: http://dx.doi.org/10.7554/eLife.21354.001 PMID:28084992
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. Burgess; M. Noakes; P. Spampinato
This paper presents an evaluation of robotics and remote handling technologies that have the potential to increase the efficiency of handling waste packages at the proposed Yucca Mountain High-Level Nuclear Waste Repository. It is expected that increased efficiency will reduce the cost of operations. The goal of this work was to identify technologies for consideration as potential projects that the U.S. Department of Energy Office of Civilian Radioactive Waste Management, Office of Science and Technology International Programs, could support in the near future, and to assess their "payback" value. The evaluation took into account the robotics and remote handling capabilities planned for incorporation into the current baseline design for the repository, for both surface and subsurface operations. The evaluation, completed at the end of fiscal year 2004, identified where significant advantages in operating efficiencies could accrue by implementing any given robotics technology or approach, and included a road map for a multiyear R&D program for improvements to remote handling technology that support operating enhancements.
Human-like robots for space and hazardous environments
NASA Technical Reports Server (NTRS)
1994-01-01
The three year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to evidence considerable decision making ability, navigation, and path planning skills.
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1977-01-01
To accelerate the development of multi-armed, free-flying satellite manipulators, a fixed-base cooperative manipulation facility is being developed. The work performed on multiple arm cooperation on a free-flying robot is summarized. Research is also summarized on global navigation and control of free-flying space robots. The Locomotion Enhancement via Arm Pushoff (LEAP) approach is described and progress to date is presented.
Evaluating Intention to Use Remote Robotics Experimentation in Programming Courses
ERIC Educational Resources Information Center
Cheng, Pericles L.
2017-01-01
The Digital Agenda for Europe (2015) states that there will be 825,000 unfilled vacancies for Information and Communications Technology by 2020. This lack of IT professionals stems from the small number of students graduating in computer science. To retain more students in the field, teachers can use remote robotic experiments to explain difficult…
Human-like robots for space and hazardous environments
NASA Technical Reports Server (NTRS)
Cogley, Allen; Gustafson, David; White, Warren; Dyer, Ruth; Hampton, Tom (Editor); Freise, Jon (Editor)
1990-01-01
The three year goal for this NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of rough terrain crossing, traversing human made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to evidence considerable decision making ability, navigation and path planning skills. These goals came from the concept that the robot should have the abilities of both a planetary rover and a hazardous waste site scout.
Online Learning Techniques for Improving Robot Navigation in Unfamiliar Domains
2010-12-01
In Proceedings of the 1996 Symposium on Human Interaction and Complex Systems, pages 276–283, 1996. 6.1 [15] Colin Campbell and Kristin P. Bennett...ISBN 0-262-19450-3. 5.1 [104] Jean Scholtz, Jeff Young, Jill L. Drury, and Holly A. Yanco. Evaluation of human-robot interaction awareness in search...2004. 6.1 [147] Holly A. Yanco and Jill L. Drury. Rescuing interfaces: A multi-year study of human-robot interaction at the AAAI robot rescue
Understanding of and applications for robot vision guidance at KSC
NASA Technical Reports Server (NTRS)
Shawaga, Lawrence M.
1988-01-01
The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.
A Robot to Help Make the Rounds
NASA Technical Reports Server (NTRS)
2003-01-01
This paper presents a discussion of the Pyxis HelpMate SecurePak (SP) trackless robotic courier, designed by Transitions Research Corporation to navigate autonomously throughout medical facilities, transporting pharmaceuticals, laboratory specimens, equipment, supplies, meals, medical records, and radiology films between support departments and nursing floors.
Kennedy Space Center, Space Shuttle Processing, and International Space Station Program Overview
NASA Technical Reports Server (NTRS)
Higginbotham, Scott Alan
2011-01-01
Topics include: International Space Station assembly sequence; Electrical power subsystem; Thermal control subsystem; Guidance, navigation and control; Command and data handling; Robotics; Human and robotic integration; Additional modes of re-supply; NASA and international partner control centers; Space Shuttle ground operations.
Point-of-Care Programming for Neuromodulation: A Feasibility Study Using Remote Presence.
Mendez, Ivar; Song, Michael; Chiasson, Paula; Bustamante, Luis
2013-01-01
The expansion of neuromodulation and its indications has resulted in hundreds of thousands of patients with implanted devices worldwide. Because all patients require programming, this growth has created a heavy burden on neuromodulation centers and patients. Remote point-of-care programming may provide patients with real-time access to neuromodulation expertise in their communities. To test the feasibility of remotely programming a neuromodulation device using a remote-presence robot and to determine the ability of an expert programmer to telementor a nonexpert in programming the device. A remote-presence robot (RP-7) was used for remote programming. Twenty patients were randomly assigned to either conventional programming or a robotic session. The expert remotely mentored 10 nurses with no previous experience to program the devices of patients assigned to the remote-presence sessions. Accuracy of programming, adverse events, and satisfaction scores for all participants were assessed. There was no difference in the accuracy or clinical outcomes of programming between the standard and remote-presence sessions. No adverse events occurred in any session. The patients, nurses, and the expert programmer expressed high satisfaction scores with the remote-presence sessions. This study establishes the proof-of-principle that remote programming of neuromodulation devices using telepresence and expert telementoring of an individual with no previous experience to accurately program a device is feasible. We envision a time in the future when patients with implanted devices will have real-time access to neuromodulation expertise from the comfort of their own home.
Shepherd, Robert F.; Ilievski, Filip; Choi, Wonjae; Morin, Stephen A.; Stokes, Adam A.; Mazzeo, Aaron D.; Chen, Xin; Wang, Michael; Whitesides, George M.
2011-01-01
This manuscript describes a unique class of locomotive robot: A soft robot, composed exclusively of soft materials (elastomeric polymers), which is inspired by animals (e.g., squid, starfish, worms) that do not have hard internal skeletons. Soft lithography was used to fabricate a pneumatically actuated robot capable of sophisticated locomotion (e.g., fluid movement of limbs and multiple gaits). This robot is quadrupedal; it uses no sensors, only five actuators, and a simple pneumatic valving system that operates at low pressures (< 10 psi). A combination of crawling and undulation gaits allowed this robot to navigate a difficult obstacle. This demonstration illustrates an advantage of soft robotics: They are systems in which simple types of actuation produce complex motion. PMID:22123978
Automatic Operation For A Robot Lawn Mower
NASA Astrophysics Data System (ADS)
Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.
1987-02-01
A domestic mobile robot, a lawn mower that performs in an automatic operation mode, has been built in the Center for Robotics Research, University of Cincinnati. The robot lawn mower automatically completes its work using a region-filling operation, a new kind of path planning for mobile robots. Several strategies for region-filling path planning have been developed for a partly known or an unknown environment. An advanced omnidirectional navigation system and a multisensor-based control system are also used in the automatic operation. Research on the robot lawn mower, especially on region-filling path planning, is significant for industrial and agricultural applications.
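One simple region-filling strategy is a boustrophedon (back-and-forth) sweep over a rectangular grid of cells. This is an illustrative sketch of the general idea, not the paper's strategies, which also handle partly known and unknown environments:

```python
def region_fill(rows, cols):
    """Boustrophedon coverage path for a rectangular region:
    sweep each row of cells, alternating direction, so every
    cell is visited exactly once."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path
```

For irregular or obstacle-filled regions, the region would first be decomposed into obstacle-free sub-regions, each swept in this manner.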
Interactive Exploration Robots: Human-Robotic Collaboration and Interactions
NASA Technical Reports Server (NTRS)
Fong, Terry
2017-01-01
For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.
2013-09-01
Width Modulation; QuarC Quanser Real-time Control; RC Remote Controlled; RPV Remotely Piloted Vehicles; SLAM Simultaneous Localization and Mapping; UAV ... development of the following systems: 1. Navigation (GPS, Lidar, etc.); 2. Communication (Datalink); 3. Ground Control Station (GUI, software programming)
Lunar rover technology demonstrations with Dante and Ratler
NASA Technical Reports Server (NTRS)
Krotkov, Eric; Bares, John; Katragadda, Lalitesh; Simmons, Reid; Whittaker, Red
1994-01-01
Carnegie Mellon University has undertaken a research, development, and demonstration program to enable a robotic lunar mission. The two-year mission scenario is to traverse 1,000 kilometers, revisiting the historic sites of Apollo 11, Surveyor 5, Ranger 8, Apollo 17, and Lunokhod 2, and to return continuous live video amounting to more than 11 terabytes of data. Our vision blends autonomously safeguarded user driving with autonomous operation augmented with rich visual feedback, in order to enable facile interaction and exploration. The resulting experience is intended to attract mass participation and evoke strong public interest in lunar exploration. The encompassing program that forwards this work is the Lunar Rover Initiative (LRI). Two concrete technology demonstration projects currently advancing the Lunar Rover Initiative are: (1) The Dante/Mt. Spurr project, which, at the time of this writing, is sending the walking robot Dante to explore the Mt. Spurr volcano, in rough terrain that is a realistic planetary analogue. This project will generate insights into robot system robustness in harsh environments, and into remote operation by novices; and (2) The Lunar Rover Demonstration project, which is developing and evaluating key technologies for navigation, teleoperation, and user interfaces in terrestrial demonstrations. The project timetable calls for a number of terrestrial traverses incorporating teleoperation and autonomy: traverses over natural terrain this year, 10 km in 1995, and 100 km in 1996. This paper will discuss the goals of the Lunar Rover Initiative and then focus on the present state of the Dante/Mt. Spurr and Lunar Rover Demonstration projects.
ROMPS critical design review. Volume 2: Robot module design documentation
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1992-01-01
The robot module design documentation for the Remote Operated Materials Processing in Space (ROMPS) experiment is compiled. This volume presents the following information: robot module modifications; Easylab commands definitions and flowcharts; Easylab program definitions and flowcharts; robot module fault conditions and structure charts; and C-DOC flow structure and cross references.
Current status of endovascular catheter robotics.
Lumsden, Alan B; Bismuth, Jean
2018-06-01
In this review, we will detail the evolution of endovascular therapy as the basis for the development of catheter-based robotics. In parallel, we will outline the evolution of robotics in the surgical space and how the convergence of technology and the entrepreneurs who push this evolution have led to the development of endovascular robots. The current state-of-the-art and future directions and potential are summarized for the reader. Information in this review has been drawn primarily from our personal clinical and preclinical experience in use of catheter robotics, coupled with some ground-breaking work reported from a few other major centers who have embraced the technology's capabilities and opportunities. Several case studies demonstrating the unique capabilities of a precisely controlled catheter are presented. Most of the preclinical work was performed in the advanced imaging and navigation laboratory. In this unique facility, the interface of advanced imaging techniques and robotic guidance is being explored. Although this procedure employs a very high-tech approach to navigation inside the endovascular space, we have conveyed the kind of opportunities that this technology affords to integrate 3D imaging and 3D control. Further, we present the opportunity of semi-autonomous motion of these devices to a target. For the interventionist, enhanced precision can be achieved in a nearly radiation-free environment.
Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra
2011-01-01
Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. Over the last three decades, ICT has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this context, medical robotics is an expansion of service and professional robotics, and other technologies, such as surgical navigation, have been introduced, especially in minimally invasive surgery. Localization systems also provide high-precision treatments in radiotherapy and radiosurgery. Virtual or augmented reality plays a role both in surgical training and planning and in safe rehabilitation during the first stage of recovery from neurological diseases. In the chronic phase of motor diseases, robotics also helps with special assistive devices and prostheses. Although the actual need for and advantage of navigation, localization, and robotics in surgery and therapy were doubted in the past, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has greatly widened the field of applications of these technologies, making it likely that their presence will increase dramatically in the near future, aided by the generational change of end users and the increasing demand for quality in health-care delivery and management.
A tesselated probabilistic representation for spatial robot perception and navigation
NASA Technical Reports Server (NTRS)
Elfes, Alberto
1989-01-01
The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
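The incremental Bayesian update described above is commonly implemented in log-odds form; the sketch below uses that standard formulation with made-up sensor-model probabilities, not values from the paper.

```python
import math

# One-cell sketch of the Occupancy Grid update in log-odds form.
# The sensor-model probabilities p_hit/p_miss are illustrative assumptions.

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_cell(l_prior, hit, p_hit=0.7, p_miss=0.3):
    """Fold one range reading into a cell's log-odds occupancy estimate
    (hit=True means the beam ended in this cell)."""
    return l_prior + logodds(p_hit if hit else p_miss)

l = 0.0                              # prior: P(occupied) = 0.5
for hit in (True, True, False):      # two hits, one miss
    l = update_cell(l, hit)
print(round(prob(l), 2))             # net evidence favors "occupied"
```

Because the update is a sum, readings from several sensors and viewpoints can be folded in one at a time, in any order, which is exactly what makes the incremental map update cheap.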
Thorough exploration of complex environments with a space-based potential field
NASA Astrophysics Data System (ADS)
Kenealy, Alina; Primiano, Nicholas; Keyes, Alex; Lyons, Damian M.
2015-01-01
Robotic exploration, for the purposes of search and rescue or explosive device detection, can be improved by using a team of multiple robots. Potential field navigation methods offer natural and efficient distributed exploration algorithms in which team members are mutually repelled to spread out and cover the area efficiently. However, they also suffer from field minima issues. Liu and Lyons proposed a Space-Based Potential Field (SBPF) algorithm that disperses robots efficiently and also ensures they are driven in a distributed fashion to cover complex geometry. In this paper, the approach is modified to handle two problems with the original SBPF method: fast exploration of enclosed spaces, and fast navigation of convex obstacles. Firstly, a "gate-sensing" function was implemented. The function draws the robot to narrow openings, such as doors or corridors that it might otherwise pass by, to ensure every room can be explored. Secondly, an improved obstacle field conveyor belt function was developed which allows the robot to avoid walls and barriers while using their surface as a motion guide to avoid being trapped. Simulation results, where the modified SPBF program controls the MobileSim Pioneer 3-AT simulator program, are presented for a selection of maps that capture difficult to explore geometries. Physical robot results are also presented, where a team of Pioneer 3-AT robots is controlled by the modified SBPF program. Data collected prior to the improvements, new simulation results, and robot experiments are presented as evidence of performance improvements.
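A bare-bones version of the potential-field dispersion idea (not the SBPF algorithm itself) can be written as one gradient step per robot, with attraction to a goal and short-range repulsion from teammates; all gains and the cutoff radius below are illustrative assumptions.

```python
import math

# Generic potential-field step: attraction to a goal plus short-range
# repulsion from other robots. Gains/cutoff are illustrative, not SBPF's.

def field_step(pos, goal, others, k_att=1.0, k_rep=0.5, cutoff=2.0):
    """Return the next position after one small gradient step."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < cutoff:               # repel only within the cutoff
            fx += k_rep * dx / d**2
            fy += k_rep * dy / d**2
    return (pos[0] + 0.1 * fx, pos[1] + 0.1 * fy)

p = field_step((0.0, 0.0), (10.0, 0.0), others=[(0.0, 1.0)])
print(p)   # moves toward the goal and away from the nearby teammate
```

The field-minima problem the paper addresses shows up when the attractive and repulsive terms cancel; fixes such as the gate-sensing function add extra terms that pull the robot through narrow openings it would otherwise skirt.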
ODYSSEUS autonomous walking robot: The leg/arm design
NASA Technical Reports Server (NTRS)
Bourbakis, N. G.; Maas, M.; Tascillo, A.; Vandewinckel, C.
1994-01-01
ODYSSEUS is an autonomous walking robot that makes use of three wheels and three legs for its movement in the free navigation space. More specifically, it uses its autonomous wheels to move around in environments where the surface is smooth and even. However, when there are low obstacles, stairs, or slightly uneven terrain in the navigation environment, the robot uses both wheels and legs to travel efficiently. In this paper we present the detailed hardware design and the simulated behavior of the extended leg/arm part of the robot, since it plays a very significant role in the robot's actions (movements, selection of objects, etc.). The leg/arm consists of three major parts. The first part is a pipe attached to the robot base with a flexible 3-D joint; this pipe has a rotating bar as an extended part, which terminates in a 3-D flexible joint. The second part of the leg/arm is a pipe similar to the first, whose extended bar ends at a 2-D joint. The last part of the leg/arm is a clip-hand, used for picking up several small, lightweight objects; when in its 'closed' mode, it serves as a supporting part of the robot leg. The entire leg/arm is controlled and synchronized by a microcontroller (68HC11) attached to the robot base.
Stanford Aerospace Research Laboratory research overview
NASA Technical Reports Server (NTRS)
Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.
1993-01-01
Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.
A 6-DOF parallel bone-grinding robot for cervical disc replacement surgery.
Tian, Heqiang; Wang, Chenchen; Dang, Xiaoqing; Sun, Lining
2017-12-01
Artificial cervical disc replacement surgery has become an effective and mainstream treatment for cervical disease, an increasingly common and serious problem for people with sedentary work. To improve cervical disc replacement surgery significantly, a 6-DOF parallel bone-grinding robot is developed for cervical bone grinding under image navigation and a surgical plan. The mechanical design and low-level control of the bone-grinding robot are described. Robot navigation is realized by optical positioning, with the spatial registration coordinate systems defined. A parametric bone-grinding plan and high-level control have been developed for plane grinding of the cervical top and tail endplates with a cylindrical grinding drill, and for spherical grinding of the two articular bone surfaces with a ball grinding drill. Finally, the surgical workflow for a robot-assisted cervical disc replacement procedure is presented. The experimental results verified the key technologies and performance of the robot-assisted surgery system concept, pointing to a promising clinical application with high operability. Study innovations, limitations, and future work are discussed, and conclusions are summarized. This bone-grinding robot is still at an initial stage, and many problems remain to be solved from a clinical point of view; nevertheless, the technique is promising and can provide good support for surgeons in future clinical work.
NASA Astrophysics Data System (ADS)
Endo, Yoichiro; Balloch, Jonathan C.; Grushin, Alexander; Lee, Mun Wai; Handelman, David
2016-05-01
Control of current tactical unmanned ground vehicles (UGVs) is typically accomplished through two alternative modes of operation, namely, low-level manual control using joysticks and high-level planning-based autonomous control. Each mode has its own merits as well as inherent mission-critical disadvantages. Low-level joystick control is vulnerable to communication delay and degradation, and high-level navigation often depends on uninterrupted GPS signals and/or energy-emissive (non-stealth) range sensors such as LIDAR for localization and mapping. To address these problems, we have developed a mid-level control technique where the operator semi-autonomously drives the robot relative to visible landmarks that are commonly recognizable by both humans and machines such as closed contours and structured lines. Our novel solution relies solely on optical and non-optical passive sensors and can be operated under GPS-denied, communication-degraded environments. To control the robot using these landmarks, we developed an interactive graphical user interface (GUI) that allows the operator to select landmarks in the robot's view and direct the robot relative to one or more of the landmarks. The integrated UGV control system was evaluated based on its ability to robustly navigate through indoor environments. The system was successfully field tested with QinetiQ North America's TALON UGV and Tactical Robot Controller (TRC), a ruggedized operator control unit (OCU). We found that the proposed system is indeed robust against communication delay and degradation, and provides the operator with steady and reliable control of the UGV in realistic tactical scenarios.
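The mid-level "drive relative to a landmark" idea can be sketched as a bearing-servo loop: the operator selects a landmark and a desired bearing, and the controller turns the robot to hold it. The gain, rate limit, and bearing convention below are assumptions for illustration, not the TALON/TRC controller.

```python
import math

# Hypothetical bearing servo for landmark-relative driving; the gain,
# rate limit, and sign convention are our assumptions, not the paper's.

def steer_to_landmark(bearing_meas, bearing_des, k_p=0.8, max_rate=1.0):
    """Angular-rate command (rad/s) that turns the robot so the selected
    landmark drifts toward the desired bearing in the robot frame."""
    err = bearing_des - bearing_meas
    err = math.atan2(math.sin(err), math.cos(err))  # wrap to (-pi, pi]
    return max(-max_rate, min(max_rate, k_p * err))

# Landmark currently 0.5 rad left of where the operator wants it:
cmd = steer_to_landmark(0.5, 0.0)
print(cmd)   # negative command: turn the other way
```

Because the command depends only on the landmark's bearing in the camera frame, no GPS fix or emissive range sensor is needed, which matches the passive-sensing constraint the paper describes.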
Deployment and early experience with remote-presence patient care in a community hospital.
Petelin, J B; Nelson, M E; Goodman, J
2007-01-01
The introduction of the RP6 (InTouch Health, Santa Barbara, CA, USA) remote-presence "robot" appears to offer a useful telemedicine device. The authors describe the deployment and early experience with the RP6 in a community hospital and provided a live demonstration of the system on April 16, 2005 during the Emerging Technologies Session of the 2005 SAGES Meeting in Fort Lauderdale, Florida. The RP6 is a 5-ft 4-in. tall, 215-pound robot that can be remotely controlled from an appropriately configured computer located anywhere on the Internet (i.e., on this planet). The system is composed of a control station (a computer at the central station), a mechanical robot, a wireless network (at the remote facility: the hospital), and a high-speed Internet connection at both the remote (hospital) and central locations. The robot itself houses a rechargeable power supply. Its hardware and software allows communication over the Internet with the central station, interpretation of commands from the central station, and conversion of the commands into mechanical and nonmechanical actions at the remote location, which are communicated back to the central station over the Internet. The RP6 system allows the central party (e.g., physician) to control the movements of the robot itself, see and hear at the remote location (hospital), and be seen and heard at the remote location (hospital) while not physically there. Deployment of the RP6 system at the hospital was accomplished in less than a day. The wireless network at the institution was already in place. The control station setup time ranged from 1 to 4 h and was dependent primarily on the quality of the Internet connection (bandwidth) at the remote locations. Patients who visited with the RP6 on their discharge day could be discharged more than 4 h earlier than with conventional visits, thereby freeing up hospital beds on a busy med-surg floor. 
Patient visits during "off hours" (nights and weekends) were three times more efficient than conventional visits during these times (20 min per visit vs 40-min round trip travel + 20-min visit). Patients and nursing personnel both expressed tremendous satisfaction with the remote-presence interaction. The authors' early experience suggests a significant benefit to patients, hospitals, and physicians with the use of RP6. The implications for future development are enormous.
Development and demonstration of a telerobotic excavation system
NASA Technical Reports Server (NTRS)
Burks, Barry L.; Thompson, David H.; Killough, Stephen M.; Dinkins, Marion A.
1994-01-01
Oak Ridge National Laboratory is developing remote excavation technologies for the Department of Energy's (DOE) Office of Technology Development, Robotics Technology Development Program, and for the Department of Defense (DOD) Project Manager for Ammunition Logistics. This work is being done to meet the need for remote excavation and removal of radioactive and contaminated buried waste at several DOE sites and of unexploded ordnance at DOD sites. System requirements are based on the need to uncover and remove waste from burial sites in a way that does not cause unnecessary personnel exposure or additional environmental contamination. Goals for the current project are to demonstrate dexterous control of a backhoe with force feedback and to implement robotic operations that will improve productivity. The Telerobotic Small Emplacement Excavator is a prototype system that incorporates the needed robotic and telerobotic capabilities on a commercially available platform. The ability to add remote dexterous teleoperation and robotic operating modes is intended to be adaptable to other commercially available excavator systems.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1990-01-01
New control techniques for self-contained, autonomous, free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects was developed and carried out using laboratory models of satellite robots and a flexible manipulator. The second-generation space robot models use air-cushion-vehicle (ACV) technology to simulate, in two dimensions, the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free-Floating Robot, Cooperative Manipulation from a Free-Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.
2014-05-19
CAPE CANAVERAL, Fla. – Students from Oakton Community College in Illinois prepare their robot for NASA’s Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 35 teams from around the U.S. have designed and built remote-controlled robots for the mining competition. The competition is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and mathematics, or STEM, fields by expanding opportunities for student research and design. Teams use their remote-controlled robotics to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to Martian soil. The objective of the challenge is to see which team’s robot can collect and move the most regolith within a specified amount of time. For more information, visit www.nasa.gov/nasarmc. Photo credit: NASA/Ben Smegelsky
2014-05-20
CAPE CANAVERAL, Fla. – College and university teams prepare their robots for NASA’s Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Ben Smegelsky
2014-05-20
CAPE CANAVERAL, Fla. – A college team prepares its robot for a trial run at NASA’s Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Ben Smegelsky
2014-05-19
CAPE CANAVERAL, Fla. – College students prepare their robot for NASA’s Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Ben Smegelsky
Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm
Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae
2017-01-01
Purpose: Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We propose a master-slave robotic colonoscopy system that can be controlled from a remote site using a conventional colonoscope. Materials and Methods: The master and slave robots were developed to use a conventional flexible colonoscope. The robotic colonoscopic procedure was performed on a colonoscope training model by one expert endoscopist and two inexperienced engineers. To provide haptic sensation, the insertion force and rotating torque were measured and sent to the master robot. Results: The slave robot was developed to hold the colonoscope and its knob, and to perform insertion, rotation, and the two tilting motions of the colonoscope. The master robot was designed to teach motions to the slave robot. The measured force and torque were scaled down by one tenth to provide the operator with reflected force and torque at the haptic device. The haptic feedback system was successful and helped the operator feel the constraint force and torque in the colon. The insertion time using the robotic system decreased with repeated procedures. Conclusion: This work proposes a robotic approach to colonoscopy using a haptic feedback algorithm; the robotic device could effectively perform colonoscopy with reduced burden for the operator and comparable safety for patients at a remote site. PMID:27873506
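The one-tenth force/torque scaling described above can be sketched in a few lines; the clipping limit added here is a safety assumption of ours, not a parameter reported by the authors.

```python
# Haptic reflection path: scale measured force/torque by one tenth (as
# in the paper) and clip them; the limit value is our added assumption.

def reflect(force_n, torque_nm, divisor=10.0, limit=5.0):
    """Return the (force, torque) to render at the haptic master."""
    clip = lambda v: max(-limit, min(limit, v))
    return clip(force_n / divisor), clip(torque_nm / divisor)

print(reflect(12.0, -4.0))   # (1.2, -0.4)
```

Down-scaling keeps the reflected loads well inside what a desktop haptic device can render while preserving the sign and relative magnitude of the constraint forces the operator needs to feel.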
A Coordinated Control Architecture for Disaster Response Robots
2016-01-01
... to use these same algorithms to provide navigation odometry for the vehicle motions when the robot is driving. ... We relied on the fact that the vehicle quickly comes to rest when the accelerator pedal is not being pressed.
Telerobot local-remote control architecture for space flight program applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John
1993-01-01
The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.
Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements
Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.
2012-01-01
This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The review includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on fusing the delayed and OOS measurements provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulation results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how including the selected OOS algorithm in the control software lets the robot navigate successfully in spite of receiving many OOS measurements. Finally, the comparison highlights that the selected OOS algorithm is not only among the best performing in the comparison, but also has the lowest computational and memory cost. PMID:22736962
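The abstract describes OOS fusion only at a high level. As a concrete illustration (not the authors' algorithm), the following is a minimal sketch of one standard way to handle an out-of-sequence measurement in a 1-D constant-velocity Kalman filter: store past states, roll back to just before the late measurement's timestamp, and replay the buffered measurements in time order. The `OOSKalman1D` class and all noise parameters are invented for illustration.

```python
import numpy as np

class OOSKalman1D:
    """1-D constant-velocity Kalman filter that handles out-of-sequence
    measurements by rollback and replay (hypothetical parameters)."""

    def __init__(self, q=0.01, r=0.1):
        self.q, self.r = q, r                 # process / measurement noise
        self.x = np.zeros(2)                  # state: [position, velocity]
        self.P = np.eye(2)
        self.t = 0.0
        self.history = [(0.0, self.x.copy(), self.P.copy())]
        self.buffer = []                      # all (timestamp, measurement) seen

    def _predict(self, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(2)
        self.t += dt

    def _update(self, z):
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.r
        K = self.P @ H.T / S                  # Kalman gain, shape (2, 1)
        self.x = self.x + (K * (z - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

    def add_measurement(self, t_meas, z):
        if t_meas < self.t:
            # OOS: roll back to the last stored state before t_meas...
            while len(self.history) > 1 and self.history[-1][0] >= t_meas:
                self.history.pop()
            t0, x0, P0 = self.history[-1]
            self.t, self.x, self.P = t0, x0.copy(), P0.copy()
        self.buffer.append((t_meas, z))
        self.buffer.sort()
        # ...then replay every buffered measurement newer than the state.
        for tm, zm in self.buffer:
            if tm > self.t:
                self._predict(tm - self.t)
                self._update(zm)
                self.history.append((self.t, self.x.copy(), self.P.copy()))
```

A real filter would prune the buffer and history to bound memory; rollback-and-replay is only one of several OOS strategies (retrodiction-based updates avoid reprocessing at the cost of more algebra).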
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems comprises move-to-grasp problems, where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motions toward the object. However, after the robot has located and approached the object, it must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
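The coarse-to-fine idea can be illustrated with a toy sketch. This is not Robonaut's control code; `run_refining_sequence`, the gains, and the tolerances are all hypothetical. Each stage runs until its own convergence predicate holds, then hands off to a finer stage:

```python
# Run a sequence of controllers; each stays active until its
# convergence predicate is satisfied, then hands off to the next.
def run_refining_sequence(state, goal, stages):
    for controller, converged in stages:
        while not converged(state, goal):
            state = controller(state, goal)
    return state

# Toy 1-D "navigate, then grasp" example with hypothetical gains:
# gross base motion until near the object, then precise servoing.
coarse = (lambda s, g: s + 0.5 * (g - s), lambda s, g: abs(g - s) < 1.0)
fine = (lambda s, g: s + 0.9 * (g - s), lambda s, g: abs(g - s) < 0.01)

final = run_refining_sequence(0.0, 10.0, [coarse, fine])
```

Each successive stage operates inside the error bound the previous one established, which mirrors the paper's observation that variance relative to the target shrinks stage by stage.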
Dual use display systems for telerobotics
NASA Technical Reports Server (NTRS)
Massimino, Michael J.; Meschler, Michael F.; Rodriguez, Alberto A.
1994-01-01
This paper describes a telerobotics display system, the Multi-mode Manipulator Display System (MMDS), that has applications for a variety of remotely controlled tasks. Designed primarily to assist astronauts with the control of space robotics systems, the MMDS has applications for ground control of space robotics as well as for toxic waste cleanup, undersea, remotely operated vehicles, and other environments which require remote operations. The MMDS has three modes: (1) Manipulator Position Display (MPD) mode, (2) Joint Angle Display (JAD) mode, and (3) Sensory Substitution (SS) mode. These three modes are discussed in the paper.
Robot Tracer with Visual Camera
NASA Astrophysics Data System (ADS)
Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin
2017-12-01
A robot is a versatile tool that can replace human work functions, and it is a device that can be reprogrammed according to user needs. Wireless networks for remote monitoring can be exploited to build a robot whose movement is monitored against a blueprint, so that the path the robot chooses can be tracked. This information is sent over a wireless network. For vision, the robot uses a high-resolution camera to help the operator control the robot and observe the surrounding environment.
Robotic anesthesia: not the realm of science fiction any more.
Hemmerling, Thomas M; Terrasini, Nora
2012-12-01
Robots are present in surgery and, to a much lesser extent, in the field of anesthesia. The purpose of this review is to show the latest and most important findings in robotic anesthesia. Moreover, this review argues for the importance and utility of robots in anesthesia. Over the years, many closed-loop systems have been developed; they were able to control only one or two of the three components of anesthesia: hypnosis, analgesia, or muscle relaxation. McSleepy controls all three components of anesthesia, from induction to emergence. Telemedical applications have led not only to remote monitoring but even to remotely controlled anesthesia, such as transcontinental anesthesia. A new closed-loop system for sedation, called Sedasys, could revolutionize the field of non-operating-room sedation. 'Manual robots' are used to help and replace anesthesiologists performing anesthesia procedures. Specific robots for intubation and nerve blocks have been developed and tested in humans. Robots can improve performance in anesthesia and healthcare. Closed-loop systems are the basis for pharmacological robots. Safe anesthetic care might be delivered through teleanesthesia whenever qualified personnel are not available or need support. Mechanical robots are being developed for anesthesia care.
Remote presence proctoring by using a wireless remote-control videoconferencing system.
Smith, C Daniel; Skandalakis, John E
2005-06-01
Remote presence in an operating room, allowing an experienced surgeon to proctor another surgeon, has been promised through robotics and telesurgery solutions. Although several such systems have been developed and commercialized, little progress has been made using telesurgery for anything more than live demonstrations of surgery. This pilot project explored the use of a new videoconferencing capability to determine whether it offers advantages over existing systems. The videoconferencing system used is a PC-based system with a flat-screen monitor and an attached camera, mounted on a remotely controlled platform. This device is controlled from a remotely placed PC-based videoconferencing computer outfitted with a joystick. Using the public Internet and a wireless router at the client site, a surgeon at the control station can manipulate the videoconferencing system. Controls include navigating the unit around the room and moving the flat screen/camera portion like a head looking up/down and right/left. This system (InTouch Medical, Santa Barbara, CA) was used to proctor medical students during an anatomy class cadaver dissection. The ability of the remote surgeon to effectively monitor the students' dissections and direct their activities was assessed subjectively by the students and the surgeon. The device was very effective at providing a controllable and interactive presence in the anatomy lab. Students felt they were interacting with a person rather than a video screen and quickly forgot that the surgeon was not in the room. The ability to move the device within the environment, rather than just observe it from multiple fixed camera angles, gave the surgeon a similar feel of true presence. A remote-controlled videoconferencing system provides a more real experience for both student and proctor. Future development of such a device could greatly facilitate progress in the implementation of remote presence proctoring.
Achievement of a Sense of Operator Presence in Remote Manipulation.
1980-10-01
[Front-matter excerpt: cites R. L. Sun and H. F. M. Van der Loos, "Terminal Device Centered Control of Manipulation for a Rehabilitative Robot" (prepublished paper); Appendix B: Robot Institute of America Information, with preliminary results of a worldwide survey, robot manufacturers and distributors, and robot researchers; list of figures follows.]
Learning for autonomous navigation
NASA Technical Reports Server (NTRS)
Angelova, Anelia; Howard, Andrew; Matthies, Larry; Tang, Benyang; Turmon, Michael; Mjolsness, Eric
2005-01-01
Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been slowed by the limited lookahead range of 3-D sensors and by the difficulty of preprogramming systems to understand the traversability of the wide variety of terrain they can encounter.
Kinematic analysis and simulation of a substation inspection robot guided by magnetic sensor
NASA Astrophysics Data System (ADS)
Xiao, Peng; Luan, Yiqing; Wang, Haipeng; Li, Li; Li, Jianxiang
2017-01-01
In order to improve the performance of the magnetic navigation system used by the substation inspection robot, its kinematic characteristics are analyzed based on a simplified magnetic guiding system model, and a simulation is then executed to verify the soundness of the analysis procedure. Finally, some suggestions are drawn out that will help guide the design of future inspection robot systems.
Robotic air vehicle. Blending artificial intelligence with conventional software
NASA Technical Reports Server (NTRS)
Mcnulty, Christa; Graham, Joyce; Roewer, Paul
1987-01-01
The Robotic Air Vehicle (RAV) system is described. The program's objectives were to design, implement, and demonstrate cooperating expert systems for piloting robotic air vehicles. The development of this system merges conventional programming used in passive navigation with Artificial Intelligence techniques such as voice recognition, spatial reasoning, and expert systems. The individual components of the RAV system are discussed as well as their interactions with each other and how they operate as a system.
On-Line Point Positioning with Single Frame Camera Data
1992-03-15
tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data The project has been defined as "On-line point...development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation...Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control. These applications are applied to robot navigation.
NASA Technical Reports Server (NTRS)
1997-01-01
Developed largely through a Small Business Innovation Research contract through Langley Research Center, Interactive Picture Corporation's IPIX technology provides spherical photography: a panoramic, 360-degree view. NASA found the technology appropriate for use in guiding space robots, in the space shuttle and space station programs, as well as for research in cryogenic wind tunnels and for remote docking of spacecraft. Images of any location are captured in their entirety in a 360-degree immersive digital representation. The viewer can navigate to any desired direction within the image. Several car manufacturers already use IPIX to give viewers a look at their latest line-up of automobiles. Another application is non-invasive surgery. By using OmniScope, surgeons can look more closely at various parts of an organ with medical viewing instruments now in use. Potential applications of IPIX technology include viewing of homes for sale, hotel accommodations, museum sites, news events, and sports stadiums.
Learning and Prediction of Slip from Visual Information
NASA Technical Reports Server (NTRS)
Angelova, Anelia; Matthies, Larry; Helmick, Daniel; Perona, Pietro
2007-01-01
This paper presents an approach for slip prediction from a distance for wheeled ground robots using visual information as input. Large amounts of slippage which can occur on certain surfaces, such as sandy slopes, will negatively affect rover mobility. Therefore, obtaining information about slip before entering such terrain can be very useful for better planning and avoiding these areas. To address this problem, terrain appearance and geometry information about map cells are correlated to the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information only. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains including: soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers.
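The two-stage method described (terrain recognition followed by terrain-specific nonlinear regression) can be sketched as follows. The feature prototypes and slip curves below are invented toy values, not the paper's learned models:

```python
import numpy as np

# Stage 1: nearest-prototype terrain recognition from appearance features.
# Stage 2: terrain-specific slip model evaluated at the measured slope.
terrain_prototypes = {"soil": np.array([0.2, 0.3]),
                      "sand": np.array([0.8, 0.6])}      # toy feature means
slip_models = {"soil": lambda slope: 0.02 * slope,       # mild linear slip
               "sand": lambda slope: 0.01 * slope ** 2}  # grows fast on slopes

def predict_slip(features, slope_deg):
    # Classify the map cell's terrain, then apply that terrain's model.
    terrain = min(terrain_prototypes,
                  key=lambda t: np.linalg.norm(features - terrain_prototypes[t]))
    return terrain, slip_models[terrain](slope_deg)

terrain, slip = predict_slip(np.array([0.75, 0.55]), 10.0)
```

In the paper the classifier works on appearance of distant map cells and the regression is fit from the rover's own slip measurements, so slip can be predicted before entering the terrain.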
New Control Paradigms for Resources Saving: An Approach for Mobile Robots Navigation.
Socas, Rafael; Dormido, Raquel; Dormido, Sebastián
2018-01-18
In this work, an event-based control scheme is presented. The proposed system has been developed to solve control problems arising in the field of Networked Control Systems (NCS). Several models and methodologies are proposed to measure the consumption of different resources; the use of bandwidth, computational load and energy has been investigated. The analysis shows how the parameters of the system impact resource efficiency. Moreover, the proposed system has been compared with its equivalent discrete-time solution. In the experiments, an NCS application for mobile robot navigation has been set up and its resource usage efficiency analysed.
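One common event trigger in NCS work is send-on-delta, where the sensor transmits only when the signal has changed by more than a threshold since the last transmission; the abstract does not state which trigger the authors use, so the sketch below (with invented sample data and threshold) is purely illustrative of how event-based schemes save bandwidth relative to periodic sampling:

```python
# Transmit a sample over the network only when it differs from the
# last transmitted value by more than delta.
def send_on_delta(samples, delta):
    sent, last = [], None
    for k, x in enumerate(samples):
        if last is None or abs(x - last) > delta:
            sent.append((k, x))       # (sample index, transmitted value)
            last = x
    return sent

samples = [0.0, 0.05, 0.12, 0.13, 0.30, 0.31, 0.32, 0.55]
events = send_on_delta(samples, 0.1)  # 4 transmissions instead of 8
```

The threshold trades tracking error against network and energy usage, which is exactly the resource/performance trade-off the paper analyses.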
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and the background is the three-dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three-dimensional texture of various regions of the scene. A method is presented in which scanned, laser-projected lines of structured light, viewed by a stereoscopically located single video camera, produce an image in which the three-dimensional characteristics of the scene are represented by the discontinuities of the projected lines. This image is conducive to processing with simple regional operators to classify regions as pathway or background. The design of some operators and application methods, and demonstrations on sample images, are presented. This method provides rapid and robust scene segmentation that has been implemented on a microcomputer in near real time, and should result in higher-speed and more reliable robotic or autonomous navigation in unstructured environments.
Key Issues for Navigation and Time Dissemination in NASA's Space Exploration Program
NASA Technical Reports Server (NTRS)
Nelson, R. A.; Brodsky, B.; Oria, A. J.; Connolly, J. W.; Sands, O. S.; Welch, B. W.; Ely T.; Orr, R.; Schuchman, L.
2006-01-01
The renewed emphasis on robotic and human missions within NASA's space exploration program warrants a detailed consideration of how the positions of objects in space will be determined and tracked, whether they be spacecraft, human explorers, robots, surface vehicles, or science instrumentation. The Navigation Team within the NASA Space Communications Architecture Working Group (SCAWG) has addressed several key technical issues in this area, and the principal findings are reported here. For navigation in the vicinity of the Moon, a variety of satellite constellations have been investigated that provide global or regional surface position determination and timing services analogous to those offered by GPS at Earth. In the vicinity of Mars, there are options for satellite constellations not available at the Moon due to the gravitational perturbations from Earth, such as two satellites in an areostationary orbit. Alternate methods of radiometric navigation are considered, including one- and two-way signals, as well as autonomous navigation. The use of a software radio capable of receiving all available signal sources, such as GPS, pseudolites, and communication channels, is discussed. Methods of time transfer and dissemination are also considered in this paper.
Indoor Positioning System Using Magnetic Field Map Navigation and an Encoder System
Kim, Han-Sol; Seo, Woojin; Baek, Kwang-Ryul
2017-01-01
In the indoor environment, variation of the magnetic field is caused by building structures, and magnetic field map navigation is based on this feature. In order to estimate position using this navigation, a three-axis magnetic field must be measured at every point to build a magnetic field map. After the magnetic field map is obtained, the position of the mobile robot can be estimated with a likelihood function whereby the measured magnetic field data and the magnetic field map are used. However, if only magnetic field map navigation is used, the estimated position can have large errors. In order to improve performance, we propose a particle filter system that integrates magnetic field map navigation and an encoder system. In this paper, multiple magnetic sensors and three magnetic field maps (a horizontal intensity map, a vertical intensity map, and a direction information map) are used to update the weights of particles. As a result, the proposed system estimates the position and orientation of a mobile robot more accurately than previous systems. This paper also shows that system performance improves as the number of magnetic sensors increases. Finally, experimental results from the implemented and evaluated system are presented. PMID:28327513
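The fusion described above can be sketched with a simple 1-D particle filter: encoder odometry drives the motion update, and a magnetic map supplies the measurement likelihood used to reweight and resample the particles. The sinusoidal "map", noise levels, and 1-D geometry below are invented for illustration and are far simpler than the paper's three 2-D maps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 1-D magnetic intensity "map": one value per map cell.
field_map = np.sin(np.linspace(0, 2 * np.pi, 100))

def particle_filter_step(particles, weights, encoder_delta, measured_field,
                         motion_noise=1.0, meas_noise=0.05):
    # Motion update from the encoder system (odometry plus noise).
    particles = particles + encoder_delta + rng.normal(0, motion_noise, particles.size)
    particles = np.clip(particles, 0, 99)
    # Measurement update: Gaussian likelihood of the measured field
    # given the map value at each particle's cell.
    expected = field_map[particles.astype(int)]
    weights = weights * np.exp(-0.5 * ((measured_field - expected) / meas_noise) ** 2)
    weights = weights / weights.sum()     # a real filter would guard underflow
    # Resample to avoid weight degeneracy.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

particles = rng.uniform(0, 99, 500)
weights = np.full(500, 1.0 / 500)
true_pos = 20
for _ in range(10):                       # robot advances 3 cells per step
    true_pos += 3
    particles, weights = particle_filter_step(
        particles, weights, 3.0, float(field_map[true_pos]))
```

Because a single field value is ambiguous (many cells share the same intensity), it is the consistency between the odometry increments and the sequence of field readings that collapses the particle cloud onto the true position, which is the core argument for fusing the two sources.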
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.
The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart, fundamentally, from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures, and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.
[Surgical robotics, short state of the art and prospects].
Gravez, P
2003-11-01
State-of-the-art robotized systems developed for surgery are either remotely controlled manipulators that duplicate gestures made by the surgeon (endoscopic surgery applications) or automated robots that execute trajectories defined relative to pre-operative medical imaging (neurosurgery and orthopaedic surgery). This generation of systems primarily applies existing robotics technologies (remote handling systems and so-called "industrial robots") to current surgical practices. It has helped validate the huge potential of surgical robotics, but it suffers from several drawbacks, mainly high costs, excessive dimensions and some lack of user-friendliness. Nevertheless, technological progress lets us anticipate the appearance in the near future of miniaturised surgical robots able to assist the gesture of the surgeon and to enhance his perception of the operation at hand. Thanks to many in-the-body articulated links, these systems will have the capability to perform complex minimally invasive gestures without obstructing the operating theatre. They will also combine the facility of manual piloting with the accuracy and increased safety of computer control, guiding the gestures of the surgeon without impinging on his freedom of action. Lastly, they will allow the surgeon to feel the mechanical properties of the tissues he is operating on through a genuine "remote palpation" function. Most probably, such technological evolutions will lead the way to redesigned surgical procedures taking place inside new operating rooms featuring better integration of all equipment and favouring cooperative work from multidisciplinary and sometimes geographically distributed medical staff.
Lidar Systems for Precision Navigation and Safe Landing on Planetary Bodies
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin; Pierrottet, Diego F.; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.
2011-01-01
The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and manned vehicles with a high degree of precision. Currently, NASA is developing novel lidar sensors aimed at the needs of future planetary landing missions. These lidar sensors are a 3-Dimensional Imaging Flash Lidar, a Doppler Lidar, and a Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain that indicate hazardous features such as rocks, craters, and steep slopes. The elevation maps collected during the approach phase of a landing vehicle, at about 1 km above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground-relative velocity and distance data, allowing for precision navigation to the landing site. Our Doppler lidar utilizes three laser beams pointed in different directions to measure line-of-sight velocities and ranges to the ground from altitudes of over 2 km. Throughout the landing trajectory, starting at altitudes of about 20 km, the Laser Altimeter can provide very accurate ground-relative altitude measurements that are used to improve the vehicle position knowledge obtained from the vehicle navigation system. At altitudes from approximately 15 km to 10 km, either the Laser Altimeter or the Flash Lidar can be used to generate contour maps of the terrain, identifying known surface features such as craters, to perform Terrain Relative Navigation, thus further reducing the vehicle's relative position error. This paper describes the operational capabilities of each lidar sensor and provides a status of their development. Keywords: Laser Remote Sensing, Laser Radar, Doppler Lidar, Flash Lidar, 3-D Imaging, Laser Altimeter, Precision Landing, Hazard Detection
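The three-beam Doppler measurement lends itself to a compact worked example: each beam with unit line-of-sight direction u_i observes the projection u_i·v of the vehicle velocity v, so three non-coplanar beams give a 3x3 linear system for the full velocity vector. The beam geometry below is illustrative, not the flight sensor's actual configuration:

```python
import numpy as np

# Three hypothetical unit line-of-sight vectors: one nadir beam and
# two beams canted 30 degrees off nadir in orthogonal directions.
beams = np.array([[0.0, 0.0, -1.0],
                  [0.5, 0.0, -np.sqrt(0.75)],
                  [0.0, 0.5, -np.sqrt(0.75)]])

def velocity_from_los(los_speeds):
    # Solve beams @ v = los_speeds for the 3-D velocity vector v.
    return np.linalg.solve(beams, los_speeds)

v_true = np.array([1.0, -2.0, 0.5])   # made-up ground-relative velocity
los = beams @ v_true                  # what the three beams would measure
v_est = velocity_from_los(los)
```

In practice the beam directions must be well separated so the matrix stays well-conditioned; ranges to the ground along the same beams can be combined in an analogous way to recover altitude.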
2014-05-22
CAPE CANAVERAL, Fla. – A mining team exits the Caterpillar Mining Area with its robot as another team prepares to lower its robot into the simulated Martian soil during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 35 teams from colleges and universities around the U.S. have designed and built remote-controlled robots for the mining competition. The competition is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and mathematics, or STEM, fields by expanding opportunities for student research and design. Teams use their remote-controlled robots to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to Martian soil. The objective of the challenge is to see which team’s robot can collect and move the most regolith within a specified amount of time. For more information, visit www.nasa.gov/nasarmc. Photo credit: NASA/Kim Shiflett
Robotic Lunar Rover Technologies and SEI Supporting Technologies at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Klarer, Paul R.
1992-01-01
Existing robotic rover technologies at Sandia National Laboratories (SNL) can be applied toward the realization of a robotic lunar rover mission in the near term. Recent activities at the SNL-RVR have demonstrated the utility of existing rover technologies for performing remote field geology tasks similar to those envisioned on a robotic lunar rover mission. Specific technologies demonstrated include low-data-rate teleoperation, multivehicle control, remote site and sample inspection, standard bandwidth stereo vision, and autonomous path following based on both internal dead reckoning and an external position location update system. These activities serve to support the use of robotic rovers for an early return to the lunar surface by demonstrating capabilities that are attainable with off-the-shelf technology and existing control techniques. The breadth of technical activities at SNL provides many supporting technology areas for robotic rover development. These range from core competency areas and microsensor fabrication facilities, to actual space qualification of flight components that are designed and fabricated in-house.
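The internal dead reckoning mentioned above can be illustrated with a standard differential-drive odometry update; this is generic textbook kinematics, not SNL's implementation, and the wheelbase value is arbitrary:

```python
import math

# Integrate left/right wheel distance increments into a pose
# (x, y, heading), using the midpoint heading for the step.
def dead_reckon(pose, d_left, d_right, wheelbase=0.5):
    x, y, theta = pose
    d = (d_left + d_right) / 2.0              # distance traveled this step
    dtheta = (d_right - d_left) / wheelbase   # heading change this step
    return (x + d * math.cos(theta + dtheta / 2.0),
            y + d * math.sin(theta + dtheta / 2.0),
            theta + dtheta)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                           # ten equal straight steps
    pose = dead_reckon(pose, 0.1, 0.1)
```

Dead-reckoning error grows without bound as wheel slip and heading errors accumulate, which is why the rovers described above pair it with an external position-location update system.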
Cognitive memory and mapping in a brain-like system for robotic navigation.
Tang, Huajin; Huang, Weiwei; Narayanamoorthy, Aditya; Yan, Rui
2017-03-01
Electrophysiological studies in animals may provide a great insight into developing brain-like models of spatial cognition for robots. These studies suggest that the spatial ability of animals requires proper functioning of the hippocampus and the entorhinal cortex (EC). The involvement of the hippocampus in spatial cognition has been extensively studied, both in animal as well as in theoretical studies, such as in the brain-based models by Edelman and colleagues. In this work, we extend these earlier models, with a particular focus on the spatial coding properties of the EC and how it functions as an interface between the hippocampus and the neocortex, as proposed by previous work. By realizing the cognitive memory and mapping functions of the hippocampus and the EC, respectively, we develop a neurobiologically-inspired system to enable a mobile robot to perform task-based navigation in a maze environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
HERMIES-3: A step toward autonomous mobility, manipulation, and perception
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.
1989-01-01
HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.
Tracked robot controllers for climbing obstacles autonomously
NASA Astrophysics Data System (ADS)
Vincent, Isabelle
2009-05-01
Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, analyzing complex environments in order to climb obstacles autonomously has seen very little success, owing to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move through the world. This paper presents control algorithms designed to let a tracked mobile robot climb obstacles autonomously by varying its track configuration. Two control algorithms are proposed to solve this locomotion problem. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometry. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology thus combines reactivity with learning. The controllers have been demonstrated in box- and stair-climbing simulations, and the experiments illustrate the effectiveness of the proposed approach for crossing obstacles.
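The reinforcement-learning fallback can be illustrated with a tabular Q-learning update. The state and action names below (terrain phases, track configurations) are hypothetical placeholders for illustration, not the paper's actual state space.

```python
def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update: nudge Q(s, a) toward the
    bootstrapped target r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical discretization: terrain phases and track configurations.
actions = ["flat", "front_up", "rear_up"]
states = ["approach", "on_edge", "climbing", "done"]
Q = {s: {a: 0.0 for a in actions} for s in states}

# Toy transition: raising the front tracks at the obstacle edge pays off.
q_learning_step(Q, "on_edge", "front_up", reward=1.0, next_state="climbing")
```

Over repeated episodes the table comes to prefer the configurations that got the robot unstuck, which is the role the learning layer plays behind the reactive controller.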
Son, Jaebum; Cho, Chang Nho; Kim, Kwang Gi; Chang, Tae Young; Jung, Hyunchul; Kim, Sung Chun; Kim, Min-Tae; Yang, Nari; Kim, Tae-Yun; Sohn, Dae Kyung
2015-06-01
Natural orifice transluminal endoscopic surgery (NOTES) is an emerging surgical technique. We aimed to design, create, and evaluate a new semi-automatic snake robot for NOTES. The snake robot employs the characteristics of both a manual endoscope and a multi-segment snake robot. This robot is inserted and retracted manually, like a classical endoscope, while its shape is controlled using embedded robot technology. The feasibility of a prototype robot for NOTES was evaluated in animals and human cadavers. The transverse stiffness and maneuverability of the snake robot appeared satisfactory. It could be advanced through the anus as far as the peritoneal cavity without any injury to adjacent organs. Preclinical tests showed that the device could navigate the peritoneal cavity. The snake robot has advantages of high transverse force and intuitive control. This new robot may be clinically superior to conventional tools for transanal NOTES.
Hand Gesture Based Wireless Robotic Arm Control for Agricultural Applications
NASA Astrophysics Data System (ADS)
Kannan Megalingam, Rajesh; Bandhyopadhyay, Shiva; Vamsy Vivek, Gedela; Juned Rahi, Muhammad
2017-08-01
One of the major challenges in agriculture is harvesting. It is very hard, and sometimes even unsafe, for workers to go to each plant and pluck fruit. Robotic systems are increasingly combined with new technologies to automate or semi-automate labour-intensive work such as grape harvesting. In this work we propose a semi-automatic method to aid fruit harvesting and hence increase productivity per man-hour. A robotic arm fixed to a rover roams the orchard, and the user controls it remotely using a hand glove fitted with various sensors; these sensors position the robotic arm remotely to harvest the fruit. In this paper we discuss the design of the sensor-fitted hand glove, the design of the 4 DoF robotic arm, and the wireless control interface. The setup of the system and its testing and evaluation under lab conditions are also presented.
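A minimal sketch of how a glove's flex-sensor readings might be mapped to servo angles for such an arm. The ADC range and the one-sensor-per-joint layout are assumptions for illustration, not the paper's calibration.

```python
def adc_to_servo_angle(raw, raw_min=120, raw_max=900,
                       angle_min=0.0, angle_max=180.0):
    """Linearly map a flex-sensor ADC reading onto a servo angle,
    clamping readings outside the calibrated range."""
    raw = max(raw_min, min(raw_max, raw))
    frac = (raw - raw_min) / (raw_max - raw_min)
    return angle_min + frac * (angle_max - angle_min)

# One glove sensor per joint of a 4 DoF arm (readings are made up).
joint_angles = [adc_to_servo_angle(r) for r in (120, 510, 900, 315)]
```

The resulting angles would then be sent over the wireless link as servo commands; the linear map is the simplest choice, and a per-user calibration of `raw_min`/`raw_max` is usually needed in practice.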
2014-05-21
CAPE CANAVERAL, Fla. – Team members from the University of Florida in Gainesville prepare their robot for the mining portion of NASA's 2014 Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 35 teams from around the U.S. have designed and built remote-controlled robots for the mining competition. The competition is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and mathematics, or STEM, fields by expanding opportunities for student research and design. Teams use their remote-controlled robotics to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to Martian soil. The objective of the challenge is to see which team’s robot can collect and move the most regolith within a specified amount of time. For more information, visit www.nasa.gov/nasarmc. Photo credit: NASA/Frankie Martin
2014-05-21
CAPE CANAVERAL, Fla. – The Hawai'i Marsbot team members from Kapi'olani Community College in Hawaii prepare their robot for the mining portion of NASA's 2014 Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Frankie Martin
2014-05-22
CAPE CANAVERAL, Fla. – College and university teams prepare their robots for the mining portion of NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Kim Shiflett
2014-05-23
CAPE CANAVERAL, Fla. -- The University of North Dakota's robotic miner digs in the simulated Martian soil in the Caterpillar Mining Arena on the final day of NASA's 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Kim Shiflett
2014-05-21
CAPE CANAVERAL, Fla. – Competition judges monitor the progress of a robot digging in the simulated Martian soil in the Caterpillar Mining Arena during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Kim Shiflett
2014-05-21
CAPE CANAVERAL, Fla. – A robot digs in the simulated Martian soil in the Caterpillar Mining Arena during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Kim Shiflett
2014-05-23
CAPE CANAVERAL, Fla. -- Team members prepare their robot to dig in simulated Martian soil in the Caterpillar Mining Arena on the final day of NASA's 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Kim Shiflett
2014-05-22
CAPE CANAVERAL, Fla. – Competition judges monitor two teams' robots digging in the simulated Martian soil in the Caterpillar Mining Arena during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Kim Shiflett
2014-05-22
CAPE CANAVERAL, Fla. – Team members check their robot before the start of a mining session in simulated Martian soil in the Caterpillar Mining Arena during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Kim Shiflett
2014-05-21
CAPE CANAVERAL, Fla. – Team members from the University of Alabama prepare their robot for the mining portion of NASA's 2014 Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Frankie Martin
2014-05-21
CAPE CANAVERAL, Fla. – Team members from the University of North Dakota prepare their robot for the mining portion of NASA's 2014 Robotics Mining Competition at the Kennedy Space Center Visitor Complex in Florida. Photo credit: NASA/Frankie Martin
Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.
Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel
2016-05-25
We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.
Watching elderly and disabled person's physical condition by remotely controlled monorail robot
NASA Astrophysics Data System (ADS)
Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru
2001-10-01
We are developing a nursing-support system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot that moves around a room and watches the elderly. Elderly people at home or in nursing homes require attention at all times, which places a constant burden on care staff; the purpose of our system is to assist those staff. A host computer directs the monorail robot to a position in front of the person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot captures the person's facial expression and movements, and the robot sends these images to the host computer, which checks whether anything unusual has happened. We propose a simple calibration method for positioning the monorail robot so that it tracks the person's movements and keeps the face at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.
Low computation vision-based navigation for a Martian rover
NASA Technical Reports Server (NTRS)
Gavin, Andrew S.; Brooks, Rodney A.
1994-01-01
Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
Developing Autonomous Vehicles That Learn to Navigate by Mimicking Human Behavior
2006-09-28
Final report, 9/28/2006, by Dean B. Edwards, Department … . Subject terms: autonomous vehicles, fuzzy logic, learning behavior. The report concerns developing autonomous vehicles that learn, by mimicking human behavior, to navigate in an unstructured environment to a specific target or location. Long-term goals: use LAGR (Learning Applied to Ground Robots) …
Three-dimensional motor schema based navigation
NASA Technical Reports Server (NTRS)
Arkin, Ronald C.
1989-01-01
Reactive schema-based navigation is possible in space domains by extending the methods developed for ground-based navigation found within the Autonomous Robot Architecture (AuRA). Reformulation of two dimensional motor schemas for three dimensional applications is a straightforward process. The manifold advantages of schema-based control persist, including modular development, amenability to distributed processing, and responsiveness to environmental sensing. Simulation results show the feasibility of this methodology for space docking operations in a cluttered work area.
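Schema-based control extends from two to three dimensions simply by summing three-component schema output vectors. The sketch below combines a move-to-goal and an avoid-obstacle schema by vector addition; the gains and linear falloff are chosen purely for illustration and are not AuRA's actual parameters.

```python
import math

def unit(v):
    """Unit vector (or the zero vector when v has zero length)."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v) if n > 0.0 else (0.0, 0.0, 0.0)

def move_to_goal(pos, goal, gain=1.0):
    """Attractive schema: fixed-magnitude vector pointing at the goal."""
    return tuple(gain * c for c in unit(tuple(g - p for g, p in zip(goal, pos))))

def avoid_obstacle(pos, obstacle, safety=1.0, gain=2.0):
    """Repulsive schema: pushes away from the obstacle, falling off
    linearly to zero at the safety radius."""
    away = tuple(p - o for p, o in zip(pos, obstacle))
    d = math.sqrt(sum(c * c for c in away))
    if d == 0.0 or d >= safety:
        return (0.0, 0.0, 0.0)
    mag = gain * (safety - d) / safety
    return tuple(mag * c for c in unit(away))

def combined(pos, goal, obstacles):
    """Schema outputs are summed; the sum is the commanded velocity."""
    vec = list(move_to_goal(pos, goal))
    for ob in obstacles:
        for i, c in enumerate(avoid_obstacle(pos, ob)):
            vec[i] += c
    return tuple(vec)
```

Because each schema is an independent vector field, new behaviors (e.g. docking alignment) can be added as extra terms in the sum, which is the modularity the abstract highlights.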
ERIC Educational Resources Information Center
Buckland, Miram R.
1985-01-01
Sixth graders built working "robots" (or grasping bars) for remote control use during a unit on simple mechanics. Steps for making a robot are presented, including: cutting the wood, drilling and nailing, assembling the jaws, and making them work. The "jaws," used to pick up objects, illustrate principles of levers. (DH)
Swerdlow, Daniel R; Cleary, Kevin; Wilson, Emmanuel; Azizi-Koutenaei, Bamshad; Monfaredi, Reza
2017-04-01
Ultrasound imaging requires trained personnel. Advances in robotics and data transmission create the possibility of telesonography. This review introduces clinicians to current technical work in and potential applications of this developing capability. Telesonography offers advantages in hazardous or remote environments. Robotically assisted ultrasound can reduce stress injuries in sonographers and has potential utility during robotic surgery and interventional procedures.
Designing a Microhydraulically-driven Mini-robotic Squid
2016-05-20
Applications for microrobots include remote monitoring, surveillance, search and rescue, nanoassembly, medicine, and in-vivo surgery. Thesis front matter: "Designing a Microhydraulically-driven Mini-robotic Squid," by Kevin Dehan Meng (B.S., U.S. Air …), submitted to the Department …; work sponsored under the … Secretary of Defense for Research and Engineering.
Hadfield works robotic controls in the Cupola Module
2013-01-10
ISS034-E-027317 (10 Jan. 2013) --- In the Cupola aboard the Earth-orbiting International Space Station, Canadian Space Agency astronaut Chris Hadfield, Expedition 34 flight engineer, works the controls at the Robotic workstation to maneuver the Space Station Remote Manipulator System (SSRMS) or CanadArm2 from its parked position to grapple the Mobile Remote Servicer (MRS) Base System (MBS) Power and Data Grapple Fixture 4 (PDGF-4).
Immune systems are not just for making you feel better: they are for controlling autonomous robots
NASA Astrophysics Data System (ADS)
Rosenblum, Mark
2005-05-01
The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.
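One classical AIS ingredient relevant to the terrain-labeling task described above is negative selection: candidate detectors that match benign ("self") training samples are discarded, and the survivors flag detrimental inputs. The feature dimensionality, matching radius, and uniform detector generation below are illustrative assumptions, not the paper's implementation.

```python
import random

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_detectors(self_samples, n_detectors, radius, dim=3, seed=0):
    """Negative selection: generate random detectors in feature space and
    keep only those matching no 'self' (benign) sample within radius."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.random() for _ in range(dim))
        if all(dist(cand, s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_detrimental(sample, detectors, radius):
    """Flag a feature vector as detrimental if any detector covers it."""
    return any(dist(sample, d) <= radius for d in detectors)

# Benign-terrain training vectors (made-up 3-D features, e.g. color/texture).
benign = [(0.2, 0.2, 0.2), (0.25, 0.18, 0.22)]
detectors = train_detectors(benign, n_detectors=50, radius=0.2)
```

Adaptation then consists of adding newly encountered benign samples to the self set and re-censoring the detectors, which mirrors the learn-through-interaction behavior the abstract describes.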
From ships to robots: The social relations of sensing the world ocean.
Lehman, Jessica
2018-02-01
The dominant practices of physical oceanography have recently shifted from being based on ship-based ocean sampling and sensing to being based on remote and robotic sensing using satellites, drifting floats and remotely operated and autonomous underwater vehicles. What are the implications of this change for the social relations of oceanographic science? This paper contributes to efforts to address this question, pursuing a situated view of ocean sensing technologies so as to contextualize and analyze new representations of the sea, and interactions between individual scientists, technologies and the ocean. By taking a broad view on oceanography through a 50-year shift from ship-based to remote and robotic sensing, I show the ways in which new technologies may provide an opportunity to fight what Oreskes has called 'ideologies of scientific heroism'. In particular, new sensing relations may emphasize the contributions of women and scientists from less well-funded institutions, as well as the ways in which oceanographic knowledge is always partial and dependent on interactions between nonhuman animals, technologies, and different humans. Thus, I argue that remote and robotic sensing technologies do not simply create more abstracted relations between scientists and the sea, but also may provide opportunities for more equitable scientific practice and refigured sensing relations.
The new era of robotic neck surgery: The universal application of the retroauricular approach.
Byeon, Hyung Kwon; Koh, Yoon Woo
2015-12-01
Recent advances in technology have triggered the introduction of surgical robotics into the field of head and neck surgery and irreversibly changed its landscape. The advent of transoral robotic surgery and robotic thyroidectomy techniques has encouraged extended application of the robot to other neck surgeries, including remote-access surgeries. Based on earlier reports and our surgical experience, this review discusses in detail various robotic head and neck surgeries performed via the retroauricular approach. © 2015 Wiley Periodicals, Inc.
Open core control software for surgical robots
Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B.; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo
2010-01-01
Object In these days, patients and doctors in operation room are surrounded by many medical devices as resulting from recent advancement of medical technology. However, these cutting-edge medical devices are working independently and not collaborating with each other, even though the collaborations between these devices such as navigation systems and medical imaging devices are becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). On the other hand, several surgical robots have been commercialized, and are becoming common. However, these surgical robots are not open for collaborations with external medical devices in these days. A cutting-edge “intelligent surgical robot” will be possible in collaborating with surgical robots, various kinds of sensors, navigation system and so on. On the other hand, most of the academic software developments for surgical robots are “home-made” in their research institutions and not open to the public. Therefore, open source control software for surgical robots can be beneficial in this field. From these perspectives, we developed Open Core Control software for surgical robots to overcome these challenges. Materials and methods In general, control softwares have hardware dependencies based on actuators, sensors and various kinds of internal devices. Therefore, these control softwares cannot be used on different types of robots without modifications. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting hardware dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. The OpenIGTLink is adopted in Interface class which plays a role to communicate with external medical devices. At the same time, it is essential to maintain the stable operation within the asynchronous data transactions through network. 
In the Open Core Control software, several techniques for this purpose were introduced. Virtual fixture is well known technique as a “force guide” for supporting operators to perform precise manipulation by using a master–slave robot. The virtual fixture for precise and safety surgery was implemented on the system to demonstrate an idea of high-level collaboration between a surgical robot and a navigation system. The extension of virtual fixture is not a part of the Open Core Control system, however, the function such as virtual fixture cannot be realized without a tight collaboration between cutting-edge medical devices. By using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates the reflection force when the operator tries to get out from the pre-defined accessible area during surgery. Results The Open Core Control software was implemented on a surgical master–slave robot and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot with a 3D position sensor through the OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a “force guide” on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. Conclusion In this paper, a design of the Open Core Control software for surgical robots and the implementation of virtual fixture were described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration works. The Open Core Control software is developed to be a widely used platform of surgical robots. Safety issues are essential for control software of these complex medical devices. 
It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" and IEC 62304. Complying with these regulations requires a self-test capability, so a test environment is now under development to evaluate various kinds of interference in the operating room, such as noise from an electric knife, taking into account safety standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make a contribution. PMID:20033506
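The reflection-force behavior of a virtual fixture can be sketched as a simple spring model. This is a minimal illustration under stated assumptions, not the paper's implementation: the accessible area is taken to be a sphere, and the function name, stiffness value, and geometry are all hypothetical.

```python
import math

def virtual_fixture_force(tip, center, radius, stiffness=50.0):
    """Spring-like reflection force pushing the tool tip back inside a
    spherical accessible region; zero force while the tip stays inside."""
    dx = [t - c for t, c in zip(tip, center)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist <= radius or dist == 0.0:
        return (0.0, 0.0, 0.0)   # inside the pre-defined area: no guidance force
    depth = dist - radius        # penetration beyond the boundary
    return tuple(-stiffness * depth * d / dist for d in dx)

# Tip 2 mm outside a 10 mm-radius region along +x: force points back along -x
f = virtual_fixture_force((12.0, 0.0, 0.0), (0.0, 0.0, 0.0), 10.0)
```

In a real system the region would come from the navigation workstation over OpenIGTLink and the force would be rendered on the master console.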
Learning classifier systems for single and multiple mobile robots in unstructured environments
NASA Astrophysics Data System (ADS)
Bay, John S.
1995-12-01
The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture; i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world-model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.
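The match-bid-act cycle of a classifier system, with the message board rebuilt from scratch at each time step, can be sketched as follows. This is a minimal illustrative sketch, not the paper's controller: the ternary rule encoding, roulette-wheel winner selection, and all names are assumptions.

```python
import random

def matches(condition, message):
    """Ternary condition match: '#' is a wildcard, other chars must agree."""
    return all(c in ('#', m) for c, m in zip(condition, message))

def lcs_step(classifiers, detector_message, rng=random.Random(0)):
    """One match-bid-act cycle of a minimal LCS. The message board is
    rebuilt each tick, so no persistent shared resource is needed."""
    matched = [c for c in classifiers if matches(c['cond'], detector_message)]
    if not matched:
        return None, []                     # a discovery mechanism would fire here
    # Winner chosen by bid proportional to strength (roulette wheel)
    total = sum(c['strength'] for c in matched)
    pick, acc = rng.uniform(0, total), 0.0
    for c in matched:
        acc += c['strength']
        if acc >= pick:
            winner = c
            break
    board = [winner['action']]              # fresh board: only this tick's message
    return winner, board

rules = [{'cond': '1#0', 'action': 'turn_left',  'strength': 2.0},
         {'cond': '11#', 'action': 'go_forward', 'strength': 1.0}]
winner, board = lcs_step(rules, '100')
```

Here only the first rule matches the detector message `'100'`, so it posts `'turn_left'` to the (freshly wiped) board.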
Adaptive Tracking Control for Robots With an Interneural Computing Scheme.
Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang
2018-04-01
Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots navigate using laser scanners and ultrasonic sensors along with vision cameras. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to steering directions in a supervised mode. The images in the data sets were collected under a wide variety of weather and lighting conditions. In addition, the data sets were augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm was verified in two experiments: line tracking and obstacle avoidance. The line tracking experiment was conducted to track a desired path composed of straight and curved lines, while the goal of the obstacle avoidance experiment was to avoid obstacles indoors. We obtained a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than 5% error rate on the test set in the obstacle avoidance experiment. During the actual tests, the robot followed the runway centerline outdoors and accurately avoided the obstacle in the room. These results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
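The noise-based augmentation mentioned in the abstract (Gaussian and salt-and-pepper noise) can be sketched on a flat grayscale image. This is a generic illustration, not the paper's pipeline; the noise levels, seeds, and function names are hypothetical.

```python
import random

def add_gaussian_noise(image, sigma=10.0, rng=random.Random(42)):
    """Per-pixel Gaussian noise, clipped to the 8-bit range [0, 255]."""
    return [min(255, max(0, int(round(p + rng.gauss(0.0, sigma))))) for p in image]

def add_salt_and_pepper(image, amount=0.05, rng=random.Random(42)):
    """Flip roughly `amount` of the pixels to pure black (0) or white (255)."""
    out = list(image)
    for i in range(len(out)):
        r = rng.random()
        if r < amount / 2:
            out[i] = 0          # pepper
        elif r < amount:
            out[i] = 255        # salt
    return out

clean = [128] * 100             # a flat 10x10 gray patch, flattened
noisy = add_gaussian_noise(clean)
speckled = add_salt_and_pepper(clean)
```

Each training image would be passed through such transforms (with fresh random draws) to enlarge the data set and discourage overfitting.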
Control of autonomous robot using neural networks
NASA Astrophysics Data System (ADS)
Barton, Adam; Volna, Eva
2017-07-01
The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.
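Filtering a binary training set with ART1-style clustering, as the article describes, can be sketched in simplified form. This sketch keeps only the vigilance test and fast (elementwise-AND) learning, omitting ART1's choice function and resonance search; the vigilance value and pattern set are hypothetical.

```python
def art1_cluster(patterns, vigilance=0.7):
    """Greedy ART1-style clustering of binary patterns (simplified sketch).
    A pattern joins the first prototype that passes the vigilance test,
    which is then updated by elementwise AND; otherwise a new category opens."""
    prototypes, labels = [], []
    for p in patterns:
        placed = False
        for k, w in enumerate(prototypes):
            overlap = sum(a & b for a, b in zip(p, w))
            if overlap / max(1, sum(p)) >= vigilance:          # vigilance test
                prototypes[k] = [a & b for a, b in zip(p, w)]  # fast learning
                labels.append(k)
                placed = True
                break
        if not placed:
            prototypes.append(list(p))
            labels.append(len(prototypes) - 1)
    return labels, prototypes

# With vigilance 0.6 the first two patterns merge; the third opens a category
labels, protos = art1_cluster([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], vigilance=0.6)
```

Raising the vigilance parameter yields finer categories, which is how such a network can filter a training set down to representative situations.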
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Sample Return Robot Challenge staff members confer before the Team Survey robot makes its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
A robot from the University of Waterloo Robotics Team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Remote Navigation for Complex Arrhythmia
Suman-Horduna, Irina; Babu-Narayan, Sonya V; Ernst, Sabine
2013-01-01
Magnetic navigation was established about a decade ago as an alternative to conventional manual catheter navigation for invasive electrophysiology interventions. Besides the obvious advantage of radiation protection for the operator, who is positioned remotely from the patient, there are additional benefits in steering the tip of a very floppy catheter. This manuscript reviews the published evidence, from simple arrhythmias in patients with normal cardiac anatomy to the most complex congenital heart disease. This progress was made possible by the introduction of improved catheters and, most importantly, irrigated-tip electrodes. PMID:26835041
OzBot and haptics: remote surveillance to physical presence
NASA Astrophysics Data System (ADS)
Mullins, James; Fielding, Mick; Nahavandi, Saeid
2009-05-01
This paper reports on robotic and haptic technologies and capabilities developed for the law enforcement and defence community within Australia by the Centre for Intelligent Systems Research (CISR). The OzBot series of small and medium surveillance robots have been designed in Australia and evaluated by law enforcement and defence personnel to determine suitability and ruggedness in a variety of environments. Using custom developed digital electronics and featuring expandable data busses including RS485, I2C, RS232, video and Ethernet, the robots can be directly connected to many off-the-shelf payloads such as gas sensors, x-ray sources and camera systems including thermal and night vision. Differentiating the OzBot platform from its peers is its ability to be integrated directly with haptic technology or the 'haptic bubble' developed by CISR. Haptic interfaces allow an operator to physically 'feel' remote environments through position-force control and experience realistic force feedback. By adding the capability to remotely grasp an object, feel its weight, texture and other physical properties in real-time from the remote ground control unit, an operator's situational awareness is greatly improved through haptic augmentation in an environment where remote-system feedback is often limited.
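Position-force control of the kind used for such haptic feedback is commonly a spring-damper coupling between master and slave. This is a generic sketch, not CISR's implementation; the gains and scenario are hypothetical.

```python
def haptic_feedback_force(master_pos, slave_pos, master_vel, slave_vel,
                          k=200.0, b=5.0):
    """Spring-damper coupling: the operator feels a force proportional to
    the master/slave position error, damped by their velocity difference."""
    return -k * (master_pos - slave_pos) - b * (master_vel - slave_vel)

# Slave gripper blocked by an object 1 cm behind the master's position:
# the operator feels a 2 N force opposing further motion.
f = haptic_feedback_force(master_pos=0.05, slave_pos=0.04,
                          master_vel=0.0, slave_vel=0.0)
```

When the slave is blocked by a rigid object, the growing position error produces a restoring force at the master, which is what lets the operator "feel" the remote contact.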
From self-assessment to frustration, a small step toward autonomy in robotic navigation
Jauffret, Adrien; Cuperlier, Nicolas; Tarroux, Philippe; Gaussier, Philippe
2013-01-01
Autonomy and self-improvement capabilities remain challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate wide, unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedback on the quality of behavior from a given fitness system in order to make correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek human interaction to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an online novelty-detection algorithm, that can self-evaluate any sensory-motor strategy. This architecture learns contingencies between sensations and actions, predicting the expected sensation from the previous perception. The prediction error arising from surprising events provides a measure of the quality of the underlying sensory-motor contingencies. We show how a simple second-order controller (emotional system) based on prediction progress allows the system to regulate its behavior to solve complex navigation tasks, and also to ask for help when it detects deadlock situations. We propose that this model could be a key structure toward self-assessment and autonomy. Several experiments demonstrate these properties for two different strategies (road following and place-cell-based navigation) in different situations. PMID:24115931
Resources for Underwater Robotics Education
ERIC Educational Resources Information Center
Wallace, Michael L.; Freitas, William M.
2016-01-01
4-H clubs can build and program underwater robots from raw materials. An annotated resource list for engaging youth in building underwater remotely operated vehicles (ROVs) is provided. This article is a companion piece to the Research in Brief article "Building Teen Futures with Underwater Robotics" in this issue of the "Journal of…
DOT National Transportation Integrated Search
2005-01-01
This report presents the results of a project to finalize and apply a crawling robotic system for the remote visual inspection of high-mast light poles. The first part of the project focused on finalizing the prototype crawler robot hardware and cont...
Cardiac ultrasonography over 4G wireless networks using a tele-operated robot
Panayides, Andreas S.; Jossif, Antonis P.; Christoforou, Eftychios G.; Vieyres, Pierre; Novales, Cyril; Voskarides, Sotos; Pattichis, Constantinos S.
2016-01-01
This Letter proposes an end-to-end mobile tele-echography platform using a portable robot for remote cardiac ultrasonography. Performance evaluation investigates the capacity of long-term evolution (LTE) wireless networks to facilitate responsive robot tele-manipulation and real-time ultrasound video streaming that qualifies for clinical practice. Within this context, a thorough video coding standards comparison for cardiac ultrasound applications is performed, using a data set of ten ultrasound videos. Both objective and subjective (clinical) video quality assessment demonstrate that H.264/AVC and high efficiency video coding standards can achieve diagnostically-lossless video quality at bitrates well within the LTE supported data rates. Most importantly, reduced latencies experienced throughout the live tele-echography sessions allow the medical expert to remotely operate the robot in a responsive manner, using the wirelessly communicated cardiac ultrasound video to reach a diagnosis. Based on preliminary results documented in this Letter, the proposed robotised tele-echography platform can provide for reliable, remote diagnosis, achieving comparable quality of experience levels with in-hospital ultrasound examinations. PMID:27733929
Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case
Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique
2017-01-01
This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
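The idea of requesting measurements only when an error-covariance bound is exceeded can be sketched in one dimension. This is a toy model, not the paper's unscented-transformation-based estimator: variance grows linearly under prediction, a measurement is fused with a scalar Kalman update, and all parameter values are hypothetical.

```python
def event_based_estimation(q, r, bound, steps):
    """1-D sketch of event-based state estimation: the estimate's variance
    grows by process noise q each step; a measurement (noise variance r) is
    requested only when the predicted variance exceeds `bound`."""
    p = 0.0
    requests = []
    for k in range(steps):
        p += q                      # prediction step inflates uncertainty
        if p > bound:               # event: ask the external sensor to measure
            requests.append(k)
            p = p * r / (p + r)     # posterior variance after the Kalman update
    return requests, p

# Only 3 of 10 steps trigger a communication access
requests, p_final = event_based_estimation(q=1.0, r=0.5, bound=3.0, steps=10)
```

The bound directly trades tracking accuracy against how often the camera sensor must be queried, which is the communication reduction the paper reports.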
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team KuuKulgur watches as their robots attempt the level one competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Retrievers team robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
DOE Robotic and Remote Systems Assistance to the Government of Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derek Wadsworth; Victor Walker
At the request of the Government of Japan, DOE conducted a complex-wide survey of available remotely operated and robotic systems to assist in the initial assessment of the damage to the Fukushima Daiichi reactors following the earthquake and subsequent tsunami. As a result, several radiation-hardened cameras and a Talon robot were identified as systems that could immediately assist in the effort and were subsequently sent to Japan. These systems were transferred to the Government of Japan and used to map radiation levels surrounding the damaged facilities. This report describes the equipment, its use, the data collected, and lessons learned from the experience.
Ciaramelli, Elisa; Rosenbaum, R Shayna; Solcz, Stephanie; Levine, Brian; Moscovitch, Morris
2010-05-01
The ability to navigate in a familiar environment depends on both an intact mental representation of allocentric spatial information and the integrity of systems supporting complementary egocentric representations. Although the hippocampus has been implicated in learning new allocentric spatial information, converging evidence suggests that the posterior parietal cortex (PPC) might support egocentric representations. To date, however, few studies have examined long-standing egocentric representations of environments learned long ago. Here we tested 7 patients with focal lesions in PPC and 12 normal controls in remote spatial memory tasks, including 2 tasks reportedly reliant on allocentric representations (distance and proximity judgments) and 2 tasks reportedly reliant on egocentric representations (landmark sequencing and route navigation; see Rosenbaum, Ziegler, Winocur, Grady, & Moscovitch, 2004). Patients were unimpaired in distance and proximity judgments. In contrast, they all failed in route navigation, and left-lesioned patients also showed marginally impaired performance in landmark sequencing. Patients' subjective experience associated with navigation was impoverished and disembodied compared with that of the controls. These results suggest that PPC is crucial for accessing remote spatial memories within an egocentric reference frame that enables both navigation and reexperiencing. Additionally, PPC was found to be necessary to implement specific aspects of allocentric navigation with high demands on spontaneous retrieval. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Kumamoto, Etsuko; Takahashi, Akihiro; Matsuoka, Yuichiro; Morita, Yoshinori; Kutsumi, Hiromu; Azuma, Takeshi; Kuroda, Kagayaki
2013-01-01
The MR-endoscope system performs magnetic resonance (MR) imaging during endoscopy and displays both the endoscopic and MR images. It can acquire high-spatial-resolution MR images with an intraluminal radiofrequency (RF) coil, and its navigation system shows the scope's location and orientation inside the human body and presents MR images from the scope's point of view. To perform an endoscopic MR procedure conveniently, the design of the user interface is very important because it provides useful information. In this study, we propose a navigation system using a wireless accelerometer-based controller with Bluetooth technology, together with a navigation technique for positioning the intraluminal RF coil using this system. The feasibility of using the wireless controller in the MR shielded room was validated via phantom examinations of its influence on MR procedures and of navigation accuracy. In vitro examinations using an isolated porcine stomach demonstrated the effectiveness of the navigation technique using the wireless remote-control device.
NASA Technical Reports Server (NTRS)
Agah, Arvin; Bekey, George A.
1994-01-01
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
A design strategy for autonomous systems
NASA Technical Reports Server (NTRS)
Forster, Pete
1989-01-01
Some solutions are identified to crucial issues in the competent performance of an autonomously operating robot: handling multiple and variable data sources containing overlapping information, and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is drawn from speculations in cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robotics research laboratories. The validity of these ideas is supported by simple simulation experiments in mobile robot navigation and guidance.
Navigation through unknown and dynamic open spaces using topological notions
NASA Astrophysics Data System (ADS)
Miguel-Tomé, Sergio
2018-04-01
Until now, most algorithms used for navigation have had the purpose of directing a system toward one point in space. However, humans communicate tasks by specifying spatial relations among elements or places. In addition, the environments in which humans develop their activities are extremely dynamic. The only option that allows for successful navigation in dynamic and unknown environments is making real-time decisions. Therefore, robots capable of collaborating closely with human beings must be able to make decisions based on the local information registered by the sensors and to interpret and express spatial relations. Furthermore, when a person is asked to perform a task in an environment, the task is communicated as a category of goals, so the person does not need to be supervised. Thus, two problems appear when one wants to create multifunctional robots: how to navigate in dynamic and unknown environments using spatial relations, and how to accomplish this without supervision. In this article, a new architecture to address these two problems is presented, called the topological qualitative navigation architecture. In previous works, a qualitative heuristic called the heuristic of topological qualitative semantics (HTQS) has been developed to establish and identify spatial relations. However, that heuristic only allows for establishing one spatial relation with a specific object. In contrast, navigation requires a temporal sequence of goals with different objects. The new architecture attains continuous generation of goals and resolves them using HTQS. Thus, the new architecture achieves autonomous navigation in dynamic or unknown open environments.
Using robotic telecommunications to triage pediatric disaster victims.
Burke, Rita V; Berg, Bridget M; Vee, Paul; Morton, Inge; Nager, Alan; Neches, Robert; Wetzel, Randall; Upperman, Jeffrey S
2012-01-01
During a disaster, hospitals may be overwhelmed and have an insufficient number of pediatric specialists available to care for injured children. The aim of this study was to determine the feasibility of remotely providing pediatric expertise via a robot to treat pediatric victims. In 2008, Los Angeles County held 2 drills involving telemedicine. The first was the Tri-Hospital drill in which 3 Los Angeles County hospitals, one being a pediatric hospital, participated. The disaster scenario involved a Metrolink train crash, resulting in a large surge of traumatic injuries. The second drill involved multiple agencies and was called the Great California Shakeout, a simulated earthquake exercise. The telemedicine equipment installed is an InTouch Health, Inc, Santa Barbara, CA robotic telecommunications system. We used mixed-methods to evaluate the use of telemedicine during these drills. Pediatric specialists successfully provided remote triage and treatment consults of victims via the robot. The robot proved to be a useful means to extend resources and provide expert consult if pediatric specialists were unable to physically be at the site. Telemedicine can be used in the delayed treatment areas as well as for training first receivers to collaborate with specialists in remote locations to triage and treat seriously injured pediatric victims. Copyright © 2012 Elsevier Inc. All rights reserved.
Yoo, Jeong-Ki; Kim, Jong-Hwan
2012-02-01
When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with the modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for criteria, fuzzy integral is applied to each candidate gaze direction for global evaluation. For the effective dynamic obstacle avoidance, partial evaluation functions about self-localization error and surrounding obstacles are also used for generating virtual dynamic obstacle for the modified-univector field method which generates the path and velocity of robot toward the next waypoint. The proposed architecture is verified through the comparison with the conventional weighted sum-based approach with the simulations using a developed simulator for HanSaRam-IX (HSR-IX).
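The fuzzy-integral scoring of candidate gaze directions over several criteria can be sketched with the Sugeno integral. The criteria, scores, and fuzzy measure below are hypothetical placeholders (the paper uses four criteria; three are shown for brevity), not the values used for HanSaRam-IX.

```python
def sugeno_integral(scores, measure):
    """Sugeno fuzzy integral of criterion scores w.r.t. a fuzzy measure.
    `scores` maps criterion -> partial evaluation in [0, 1]; `measure`
    maps a frozenset of criteria -> degree of consideration in [0, 1]."""
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, subset = 0.0, set()
    for crit, h in items:            # walk the nested level sets
        subset.add(crit)
        best = max(best, min(h, measure[frozenset(subset)]))
    return best

# Hypothetical partial evaluations for one candidate gaze direction
scores = {'map_confidence': 0.9, 'waypoint': 0.6, 'obstacle': 0.3}
measure = {frozenset({'map_confidence'}): 0.4,
           frozenset({'map_confidence', 'waypoint'}): 0.7,
           frozenset({'map_confidence', 'waypoint', 'obstacle'}): 1.0}
value = sugeno_integral(scores, measure)
```

Unlike a weighted sum, the fuzzy measure can reward or penalize *combinations* of criteria, which is the advantage the paper claims over the conventional weighted-sum approach.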
Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan
2015-05-13
This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mice has been adopted for mobile robots because it is not affected by wheel slippage. Variation in vertical height is thought to be a dominant source of systematic error when estimating distances travelled by mobile robots on uneven surfaces. We propose an approach that mitigates this error by using an afocal (infinite effective focal length) system. We conducted experiments on a linear guide over carpet and three other materials, with sensor heights varying from 30 to 50 mm and a moving distance of 80 cm; each experiment was repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on carpet over distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.
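The height sensitivity the paper addresses can be illustrated with a toy magnification model: a fixed-focal-length sensor calibrated at one height misestimates distance when the actual height changes, while an ideal afocal system does not. The inverse-height scaling and all numbers below are illustrative assumptions, not the paper's optics.

```python
def fixed_focal_distance(true_dist_mm, height_mm, calib_height_mm=40.0):
    """Toy model: image-plane displacement scales inversely with surface
    height, so a sensor calibrated at `calib_height_mm` over- or
    under-estimates distance when the actual height differs."""
    return true_dist_mm * calib_height_mm / height_mm

def afocal_distance(true_dist_mm, height_mm):
    """Toy model of the afocal design: an infinite effective focal length
    makes the measured displacement independent of sensor height."""
    return true_dist_mm

# Relative error for an 800 mm run with the sensor 1 mm above calibration
err_fixed = abs(fixed_focal_distance(800.0, 41.0) / 800.0 - 1.0)
err_afocal = abs(afocal_distance(800.0, 41.0) / 800.0 - 1.0)
```

In this toy model a 1 mm height change produces a roughly 2.4% error for the fixed-focal sensor and none for the afocal one; the paper's measured figures differ because real optics are more complex.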
Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles.
Consequently, it can successfully explore and navigate in complex environments. We firstly tested our approach on a physical simulation environment and then applied it to our real biomechanical walking robot AMOSII with 19 DOFs to adaptively avoid obstacles and navigate in the real world.
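The hysteresis effect exploited for turning control can be illustrated with a single sigmoidal neuron with strong self-excitation, a further simplification of the two-neuron recurrent network described above. Sweeping the input up and then back down yields two different stable outputs at the same input value; the weights and bias below are hypothetical.

```python
import math

def step(o, inp, w_self=5.0, bias=-2.5):
    """One discrete-time update of a neuron with strong self-excitation;
    the strong self-weight creates two coexisting stable states."""
    return 1.0 / (1.0 + math.exp(-(w_self * o + inp + bias)))

def settle(inp, o, iters=200):
    """Iterate the neuron at a fixed input until the output settles."""
    for _ in range(iters):
        o = step(o, inp)
    return o

# Sweep the input up, then back down: the neuron switches on at a higher
# input than the one at which it switches off again (a hysteresis loop).
up, down = [], []
o = 0.0
for x in [i * 0.5 for i in range(-8, 9)]:       # -4.0 .. 4.0
    o = settle(x, o)
    up.append((x, o))
for x in [i * 0.5 for i in range(8, -9, -1)]:   # 4.0 .. -4.0
    o = settle(x, o)
    down.append((x, o))
```

Used as a steering signal, this short-term memory keeps the robot turning in the same direction after an obstacle cue disappears, which is how the network avoids oscillating in corners.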
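The two-neuron recurrent network at the heart of the sensory processing above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the logistic activation, the specific weight values, and the sensor encoding are all assumptions chosen only to exhibit the hysteresis (short-term memory) effect the abstract describes.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TwoNeuronNetwork:
    """Minimal discrete-time recurrent network of two fully connected
    neurons. Strong self-excitation plus cross-inhibition creates
    hysteresis: the outputs depend on the input history, giving the
    network a short-term memory of recent sensory events (here, which
    side an obstacle was detected on)."""

    def __init__(self, w_self=5.0, w_cross=-3.5):
        # Full 2x2 recurrent weight matrix (values are illustrative).
        self.w = [[w_self, w_cross],
                  [w_cross, w_self]]
        self.o = [0.0, 0.0]  # neuron outputs

    def step(self, inputs):
        # inputs: e.g. left/right obstacle-sensor signals
        a = [sum(self.w[i][j] * self.o[j] for j in range(2)) + inputs[i]
             for i in range(2)]
        self.o = [sigmoid(ai) for ai in a]
        return self.o

net = TwoNeuronNetwork()
# Drive neuron 0 with a brief stimulus, then remove it; the output
# stays high afterwards -- hysteresis acting as short-term memory.
for _ in range(20):
    net.step([3.0, 0.0])
high = net.o[0]          # output while stimulated
for _ in range(5):
    net.step([0.0, 0.0])
after = net.o[0]         # output persists after the stimulus ends
```

A brief stimulus to one neuron switches the pair into a winner-take-all state that persists after the stimulus is removed; read out as a steering signal, this persistence lets the robot keep turning away from an obstacle for a while after the sensor stops firing, which is how such networks avoid getting trapped in corners.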
Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.
Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell
2011-06-01
This paper presents the design of a tele-robotic microsurgical platform for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces, and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.
Semiautonomous teleoperation system with vision guidance
NASA Astrophysics Data System (ADS)
Yu, Wai; Pretlove, John R. G.
1998-12-01
This paper describes ongoing research on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. Because manual control of remote robots by human operators suffers from reduced performance and difficulty in perceiving information from the remote site, a system with a certain level of intelligence and autonomy can help to solve some of these problems. This system has been developed for that purpose. It also serves as an experimental platform for testing the idea of combining human and computer intelligence in teleoperation and for finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system, and a graphical user interface that connects the operator to the remote robot. This paper gives a description of the system, along with preliminary experimental results from its evaluation.