Science.gov

Sample records for mobile robot speed

  1. Cooperating mobile robots

    DOEpatents

    Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.; Byrne, Raymond H.

    2004-02-03

    A miniature mobile robot provides a relatively inexpensive mobile robot. A mobile robot for searching an area provides a way for multiple mobile robots to search in cooperating teams. A robotic system with a team of mobile robots communicating information among each other provides a way to locate a source cooperatively. A mobile robot with a sensor, a communication system, and a processor provides a way to execute a strategy for searching an area.

  2. Guarded Motion for Mobile Robots

    SciTech Connect

    2005-03-30

    The Idaho National Laboratory (INL) has created codes that ensure that a robot will come to a stop at a precise, specified distance from any obstacle regardless of the robot's initial speed, its physical characteristics, and the responsiveness of the low-level motor control schema. This Guarded Motion for Mobile Robots system iteratively adjusts the robot's action in response to information about the robot's environment.
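
    The stopping guarantee described above amounts to capping the commanded speed at the fastest speed from which the robot can still decelerate to rest within the remaining clearance. Below is a minimal sketch of that kind of guarded-motion speed clamp; it is generic stopping-distance physics, not the INL code, and the standoff, deceleration, and command values are assumptions.

```python
import math

def guarded_speed(obstacle_dist_m, standoff_m=0.5, max_decel_mps2=1.0, v_cmd_mps=1.5):
    """Clamp a commanded speed so the robot can always stop `standoff_m`
    short of the nearest obstacle given a worst-case deceleration. Generic
    stopping-distance physics (v^2 = 2*a*d), not the INL code; the default
    standoff, deceleration, and command values are assumptions."""
    usable = max(obstacle_dist_m - standoff_m, 0.0)
    v_limit = math.sqrt(2.0 * max_decel_mps2 * usable)   # fastest stoppable speed
    return min(v_cmd_mps, v_limit)

# Obstacle 1.0 m ahead: the 1.5 m/s command is clamped to about 1.0 m/s;
# at the 0.5 m standoff the allowed speed is already zero.
print(guarded_speed(1.0), guarded_speed(0.5))
```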

  3. Tandem mobile robot system

    DOEpatents

    Buttz, James H.; Shirey, David L.; Hayward, David R.

    2003-01-01

    A robotic vehicle system for terrain navigation mobility provides a way to climb stairs, cross crevices, and navigate across difficult terrain by coupling two or more mobile robots with a coupling device and controlling the robots cooperatively in tandem.

  4. Teleautonomous guidance for mobile robots

    NASA Technical Reports Server (NTRS)

    Borenstein, J.; Koren, Y.

    1990-01-01

    Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
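
    For illustration, the sketch below implements the basic VFF idea from the abstract: occupied cells of a local certainty grid exert repulsive "forces" on the robot while the operator's prescribed direction exerts an attractive one, and the resultant sets the steering direction. It is a simplified reading of the method, not Borenstein and Koren's implementation; the grid layout, gains, and normalization are assumptions.

```python
import numpy as np

def vff_steering(certainty_grid, robot_xy, target_dir, f_rep=1.0, f_att=1.0):
    """Toy virtual force field (VFF) step: occupied cells of a local
    certainty grid push the robot away (magnitude ~ certainty / distance^2),
    while the operator's prescribed direction pulls it forward; the resultant
    gives the steering direction. Grid layout, gains, and normalization are
    assumptions, not Borenstein and Koren's implementation."""
    force = f_att * np.asarray(target_dir, dtype=float)
    for (cx, cy), certainty in np.ndenumerate(certainty_grid):
        if certainty <= 0:
            continue
        d = np.array([cx, cy], dtype=float) - np.asarray(robot_xy, dtype=float)
        dist = np.linalg.norm(d)
        if dist > 0:
            force -= f_rep * certainty * d / dist**3   # repulsion away from the cell
    return force / (np.linalg.norm(force) + 1e-9)      # unit steering direction
```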

  5. Analysis of the kinematic characteristics of a high-speed parallel robot with Schönflies motion: Mobility, kinematics, and singularity

    NASA Astrophysics Data System (ADS)

    Xie, Fugui; Liu, Xin-Jun

    2016-06-01

    This study introduces a high-speed parallel robot with Schönflies motion. This robot exhibits a promising prospect in realizing high-speed pick-and-place manipulation for packaging production lines. The robot has four identical limbs and a single platform. Its compact structure and single-platform concept provide this robot with good dynamic response potential. A line graph method based on Grassmann line geometry is used to investigate the mobility characteristics of the proposed robot. A generalized Blanding rule is also introduced into this procedure to realize mutual conversion between the line graphs for motions and constraints. Subsequently, the inverse kinematics is derived, and the singularity issue of the robot is investigated using both qualitative and quantitative approaches. Input and output transmission singularity indices are defined based on the reciprocal product in screw theory and the virtual coefficient by considering motion/force transmission performance. Thereafter, the singular loci of the proposed robot with specific geometric parameters are derived. The mobility analysis, inverse kinematics modeling, and singularity analysis conducted in this study are helpful in developing the robot.
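
    As a rough illustration of the transmission indices mentioned above, the sketch below evaluates the reciprocal product (virtual coefficient) of a twist screw and a wrench screw and normalizes it. The normalization bound and screw ordering are assumptions, and the paper's exact index definitions may differ.

```python
import numpy as np

def reciprocal_product(s1, s2):
    """Reciprocal product (virtual coefficient) of two screws written as
    6-vectors s = (w; v), i.e. w1.v2 + w2.v1."""
    return float(s1[:3] @ s2[3:] + s2[:3] @ s1[3:])

def transmission_index(twist, wrench):
    """Motion/force transmission index in the spirit of the abstract: the
    magnitude of the reciprocal product of an output twist and a transmission
    wrench, normalized here by a Cauchy-Schwarz bound. The paper's exact
    normalization (a maximum taken over the workspace) may differ."""
    twist, wrench = np.asarray(twist, float), np.asarray(wrench, float)
    num = abs(reciprocal_product(twist, wrench))
    den = np.linalg.norm(twist) * np.linalg.norm(wrench) + 1e-12
    return num / den   # 0 at a transmission singularity, larger is better
```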

  6. Autonomous mobile robot teams

    NASA Technical Reports Server (NTRS)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group are distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which the robots use to produce behavior, transforming their sensory information into appropriate action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  7. Segway robotic mobility platform

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Morrell, John; Mullens, Katherine D.; Burmeister, Aaron B.; Miles, Susan; Farrington, Nathan; Thomas, Kari M.; Gage, Douglas W.

    2004-12-01

    The Segway Robotic Mobility Platform (RMP) is a new mobile robotic platform based on the self-balancing Segway Human Transporter (HT). The Segway RMP is faster, cheaper, and more agile than existing comparable platforms. It is also rugged, has a small footprint, a zero turning radius, and yet can carry a greater payload. The new geometry of the platform presents researchers with an opportunity to examine novel topics, including people-height sensing and actuation modalities. This paper describes the history and development of the platform, its characteristics, and a summary of current research projects involving the platform at various institutions across the United States.

  8. Terrain trafficability characterization with a mobile robot

    NASA Astrophysics Data System (ADS)

    Ojeda, Lauro; Borenstein, Johann; Witus, Gary

    2005-05-01

    Most research on off-road mobile robot sensing focuses on obstacle negotiation, path planning, and position estimation. These issues have conventionally been the foremost factors limiting the performance and speeds of mobile robots. Very little attention has been paid to date to the issue of terrain trafficability, that is, the terrain's ability to support vehicular traffic. Yet, trafficability is of great importance if mobile robots are to reach speeds that human-driven vehicles can reach on rugged terrain. For example, it is obvious that the maximal allowable speed for a turn is lower when driving over sand or wet grass than when driving on packed dirt or asphalt. This paper presents our work on automated real-time characterization of terrain with regard to trafficability for small mobile robots. The two proposed methods can be implemented on skid-steer mobile robots and possibly also on tracked mobile robots. The paper also presents experimental results for each of the two implemented methods.
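
    The turn-speed example in the abstract follows from the friction-circle limit; the small worked sketch below is textbook physics rather than a method from the paper, and the friction coefficients are rough assumed values.

```python
import math

def max_turn_speed(radius_m, mu, g=9.81):
    """Maximum no-slip cornering speed from the friction-circle limit
    mu*m*g >= m*v^2/r, i.e. v_max = sqrt(mu*g*r). Textbook illustration of
    why trafficability limits speed, not a method from the paper; the
    friction coefficients used below are rough assumed values."""
    return math.sqrt(mu * g * radius_m)

# 5 m radius turn: packed dirt (mu ~ 0.7) vs. wet grass (mu ~ 0.35)
print(max_turn_speed(5.0, 0.7))    # ~5.9 m/s
print(max_turn_speed(5.0, 0.35))   # ~4.1 m/s
```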

  9. Integrated mobile robot control

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Thorpe, Charles

    1991-01-01

    This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were demonstrated by its implementation for the Navlab autonomous vehicle. In addition, performance results from the positioning and tracking systems are reported and analyzed.

  10. Integrated mobile-robot design

    SciTech Connect

    Kortenkamp, D.; Huber, M.; Cohen, C.; Raschke, U.; Bidlack, C.; Congdon, C.B.; Koss, F.; Weymouth, T.

    1993-08-01

    Ten mobile robots entered the AAAI '92 Robot Competition, held at last year's national conference. Carmel, the University of Michigan entry, won. The competition consisted of three stages. The first stage required roaming a 22 × 22-meter arena while avoiding static and dynamic obstacles; the second involved searching for and visiting 10 objects in the same arena. The obstacles were at least 1.5 meters apart, while the objects were spaced roughly evenly throughout the arena. Visiting was defined as moving to within two robot diameters of the object. The last stage was a timed race to visit three of the objects located earlier and return home. Since the first stage was primarily a subset of the second-stage requirements, and the third-stage implementation was very similar to that of the second, the authors focus here on the second stage. Carmel (Computer-Aided Robotics for Maintenance, Emergency, and Life support) is based on a commercially available Cybermotion K2A mobile-robot platform. It has a top speed of approximately 800 millimeters per second and moves on three synchronously driven wheels. For sensing, Carmel has a ring of 24 Polaroid sonar sensors and a single black-and-white charge-coupled-device camera mounted on a rotating table. Carmel has three processors: one controls the drive motors, one fires the sonar ring, and the third, a 486-based PC clone, executes all the high-level modules. The 486 also has a frame grabber for acquiring images. All computation and power are contained on-board.

  11. Mobile robot sense net

    NASA Astrophysics Data System (ADS)

    Konolige, Kurt G.; Gutmann, Steffen; Guzzoni, Didier; Ficklin, Robert W.; Nicewarner, Keith E.

    1999-08-01

    Mobile robot hardware and software are developing to the point where interesting applications for groups of such robots can be contemplated. We envision a set of mobots acting to map and perform surveillance or other tasks within an indoor environment (the Sense Net). A typical application of the Sense Net would be to detect survivors in buildings damaged by earthquake or other disaster, where human searchers would be put at risk. As a team, the Sense Net could reconnoiter a set of buildings faster, more reliably, and more comprehensively than an individual mobot. The team, for example, could dynamically form subteams to perform tasks that cannot be done by individual robots, such as measuring the range to a distant object by forming a long-baseline stereo sensor from a pair of mobots. In addition, the team could automatically reconfigure itself to handle contingencies such as disabled mobots. This paper is a report of our current progress in developing the Sense Net, after the first year of a two-year project. In our approach, each mobot has sufficient autonomy to perform several tasks, such as mapping unknown areas, navigating to specific positions, and detecting, tracking, characterizing, and classifying human and vehicular activity. We detail how some of these tasks are accomplished, and how the mobot group is tasked.
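
    The long-baseline stereo idea mentioned above reduces to triangulating a distant object from two bearing measurements taken a known baseline apart. The sketch below is plain two-ray triangulation with made-up angles, not the Sense Net code.

```python
import math

def long_baseline_range(baseline_m, bearing_a, bearing_b):
    """Range from mobot A to a distant object, triangulated from two bearing
    measurements (radians, measured from the baseline toward the object)
    taken by two mobots a known baseline apart. Plain two-ray triangulation
    to illustrate the long-baseline-stereo idea; not the Sense Net code."""
    gamma = math.pi - bearing_a - bearing_b   # angle at the object
    if gamma <= 0:
        raise ValueError("bearing rays do not converge")
    return baseline_m * math.sin(bearing_b) / math.sin(gamma)   # law of sines

# Two mobots 10 m apart, bearings of 60 and 70 degrees: object is ~12.3 m from A
print(long_baseline_range(10.0, math.radians(60), math.radians(70)))
```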

  12. An autonomous vision-based mobile robot

    NASA Astrophysics Data System (ADS)

    Baumgartner, Eric Thomas

    This dissertation describes estimation and control methods for use in the development of an autonomous mobile robot for structured environments. The navigation of the mobile robot is based on precise estimates of the position and orientation of the robot within its environment. The extended Kalman filter algorithm is used to combine information from the robot's drive wheels with periodic observations of small, wall-mounted, visual cues to produce the precise position and orientation estimates. The visual cues are reliably detected by at least one video camera mounted on the mobile robot. Typical position estimates are accurate to within one inch. A path tracking algorithm is also developed to follow desired reference paths which are taught by a human operator. Because of the time-independence of the tracking algorithm, the speed at which the vehicle travels along the reference path is specified independently of the tracking algorithm. The estimation and control methods have been applied successfully to two experimental vehicle systems. Finally, an analysis of the linearized closed-loop control system is performed to study the behavior and the stability of the system as a function of various control parameters.
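
    A minimal single-landmark EKF measurement update of the kind the abstract describes is sketched below. It is the standard range-bearing textbook formulation; the state layout, cue parameterization, and noise model are assumptions, not necessarily the dissertation's.

```python
import numpy as np

def ekf_pose_update(x, P, z, cue_xy, R):
    """One EKF measurement update for a planar pose x = [px, py, theta],
    given a range-bearing observation z = [r, phi] of a wall-mounted cue at
    a known position. Standard textbook formulation; the state layout, cue
    parameterization, and noise model are assumptions."""
    dx, dy = cue_xy[0] - x[0], cue_xy[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])   # predicted measurement
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],    # measurement Jacobian
                  [dy / q,           -dx / q,         -1.0]])
    y = np.asarray(z, float) - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi                 # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                              # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```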

  13. Learning for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral, and derivative (PID) controller and an artificial neural network provides an adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that is "beyond the adaptive critic."
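
    As a toy illustration of on-line PID gain adjustment of the sort discussed above, the sketch below nudges the gains from the running tracking error. It is a crude heuristic stand-in, not the neural-network or adaptive-critic learning the paper refers to, and the initial gains and adaptation rate are assumed values.

```python
class AdaptivePID:
    """PID controller whose gains are nudged online from the running tracking
    error: a crude heuristic stand-in for the learned gain tuning discussed
    above, not the paper's neural-network or adaptive-critic scheme. Initial
    gains and the adaptation rate are assumed values."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, rate=1e-4):
        self.kp, self.ki, self.kd, self.rate = kp, ki, kd, rate
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Grow each gain while its error term persists (simple heuristic adaptation).
        self.kp += self.rate * err * err
        self.ki += self.rate * err * self.integral
        self.kd += self.rate * err * deriv
        self.prev_err = err
        return u
```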

  14. Adaptive Behavior for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2009-01-01

    The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.

  15. ARIES: A mobile robot inspector

    SciTech Connect

    Byrd, J.S.

    1995-12-31

    ARIES (Autonomous Robotic Inspection Experimental System) is a mobile robot inspection system being developed for the Department of Energy (DOE) to survey and inspect drums containing mixed and low-level radioactive waste stored in warehouses at DOE facilities. The drums are typically stacked four high and arranged in rows with three-foot aisle widths. The robot will navigate through the aisles and perform an autonomous inspection operation, typically performed by a human operator. It will make real-time decisions about the condition of the drums, maintain a database of pertinent information about each drum, and generate reports.

  16. Mobile robot for hazardous environments

    SciTech Connect

    Bains, N.

    1995-12-31

    This paper describes the architecture and potential applications of the autonomous robot for a known environment (ARK). The ARK project has developed an autonomous mobile robot that can move around by itself in a complicated nuclear environment utilizing a number of sensors for navigation. The primary sensor system is computer vision. The ARK has the intelligence to determine its position utilizing "natural landmarks," such as ordinary building features, at any point along its path. It is this feature that gives ARK its uniqueness to operate in an industrial type of environment. The prime motivation to develop ARK was the potential application of mobile robots in radioactive areas within nuclear generating stations and for nuclear waste sites. The project budget is $9 million over 4 yr and will be completed in October 1995.

  17. [Mobile autonomous robots-Possibilities and limits].

    PubMed

    Maehle, E; Brockmann, W; Walthelm, A

    2002-02-01

    Besides industrial robots, which today are firmly established in production processes, service robots are becoming more and more important. They shall provide services for humans in different areas of their professional and everyday environment including medicine. Most of these service robots are mobile which requires an intelligent autonomous behaviour. After characterising the different kinds of robots the relevant paradigms of intelligent autonomous behaviour for mobile robots are critically discussed in this paper and illustrated by three concrete examples of robots realized in Lübeck. In addition a short survey of actual kinds of surgical robots as well as an outlook to future developments is given.

  18. Mobile Surveillance and Monitoring Robots

    SciTech Connect

    Kimberly, Howard R.; Shipers, Larry R.

    1999-07-14

    Long-term nuclear material storage will require in-vault data verification, sensor testing, error and alarm response, inventory, and maintenance operations. System concept development efforts for a comprehensive nuclear material management system have identified the use of a small flexible mobile automation platform to perform these surveillance and maintenance operations. In order to have near-term wide-range application in the Complex, a mobile surveillance system must be small, flexible, and adaptable enough to allow retrofit into existing special nuclear material facilities. The objective of the Mobile Surveillance and Monitoring Robot project is to satisfy these needs by development of a human scale mobile robot to monitor the state of health, physical security and safety of items in storage and process; recognize and respond to alarms, threats, and off-normal operating conditions; and perform material handling and maintenance operations. The system will integrate a tool kit of onboard sensors and monitors, maintenance equipment and capability, and SNL developed non-lethal threat response technology with the intelligence to identify threats and develop and implement first response strategies for abnormal signals and alarm conditions. System versatility will be enhanced by incorporating a robot arm, vision and force sensing, robust obstacle avoidance, and appropriate monitoring and sensing equipment.

  19. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, Aed M.; Ward, Clyde R.; Jones, Joel D.; Mallet, William R.; Harpring, Larry J.; Collins, Montenius X.; Anderson, Erin K.

    1999-01-01

    A mobile robotic system that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot along pre-programmed paths. The operators may revise the preselected movements of the robotic system by Ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds of from 1/2 inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console.

  20. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, A.M.; Ward, C.R.; Jones, J.D.; Mallet, W.R.; Harpring, L.J.; Collins, M.X.; Anderson, E.K.

    1999-08-10

    A mobile robotic system is described that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot along pre-programmed paths. The operators may revise the preselected movements of the robotic system by Ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds of from 1/2 inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console. 4 figs.

  1. Certainty grids for mobile robots

    NASA Technical Reports Server (NTRS)

    Moravec, H. P.

    1987-01-01

    A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects.
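
    A minimal log-odds certainty/occupancy-grid update of the kind described above is sketched below; the increment values are assumptions, and Moravec's formulation models each sensor's fuzziness in more detail than this generic version.

```python
import numpy as np

def update_certainty_grid(log_odds, cells_traversed, cell_hit,
                          l_free=-0.4, l_occ=0.85):
    """Fuse one range reading into a certainty grid kept in log-odds form:
    cells the beam passed through become more likely empty, the cell where it
    terminated more likely occupied. A generic occupancy-grid update in the
    spirit of certainty grids; the increments are assumed values."""
    for c in cells_traversed:
        log_odds[c] += l_free
    if cell_hit is not None:
        log_odds[cell_hit] += l_occ
    return log_odds

grid = np.zeros((50, 50))                      # 0 log-odds = unknown (p = 0.5)
grid = update_certainty_grid(grid, [(10, 10), (10, 11)], (10, 12))
prob = 1.0 - 1.0 / (1.0 + np.exp(grid))        # convert back to probabilities
```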

  2. Communications Systems for Mobile Robotics

    SciTech Connect

    Futterman, J A; Pao, H

    2003-12-08

    Performance Confirmation is the activity by which the Yucca Mountain Project confirms that the engineered and natural containment barriers of this national nuclear waste repository are performing as predicted, so that an eventual decision to close the repository can be made. This activity involves systems that must be inspected and, in some cases, serviced by mobile robots. This paper discusses systems for underground mobile robot communications, including requirements, environments, options, issues, and down-select criteria. We reviewed a variety of systems, including Slotted Waveguide, Powerline Carrier, Leaky Feeder, Photonic Bandgap Fiber, Free-Space Optics, Millimeter Waves, Terahertz Systems, and RF Systems (including IEEE 802.11 a,b, and g, and Ultra-Wideband radio).

  3. Autonomous mobile robots: Vehicles with cognitive control

    SciTech Connect

    Meystel, A.

    1987-01-01

    This book explores a new, rapidly developing area of robotics. It describes the state of the art in intelligent control and applied machine intelligence, and research on and initial stages of manufacturing of autonomous mobile robots. A complete account of the theoretical and experimental results obtained during the last two decades, together with some generalizations on Autonomous Mobile Systems, is included in this book. Contents: Introduction; Requirements and Specifications; State-of-the-art in Autonomous Mobile Robots Area; Structure of Intelligent Mobile Autonomous System; Planner; Navigator; Pilot; Cartographer; Actuation Control; Computer Simulation of Autonomous Operation; Testing the Autonomous Mobile Robot; Conclusions; Bibliography.

  4. Mobile robotics research at Sandia National Laboratories

    SciTech Connect

    Morse, W.D.

    1998-09-01

    Sandia is a National Security Laboratory providing scientific and engineering solutions to meet national needs for both government and industry. As part of this mission, the Intelligent Systems and Robotics Center conducts research and development in robotics and intelligent machine technologies. An overview of Sandia's mobile robotics research is provided. Recent achievements and future directions in the areas of coordinated mobile manipulation, small smart machines, world modeling, and special application robots are presented.

  5. Two-Armed, Mobile, Sensate Research Robot

    NASA Technical Reports Server (NTRS)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

    The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach either the floor or high shelves, or some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control. A receptacle for an end effector is mounted on the tip of the wrist and

  6. Portable control device for networked mobile robots

    DOEpatents

    Feddema, John T.; Byrne, Raymond H.; Bryan, Jon R.; Harrington, John J.; Gladwell, T. Scott

    2002-01-01

    A handheld control device provides a way for controlling one or multiple mobile robotic vehicles by incorporating a handheld computer with a radio board. The device and software use a personal data organizer as the handheld computer with an additional microprocessor and communication device on a radio board for use in controlling one robot or multiple networked robots.

  7. Modular Track System For Positioning Mobile Robots

    NASA Technical Reports Server (NTRS)

    Miller, Jeff

    1995-01-01

    Conceptual system for positioning mobile robotic manipulators on large main structure includes modular tracks and ancillary structures assembled easily along with main structure. System, called "tracked robotic location system" (TROLS), originally intended for application to platforms in outer space, but TROLS concept might also prove useful on Earth; for example, to position robots in factories and warehouses. T-cross-section rail keeps mobile robot on track. Bar codes mark locations along track. Each robot equipped with bar-code-recognizing circuitry so it quickly finds way to assigned location.

  8. Face feature processor on mobile service robot

    NASA Astrophysics Data System (ADS)

    Ahn, Ho Seok; Park, Myoung Soo; Na, Jin Hee; Choi, Jin Young

    2005-12-01

    In recent years, many mobile service robots have been developed. These robots are different from industrial robots: service robots are confronted with unexpected changes in the human environment. Many capabilities are therefore needed by a mobile service robot, for example, the capability to recognize people's faces and voices, the capability to understand people's conversation, and the capability to express the robot's thinking. This research considered face detection, face tracking, and face recognition from continuous camera images. The face detection module uses the CBCH algorithm from Intel's OpenCV library. The face tracking module uses a fuzzy controller to move the pan-tilt camera smoothly based on the face detection result. PCA-FX, which adds class information to PCA, is used for the face recognition module. These three procedures, together called the face feature processor, were implemented on the mobile service robot OMR for verification.
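
    The sketch below illustrates the detect-then-center loop described above, using OpenCV's Haar cascade (the library descendant of the CBCH-style boosted cascade) and a plain proportional rule in place of the paper's fuzzy pan-tilt controller; the gain and function name are illustrative.

```python
import cv2

def face_pan_tilt_step(frame, cascade, img_center, gain=0.05):
    """Detect the largest face with an OpenCV Haar cascade and return pan/tilt
    corrections proportional to the face's offset from the image centre.
    A plain proportional rule stands in for the paper's fuzzy controller;
    the gain value and this helper's name are illustrative assumptions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0, 0.0                                    # no face: hold position
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # track the largest face
    err_x = (x + w / 2.0) - img_center[0]
    err_y = (y + h / 2.0) - img_center[1]
    return -gain * err_x, -gain * err_y                    # pan, tilt commands

# cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
```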

  9. Vision-guided heterogeneous mobile robot docking

    NASA Astrophysics Data System (ADS)

    Spofford, John R.; Blitch, John; Klarquist, William N.; Murphy, Robin R.

    1999-08-01

    Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as that envisioned by DARPA's Tactical Mobile Robotics program. One example of an interaction among such team members is the docking of a small robot of limited sensory and processing capability with a larger, more capable robot. Applications for such docking include the transfer of power, data, and material, as well as physically combined maneuver or manipulation. A two-robot system is considered in this paper. The smaller 'throwable' robot contains a video camera capable of imaging the larger 'packable' robot and transmitting the imagery. The packable robot can both sense the throwable robot through an onboard camera and sense itself through the throwable robot's transmitted video, and is capable of processing imagery from either source. This paper describes recent results in the development of control and sensing strategies for automatic mid-range docking of these two robots. Decisions addressed include the selection of which robot's image sensor to use and which robot to maneuver. Initial experimental results are presented for docking using sensor data from each robot.

  10. Algorithmic approach to intelligent robot mobility

    SciTech Connect

    Kauffman, S.

    1983-05-01

    This paper presents Sutherland's algorithm, plus an alternative algorithm, which allows mobile robots to move about intelligently in environments resembling the rooms and hallways in which we move around. The main hardware requirements for a robot to use the algorithms presented are mobility and an ability to sense distances with some type of non-contact scanning device. This article does not discuss the actual robot construction. The emphasis is on heuristics and algorithms. 1 reference.

  11. Mobile robot localization using sonar.

    PubMed

    Drumheller, M

    1987-02-01

    This correspondence describes a method by which range data from a sonar rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room. The plan of the room is modeled as a list of segments indicating the positions of walls. The algorithm works by correlating straight segments in the range data against the room model, then eliminating implausible configurations using the sonar barrier test, which exploits physical constraints on sonar data. The approach is extremely tolerant of noise and clutter. Transient objects such as furniture and people need not be included in the room model, and very noisy, low-resolution sensors can be used. The algorithm's performance is demonstrated using a Polaroid Ultrasonic Rangefinder.

  12. Hierarchical modelling of mobile, seeing robots

    NASA Technical Reports Server (NTRS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-01-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  13. Real Time System Architecture For A Mobile Robot

    NASA Astrophysics Data System (ADS)

    Sharma, Uma K.; McTamaney, Louis S.

    1987-01-01

    An intelligent mobile robot must be able to accept a mission statement and constraints, plan its actions, execute its plans, perceive and adapt to its environment, and report its successes and failures. In this paper we describe a modular system architecture for such a complex mobile robot system. On-board versus off-board processing is a key system-level issue. We have selected off-board processing because the anticipated computer quantity, size, power requirement, and lack of robustness made on-board processing impractical if not impossible. Our system includes a transportable command center and a computer-controllable M113 armored personnel carrier, our mobile robot. The command center contains communication and computer hardware necessary for receiving and processing robot motion and sensor information, and for generating and transmitting mobility and sensor commands in real time to the robot. All control and status data transmission, between the robot and the command center, is accomplished through microwave links using a wide band, auto-tracking antenna. Under development since 1982, this system has demonstrated the capability of mission and route planning with execution at 8 km/hr, obstacle detection and avoidance at 15 km/hr, autonomous road following at 24 km/hr, and a remotely managed route reconnaissance mission at vehicle speeds of up to 40 km/hr.

  14. Mobile robot vehicles for physical security

    SciTech Connect

    McGovern, D.E.

    1987-07-01

    A fleet of vehicles is being developed and maintained by Sandia National Labs for studies in remote control and autonomous operation. These vehicles range from modified commercial vehicles to specially constructed mobile platforms and are utilized as test beds for developing concepts in the application of robotics to interior and exterior physical security. Actuators control the vehicle speed, brakes, and steering through manual input from a remote driving station or through some level of digital computer control. On-board processing may include simple vehicle control functions or may allow for unmanned, autonomous operation. Communication links are provided for digital communication between control computers, television transmission for vehicle vision, and voice for local control. With these vehicles, SNL can develop, test, and evaluate sensors, processing requirements, various methods of actuator implementation, operator controlled feedback requirements, and vehicle operations. A description of the major features and uses for each of the vehicles in the fleet is provided.

  15. A dragline-forming mobile robot inspired by spiders.

    PubMed

    Wang, Liyu; Culha, Utku; Iida, Fumiya

    2014-03-01

    Mobility of wheeled or legged machines can be significantly increased if they are able to move from a solid surface into a three-dimensional space. Although that may be achieved by addition of flying mechanisms, the payload fraction will be the limiting factor in such hybrid mobile machines for many applications. Inspired by spiders producing draglines to assist locomotion, the paper proposes an alternative mobile technology where a robot achieves locomotion from a solid surface into a free space. The technology resembles the dragline production pathway in spiders to a technically feasible degree and enables robots to move with thermoplastic spinning of draglines. As an implementation, a mobile robot has been prototyped with thermoplastic adhesives as source material of the draglines. Experimental results show that a dragline diameter range of 1.17-5.27 mm was achievable by the 185 g mobile robot in descending locomotion from the solid surface of a hanging structure with a power consumption of 4.8 W and an average speed of 5.13 cm min⁻¹. With an open-loop controller consisting of sequences of discrete events, the robot has demonstrated repeatable dragline formation with a relative deviation within -4% and a length close to the metre scale. PMID:24434546

  16. Automatic learning by an autonomous mobile robot

    SciTech Connect

    de Saussure, G.; Spelt, P.F.; Killough, S.M.; Pin, F.G.; Weisbin, C.R.

    1989-01-01

    This paper describes recent research in automatic learning by the autonomous mobile robot HERMIES-IIB at the Center for Engineering Systems Advanced Research (CESAR). By acting on the environment and observing the consequences during a set of training examples, the robot learns a sequence of successful manipulations on a simulated control panel. The robot learns to classify panel configurations in order to deal with new configurations that are not part of the original training set. 5 refs., 2 figs.

  17. Employing Omnidirectional Visual Control for Mobile Robotics.

    ERIC Educational Resources Information Center

    Wright, J. R., Jr.; Jung, S.; Steplight, S.; Wright, J. R., Sr.; Das, A.

    2000-01-01

    Describes projects using conventional technologies--incorporation of relatively inexpensive visual control with mobile robots using a simple remote control vehicle platform, a camera, a mirror, and a computer. Explains how technology teachers can apply them in the classroom. (JOW)

  18. Steering a mobile robot in real time

    NASA Astrophysics Data System (ADS)

    Chuah, Mei C.; Fennema, Claude L., Jr.

    1994-10-01

    Using computer vision for mobile robot navigation has been of interest since the 1960s. This interest is evident in even the earliest robot projects: at SRI International ('Shakey') and at Stanford University (the 'Stanford Cart'). These pioneering projects provided a foundation for later work but fell far short of providing real-time solutions. Since the mid 1980s, the ARPA-sponsored ALV and UGV projects have established a need for real-time navigation. To achieve the necessary speed, some researchers have focused on building faster hardware; others have turned to the use of new computational architectures, such as neural nets. The work described in this paper uses another approach that has become known as 'perceptual servoing.' Previously reported results show that perceptual servoing is both fast and accurate when used to steer vehicles equipped with precise odometers. When the instrumentation on the vehicle does not give precise measurements of distance traveled, as could be the case for a vehicle traveling on ice or mud, new techniques are required to accommodate the reduced ability to make accurate predictions about motion and control. This paper presents a method that computes estimates of distance traveled using landmarks and path information. The new method continues to perform in real time using modest computational facilities, and results demonstrate the effects of the new implementation on steering accuracy.

  19. Robotic vehicle with multiple tracked mobility platforms

    SciTech Connect

    Salton, Jonathan R.; Buttz, James H.; Garretson, Justin; Hayward, David R.; Hobart, Clinton G.; Deuel, Jr., Jamieson K.

    2012-07-24

    A robotic vehicle having two or more tracked mobility platforms that are mechanically linked together with a two-dimensional coupling, thereby forming a composite vehicle of increased mobility. The robotic vehicle is operative in hazardous environments and can be capable of semi-submersible operation. The robotic vehicle is capable of remote controlled operation via radio frequency and/or fiber optic communication link to a remote operator control unit. The tracks have a plurality of track-edge scallop cut-outs that allow the tracks to easily grab onto and roll across railroad tracks, especially when crossing the railroad tracks at an oblique angle.

  20. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, communicating to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  1. Extensible Hardware Architecture for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Park, Eric; Kobayashi, Linda; Lee, Susan Y.

    2005-01-01

    The Intelligent Robotics Group at NASA Ames Research Center has developed a new mobile robot hardware architecture designed for extensibility and reconfigurability. Currently implemented on the K9 rover, and soon to be integrated onto the K10 series of human-robot collaboration research robots, this architecture allows for rapid changes in instrumentation configuration and provides a high degree of modularity through a synergistic mix of off-the-shelf and custom designed components, allowing eased transplantation into a wide variety of mobile robot platforms. A component-level overview of this architecture is presented along with a description of the changes required for implementation on K10, followed by plans for future work.

  2. Defining proprioceptive behaviors for autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Overholt, James L.; Hudas, Greg R.; Gerhart, Grant R.

    2002-07-01

    Proprioception is a sense of body position and movement that supports the control of many automatic motor functions such as posture and locomotion. This concept, normally relegated to the fields of neural physiology and kinesiology, is being utilized in the field of unmanned mobile robotics. This paper looks at developing proprioceptive behaviors for use in controlling an unmanned ground vehicle. First, we will discuss the field of behavioral control of mobile robots. Next, a discussion of proprioception and the development of proprioceptive sensors will be presented. We will then focus on the development of a unique neural-fuzzy architecture that will be used to incorporate the control behaviors coming directly from the proprioceptive sensors. Finally we will present a simulation experiment where a simple multi-sensor robot, utilizing both external and proprioceptive sensors, is presented with the task of navigating an unknown terrain to a known target position. Results of the mobile robot utilizing this unique fusion methodology will be discussed.

  3. Autonomous Mobile Robot That Can Read

    NASA Astrophysics Data System (ADS)

    Létourneau, Dominic; Michaud, François; Valin, Jean-Marc

    2004-12-01

    The ability to read would surely contribute to increased autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message to read, extracting the characters, and recognizing them as symbols, characters, and words. Using an optical character recognition (OCR) algorithm on a mobile robot, however, brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual messages to read, potentially having to compensate for its viewpoint of the message, and use the limited onboard processing capabilities to decode the message. The robot also has to deal with variations in lighting conditions. In this paper, we present our approach, demonstrating that it is feasible for an autonomous mobile robot to read messages of specific colors and font in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a Pentium 233 MHz and a Sony EVI-D30 pan-tilt-zoom camera.

  4. Tandem robot control system and method for controlling mobile robots in tandem

    DOEpatents

    Hayward, David R.; Buttz, James H.; Shirey, David L.

    2002-01-01

    A control system for controlling mobile robots provides a way to control mobile robots, connected in tandem with coupling devices, to navigate across difficult terrain or in closed spaces. The mobile robots can be controlled cooperatively as a coupled system in linked mode or controlled individually as separate robots.

  5. Application of mobile robot localization using sonar

    SciTech Connect

    Byrd, J.S.; Hill, K.H.

    1994-12-31

    A sonar-based mobile robot has been developed for inspection of low-level radioactive waste drums. An algorithm was developed which gives the robot the ability to reference itself to cylindrical objects. The drum-following algorithm has been demonstrated in 4-ft drum aisles at the Mobile Robotics Laboratory at the University of South Carolina. The final version has proven to be robust through extensive long-term navigation tests. Future enhancements will employ a narrow-aisle version of the Nav-master to allow navigation in 3-ft drum aisles. The final version of the inspection robot will include the drum-navigation algorithm as a low-level primitive instruction. The onboard management system will be dedicated to more of the high-level functions, such as planning, now provided by the offboard supervisory system.

  6. Evolutionary neurocontrollers for autonomous mobile robots.

    PubMed

    Floreano, D; Mondada, F

    1998-10-01

    In this article we describe a methodology for evolving neurocontrollers of autonomous mobile robots without human intervention. The presentation, which spans from technological and methodological issues to several experimental results on evolution of physical mobile robots, covers both previous and recent work in the attempt to provide a unified picture within which the reader can compare the effects of systematic variations on the experimental settings. After describing some key principles for building mobile robots and tools suitable for experiments in adaptive robotics, we give an overview of different approaches to evolutionary robotics and present our methodology. We start reviewing two basic experiments showing that different environments can shape very different behaviours and neural mechanisms under very similar selection criteria. We then address the issue of incremental evolution in two different experiments from the perspective of changing environments and robot morphologies. Finally, we investigate the possibility of evolving plastic neurocontrollers and analyse an evolved neurocontroller that relies on fast and continuously changing synapses characterized by dynamic stability. We conclude by reviewing the implications of this methodology for engineering, biology, cognitive science and artificial life, and point at future directions of research.
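
    A bare-bones version of the evolutionary loop behind such neurocontrollers (generate, evaluate, select, mutate) is sketched below. It is generic, with a placeholder fitness function, and does not reproduce Floreano and Mondada's on-robot evaluation setup.

```python
import random

def evolve_controller(fitness, n_weights=20, pop_size=30, generations=50, sigma=0.1):
    """Bare-bones evolutionary loop for a neurocontroller weight vector:
    evaluate, keep the better half, refill with mutated copies. A generic
    sketch of the methodology only; the authors evolved controllers on
    physical robots, which this placeholder setup does not attempt."""
    pop = [[random.gauss(0, 1) for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)            # best individuals first
        parents = pop[:pop_size // 2]
        children = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Placeholder fitness (a real run would score each genome by a robot trial):
best = evolve_controller(lambda w: -sum(x * x for x in w))
```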

  7. Intelligent modular manipulation for mobile robots

    NASA Astrophysics Data System (ADS)

    Culbertson, John

    2008-04-01

    As mobile robots continue to gain acceptance across a variety of applications within the defense and civilian markets, the number of tasks that these robot platforms are expected to accomplish is expanding. Robot operators are asked to do more with the same platforms - from EOD missions to reconnaissance and inspection operations. Because a majority of missions are dangerous in nature, it is critical that users are able to make remote adjustments to the systems to ensure that they are kept out of harm's way. An efficient way to expand the capabilities of existing robot platforms, improve the efficiency of robot missions, and ultimately improve the operator's safety is to integrate JAUS-enabled Intelligent Modular Manipulation payloads. Intelligent Modular Manipulation payloads include both simple and dexterous manipulator arms with plug-and-play end-effector tools that can be changed based on the specific mission. End-effectors that can be swapped down-range provide an added benefit of decreased time-on-target. The intelligence in these systems comes from semi-autonomous mobile manipulation actions that enable the robot operator to perform manipulation tasks with the touch of a button on the OCU. RE2 is supporting Unmanned Systems Interoperability by utilizing the JAUS standard as the messaging protocol for all of its manipulation systems. Therefore, they can be easily adapted and integrated onto existing JAUS-enabled robot platforms.

  8. Self-awareness in mobile robots

    NASA Astrophysics Data System (ADS)

    Lim, Willie Y.

    1992-02-01

    Self-awareness is the ability for a mobile robot to, on its own, detect and deal with operational abnormalities. Such an ability is needed for the robot to operate robustly in an unpredictable environment. For the robot to be self-aware it must be capable of sensing its own internal state (such as detecting hardware failures or a drop in battery charge level), reacting quickly to such internal inputs, and inferring the implications of the resulting actions. At the lowest level, the ability to detect hardware failures enables a reactive robot to substitute functionally equivalent behaviors for those that no longer work because of the failures. If the failures are serious, the robot should be able to abort the current task/mission and initiate a new one to correct the problem. At the highest level, self-awareness would give the robot a sense of its 'well-being,' limitations, capabilities, and needs. This paper describes how the self-awareness ability is being implemented on a mobile robot, called SmartyCat, that uses high-level reasoning to coordinate and specialize its low-level reactive behaviors to the mission goal. The multilevel mechanisms, ranging from behavior substitution to mission replanning, needed for self-awareness are discussed.

  9. Mobile robot and mobile manipulator research towards ASTM standards development

    NASA Astrophysics Data System (ADS)

    Bostelman, Roger; Hong, Tsai; Legowik, Steven

    2016-05-01

    Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems.

  10. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-06-28

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.
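
    The survey, re-survey, and alarm behavior described above can be summarized as a small state machine; the sketch below is illustrative only, and the two speeds in it are placeholder assumptions rather than figures from the patent.

```python
def survey_step(state, contamination_detected, confirmed):
    """Tiny state machine mirroring the behavior described in the abstract:
    survey the preprogrammed path, drop to a slow re-survey when the monitor
    trips, resume if the reading is not confirmed, and stop with an alarm if
    it is. The two speeds are placeholder values, not figures from the patent."""
    CRUISE, RECHECK = 0.20, 0.01              # m/s, assumed
    if state == "survey":
        return ("resurvey", RECHECK) if contamination_detected else ("survey", CRUISE)
    if state == "resurvey":
        return ("alarm", 0.0) if confirmed else ("survey", CRUISE)
    return ("alarm", 0.0)                     # alarmed: remain stopped
```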

  11. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.

    1994-01-01

    An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.

  12. A mobile autonomous robot for radiological surveys

    SciTech Connect

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1992-01-01

    The Robotics Development Group at the Savannah River Site is developing an autonomous robot (SIMON) to perform radiological surveys of potentially contaminated floors. The robot scans floors at a speed of one inch per second and stops, sounds an alarm, and flashes lights when contamination in a certain area is detected. The contamination of interest here is primarily alpha and beta-gamma. The robot, a Cybermotion K2A base, is radio controlled, uses dead reckoning to determine vehicle position, and docks with a charging station to replenish its batteries and calibrate its position. It uses an ultrasonic ranging system for collision avoidance. In addition, two safety bumpers located at the front and the back of the robot will stop the robot's motion when they are depressed. Paths for the robot are preprogrammed, and the robot's motion can be monitored on a remote screen which shows a graphical map of the environment. The radiation instrument being used is an Eberline RM22A monitor. This monitor is microcomputer based with a serial I/O interface for remote operation. Up to 30 detectors may be configured with the RM22A.

  13. A mobile autonomous robot for radiological surveys

    SciTech Connect

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1992-10-01

    The Robotics Development Group at the Savannah River Site is developing an autonomous robot (SIMON) to perform radiological surveys of potentially contaminated floors. The robot scans floors at a speed of one inch per second and stops, sounds an alarm, and flashes lights when contamination in a certain area is detected. The contamination of interest here is primarily alpha and beta-gamma. The robot, a Cybermotion K2A base, is radio controlled, uses dead reckoning to determine vehicle position, and docks with a charging station to replenish its batteries and calibrate its position. It uses an ultrasonic ranging system for collision avoidance. In addition, two safety bumpers located at the front and the back of the robot will stop the robot's motion when they are depressed. Paths for the robot are preprogrammed, and the robot's motion can be monitored on a remote screen which shows a graphical map of the environment. The radiation instrument being used is an Eberline RM22A monitor. This monitor is microcomputer based with a serial I/O interface for remote operation. Up to 30 detectors may be configured with the RM22A.

  14. Random issues in workspace analysis for a mobile robot

    NASA Astrophysics Data System (ADS)

    Stǎnescu, Tony; Dolga, Valer; Mondoc, Alina

    2014-12-01

    Evolution of the mobile robot is currently characterized by multiple applications in dynamic workspaces and low initial knowledge. This paper presents aspects of approaching the random processes involved in the evolution of a mobile robot in an unstructured environment. The experimental results are used for modeling an infrared sensor (integrated in the mobile robot structure) and to assess the probability of locating obstacles in the environment.

  15. Simple adaptive tracking control for mobile robots

    NASA Astrophysics Data System (ADS)

    Bobtsov, Alexey; Faronov, Maxim; Kolyubin, Sergey; Pyrkin, Anton

    2014-12-01

    The problem of simple adaptive and robust control is studied for the case of parametric and dynamic dimension uncertainties: only the maximum possible relative degree of the plant model is known. The control approach known as the "consecutive compensator" is investigated. To illustrate the efficiency of the proposed approach, an example of mobile robot motion control using a computer vision system is considered.

  16. Mobility of lightweight robots over snow

    NASA Astrophysics Data System (ADS)

    Lever, James H.; Shoop, Sally A.

    2006-05-01

    Snowfields are challenging terrain for lightweight (<50 kg) unmanned ground vehicles. Deep sinkage, high snow-compaction resistance, traction loss while turning, and ingestion of snow into the drive train can cause immobility within a few meters of travel. However, for suitably designed vehicles, deep snow offers a smooth, uniform surface that can obliterate obstacles. Key requirements for good over-snow mobility are low ground pressure, large clearance relative to vehicle size and a drive system that tolerates cohesive snow. A small robot will invariably encounter deep snow relative to its ground clearance. Because a single snowstorm can easily deposit 30 cm of fresh snow, robots with ground clearance less than about 10 cm must travel over the snow rather than gain support from the underlying ground. This can be accomplished using low-pressure tracks (< 1.5 kPa). Even so, snow-compaction resistance can exceed 20% of vehicle weight. Also, despite relatively high traction coefficients for low track pressures, differential or skid steering is difficult because the outboard track can easily break traction as the vehicle attempts to turn against the snow. Short track lengths (relative to track separation) or coupled articulated robots offer steering solutions for deep snow. This paper presents preliminary guidance for designing lightweight robots for good mobility over snow based on mobility theory and tests of PackBot, Talon and SnoBot, a custom-designed research robot. Because many other considerations constrain robot designs, this guidance can help with development of winterization kits to improve the over-snow performance of existing robots.
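
    A small worked example of the ground-pressure budget described above may help: nominal ground pressure follows from vehicle weight divided by track contact area, and the compaction-resistance budget from the quoted 20% of vehicle weight. The mass and track dimensions below are illustrative values, not measurements of the robots cited in the paper.

```python
# Illustrative ground-pressure check for a lightweight tracked robot (assumed numbers).
mass_kg = 35.0                                  # assumed robot mass (< 50 kg class)
g = 9.81                                        # m/s^2
track_length_m, track_width_m = 0.60, 0.20      # assumed contact patch per track

weight_n = mass_kg * g
contact_area_m2 = 2 * track_length_m * track_width_m   # two tracks on the snow surface

pressure_kpa = weight_n / contact_area_m2 / 1000.0
resistance_n = 0.20 * weight_n                  # snow-compaction resistance up to ~20% of weight

print(f"ground pressure ~ {pressure_kpa:.2f} kPa (target < 1.5 kPa)")
print(f"compaction drag ~ {resistance_n:.0f} N")
```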

  17. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation

    PubMed Central

    Masmoudi, Mohamed Slim; Masmoudi, Mohamed

    2016-01-01

    This paper describes the design and the implementation of a trajectory tracking controller using fuzzy logic for a mobile robot navigating in indoor environments. Most previous works used two independent controllers for navigation and obstacle avoidance. The main contribution of the paper is that only one fuzzy controller is used for both navigation and obstacle avoidance. The mobile robot used is equipped with a DC motor, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories are used and simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation times and travelled path.
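
    As a rough illustration of how a single fuzzy controller can blend goal seeking and obstacle avoidance, the sketch below fuzzifies a frontal obstacle distance and a heading error and combines a few Sugeno-style rules into left/right wheel speeds. The membership breakpoints, rule weights and function names are assumptions for illustration only, not the controller or rule base of the cited paper.

```python
# Minimal single-fuzzy-controller sketch (illustrative; not the paper's rule base).
def ramp_up(x, a, b):
    """Membership that is 0 below a, 1 above b, linear in between."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def ramp_down(x, a, b):
    return 1.0 - ramp_up(x, a, b)

def fuzzy_step(d_front, heading_err, v_max=0.5):
    # Fuzzify inputs (distance in metres, heading error in radians).
    near, far = ramp_down(d_front, 0.3, 1.0), ramp_up(d_front, 0.3, 1.0)
    left = ramp_up(heading_err, 0.0, 0.8)        # goal lies to the left
    right = ramp_down(heading_err, -0.8, 0.0)    # goal lies to the right
    ahead = max(0.0, 1.0 - abs(heading_err) / 0.8)

    # Sugeno-style rules: firing strength -> singleton (v_left, v_right).
    rules = [
        (min(far, ahead), (v_max, v_max)),            # clear and on course: go straight
        (min(far, left),  (0.6 * v_max, v_max)),      # steer toward goal on the left
        (min(far, right), (v_max, 0.6 * v_max)),      # steer toward goal on the right
        (near,            (v_max, -v_max)),           # obstacle close: rotate away
    ]
    w = sum(r[0] for r in rules) or 1e-9
    v_left = sum(r[0] * r[1][0] for r in rules) / w
    v_right = sum(r[0] * r[1][1] for r in rules) / w
    return v_left, v_right

print(fuzzy_step(d_front=2.0, heading_err=0.3))
```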

  18. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation

    PubMed Central

    Masmoudi, Mohamed Slim; Masmoudi, Mohamed

    2016-01-01

    This paper describes the design and the implementation of a trajectory tracking controller using fuzzy logic for a mobile robot navigating in indoor environments. Most previous works used two independent controllers for navigation and obstacle avoidance. The main contribution of the paper is that only one fuzzy controller is used for both navigation and obstacle avoidance. The mobile robot used is equipped with a DC motor, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories are used and simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation times and travelled path. PMID:27688748

  19. Autonomous mobile robot research using the HERMIES-III robot

    SciTech Connect

    Pin, F.G.; Beckerman, M.; Spelt, P.F.; Robinson, J.T.; Weisbin, C.R.

    1989-01-01

    This paper reports on the status and future directions in the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III is comprised of a battery-powered, omni-directional wheeled platform with a seven degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner and a dual computer system containing up to 128 NCUBE nodes in a hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations necessarily arising from the set-up of an experimental program involving human-scale task performance by multiple autonomous mobile robots. 10 refs., 3 figs.

  20. Robust performance of multiple tasks by a mobile robot

    NASA Technical Reports Server (NTRS)

    Beckerman, Martin; Barnett, Deanna L.; Dickens, Mike; Weisbin, Charles R.

    1989-01-01

    While there have been many successful mobile robot experiments, only a few papers have addressed issues pertaining to the range of applicability, or robustness, of robotic systems. The purpose of this paper is to report results of a series of benchmark experiments done to determine and quantify the robustness of an integrated hardware and software system of a mobile robot.

  1. Airborne Chemical Sensing with Mobile Robots

    PubMed Central

    Lilienthal, Achim J.; Loutfi, Amy; Duckett, Tom

    2006-01-01

    Airborne chemical sensing with mobile robots has been an active research area since the beginning of the 1990s. This article presents a review of research work in this field, including gas distribution mapping, trail guidance, and the different subtasks of gas source localisation. Due to the difficulty of modelling gas distribution in a real world environment with currently available simulation techniques, we focus largely on experimental work and do not consider publications that are purely based on simulations.

  2. Lunar surface exploration using mobile robots

    NASA Astrophysics Data System (ADS)

    Nishida, Shin-Ichiro; Wakabayashi, Sachiko

    2012-06-01

    A lunar exploration architecture study is being carried out by space agencies. JAXA is carrying out research and development of a mobile robot (rover) to be deployed on the lunar surface for exploration and outpost construction. The main target areas for outpost construction and lunar exploration are mountainous zones. The moon's surface is covered by regolith. Achieving a steady traversal of such irregular terrain constitutes the major technical problem for rovers. A newly developed lightweight crawler mechanism can effectively traverse such irregular terrain because of its low contact force with the ground. This fact was determined on the basis of the mass and expected payload of the rover. This paper describes a plan for Japanese lunar surface exploration using mobile robots, and presents the results of testing and analysis needed in their development. This paper also gives an overview of the lunar exploration robot to be deployed in the SELENE follow-on mission, and the composition of its mobility, navigation, and control systems.

  3. Development of a mobile robot for the 1995 AUVS competition

    NASA Astrophysics Data System (ADS)

    Matthews, Bradley O.; Ruthemeyer, Michael A.; Perdue, David; Hall, Ernest L.

    1995-12-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system, where even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data are collected through a commercial tracking device, which communicates the X,Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results by showing that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.

  4. Vision-based position measurement system for indoor mobile robots

    SciTech Connect

    Schreiber, M.J.; Dickerson, S.

    1994-12-31

    This paper discusses a stand-alone position measurement system for mobile nuclear waste management robots traveling in warehouses. The task is to provide two-dimensional position information to help the automated guided vehicle (AGV) guide itself along the aisle's centerline and mark the location of defective barrels containing low-level radiation. The AGV is 0.91 m wide and must travel along straight aisles 1.12 m wide and up to 36 m long. Radioactive testing limits the AGV's speed to 25 mm/s. The design objectives focus on cost, power consumption, accuracy, and robustness.

  5. Brain Computer Interface based robotic rehabilitation with online modification of task speed.

    PubMed

    Sarac, Mine; Koyas, Ela; Erdogan, Ahmetcan; Cetin, Mujdat; Patoglu, Volkan

    2013-06-01

    We present a systematic approach that enables online modification/adaptation of robot assisted rehabilitation exercises by continuously monitoring intention levels of patients utilizing an electroencephalogram (EEG) based Brain-Computer Interface (BCI). In particular, we use Linear Discriminant Analysis (LDA) to classify event-related synchronization (ERS) and desynchronization (ERD) patterns associated with motor imagery; however, instead of providing a binary classification output, we utilize posterior probabilities extracted from the LDA classifier as continuous-valued outputs to control a rehabilitation robot. Passive velocity field control (PVFC) is used as the underlying robot controller to map instantaneous levels of motor imagery during the movement to the speed of contour following tasks. In other words, PVFC changes the speed of contour following tasks with respect to intention levels of motor imagery. PVFC also allows decoupling of the task and the speed of the task from each other, and ensures coupled stability of the overall robot-patient system. The proposed framework is implemented on AssistOn-Mobile--a series elastic actuator based on a holonomic mobile platform, and feasibility studies with healthy volunteers have been conducted to test the effectiveness of the proposed approach. Giving patients online control over the speed of the task, the proposed approach ensures active involvement of patients throughout exercise routines and has the potential to increase the efficacy of robot assisted therapies. PMID:24187241
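
    The mapping from continuous classifier output to task speed can be pictured with the toy sketch below: an LDA posterior in [0, 1] is low-pass filtered and scaled linearly onto a speed command, standing in for the gain of the velocity field. The class name, smoothing constant and speed limits are illustrative assumptions, not the AssistOn-Mobile/PVFC implementation.

```python
# Toy intention-to-speed mapping (assumed parameters; not the PVFC controller itself).
import numpy as np

class IntentionSpeedMapper:
    """Map a stream of motor-imagery posteriors onto a task-speed command."""
    def __init__(self, v_min=0.1, v_max=1.0, alpha=0.1):
        self.v_min, self.v_max, self.alpha = v_min, v_max, alpha
        self.p_filt = 0.5                      # filtered posterior probability

    def update(self, posterior):
        # Exponential smoothing keeps the speed command from jittering.
        self.p_filt += self.alpha * (posterior - self.p_filt)
        # Linear map: weak intention -> slow contour following, strong -> fast.
        return self.v_min + (self.v_max - self.v_min) * self.p_filt

mapper = IntentionSpeedMapper()
for p in np.clip(np.random.normal(0.7, 0.1, 5), 0.0, 1.0):   # simulated LDA posteriors
    print(round(mapper.update(p), 3))
```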

  6. Unified Approach To Control Of Motions Of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Improved computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Present scheme similar to one described in "Coordinated Control of Mobile Robotic Manipulators" (NPO-19109). Both schemes based on configuration-control formalism. Present one incorporates explicit distinction between holonomic and nonholonomic constraints. Several other prior articles in NASA Tech Briefs discussed aspects of configuration-control formalism. These include "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes with Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  7. Autonomous Navigation by a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand

    2005-01-01

    ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
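
    The occupancy-grid representation mentioned above lends itself to graph search for waypoint planning. The sketch below runs a plain 4-connected A* over a small boolean grid; it is a generic illustration of planning on an occupancy grid, not the ROAMAN long-range or local planner.

```python
# Generic A* over an occupancy grid (illustrative; not the ROAMAN planners).
import heapq
import itertools

def astar(grid, start, goal):
    """4-connected A* over a grid where nonzero cells are obstacles."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                                   # heap tie-breaker
    frontier = [(h(start), next(tie), 0, start, None)]
    parents, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in parents:
            continue
        parents[cur] = parent
        if cur == goal:                                       # rebuild the waypoint list
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None

occupancy = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 1, 0],
             [1, 0, 0, 0]]
print(astar(occupancy, (0, 0), (3, 3)))
```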

  8. Robotic mobile servicing platform for space station

    NASA Technical Reports Server (NTRS)

    Lowenthal, S. H.; Vanerden, L.

    1987-01-01

    The semi-autonomous inspection and servicing of the Space Station's major thermal, electrical, and mechanical subsystems are critical needs for the safe and reliable operation of the station. A conceptual design is presented of a self-intelligent, small and highly mobile robotic platform. Equipped with suitable inspection sensors (cameras, ammonia detectors, etc.), this system's primary mission is to perform routine, autonomous inspection of the Station's primary subsystems. Typical tasks include detection of leaks from thermal fluid or refueling lines, as well as detection of micro-meteoroid damage to the primary structure. Equipped with stereo cameras and a dexterous manipulator, simple teleoperator repairs and small On-orbit Replacement Unit (ORU) changeout can also be accomplished. More difficult robotic repairs would be left to the larger, more sophisticated Mobile Remote Manipulator System (MRMS). An ancillary function is to ferry crew members and equipment around the station. The primary design objectives were to provide a flexible, but uncomplicated robotic platform, one which caused minimal impact to the design of the Station's primary structure but could accept more advanced telerobotic technology as it evolves.

  9. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    SciTech Connect

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  10. A robot safety experiment varying robot speed and contrast with human decision cost.

    PubMed

    Etherton, J; Sneckenberger, J E

    1990-09-01

    An industrial robot safety experiment was performed to find out how quickly subjects responded to an unexpected robot motion at different speeds of the robot arm, and how frequently they failed to detect a motion that should have been detected. Robotics technicians risk being fatally injured if a robot should trap them against a fixed object. The value of the experimentation lies in its ability to show that this risk can be reduced by a design change. If the robot is moving at a slow speed, during programming and troubleshooting tasks, then the worker has sufficient time to actuate an emergency stop device before the robot can reach the person. The dependent variable in the experiment was the overrun distance (beyond an expected stopping point) that a robot arm travelled before a person actuated a stop pushbutton. Results of this experiment demonstrated that the speed of the robot arm and the implied decision cost for hitting an emergency stop button had a significant effect on human reaction time. At a fairly high level of ambient lighting (560 lux), fixed-level changes in the luminance contrast between the robot arm and its background did not significantly affect human reaction time.

  11. Indoor Mobile Robot Navigation by Central Following Based on Monocular Vision

    NASA Astrophysics Data System (ADS)

    Saitoh, Takeshi; Tada, Naoya; Konishi, Ryosuke

    This paper develops indoor mobile robot navigation by center following based on monocular vision. In our method, two boundary lines between the wall and baseboard are detected in the frontal image. Then, appearance-based obstacle detection is applied. When an obstacle exists, an avoidance or stop movement is performed according to the size and position of the obstacle; when no obstacle exists, the robot moves along the center of the corridor. We developed a wheelchair-based mobile robot. We estimated the accuracy of the boundary line detection, and obtained fast processing speed and high detection accuracy. We demonstrate the effectiveness of our mobile robot through stopping experiments with various obstacles and moving experiments.

  12. Sensor fusion for mobile robot navigation

    SciTech Connect

    Kam, M.; Zhu, X.; Kalata, P.

    1997-01-01

    The authors review techniques for sensor fusion in robot navigation, emphasizing algorithms for self-location. These find use when the sensor suite of a mobile robot comprises several different sensors, some complementary and some redundant. Integrating the sensor readings, the robot seeks to accomplish tasks such as constructing a map of its environment, locating itself in that map, and recognizing objects that should be avoided or sought. The review describes integration techniques in two categories: low-level fusion is used for direct integration of sensory data, resulting in parameter and state estimates; high-level fusion is used for indirect integration of sensory data in hierarchical architectures, through command arbitration and integration of control signals suggested by different modules. The review provides an arsenal of tools for addressing this (rather ill-posed) problem in machine intelligence, including Kalman filtering, rule-based techniques, behavior-based algorithms and approaches that borrow from information theory, Dempster-Shafer reasoning, fuzzy logic and neural networks. It points to several further research needs, including: robustness of decision rules; simultaneous consideration of self-location, motion planning, motion control and vehicle dynamics; the effect of sensor placement and attention focusing on sensor fusion; and adaptation of techniques from biological sensor fusion.
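
    As a concrete instance of the low-level fusion category (Kalman filtering for self-location), the sketch below fuses dead-reckoned odometry increments with noisy absolute position measurements in one dimension. The noise parameters and sensor setup are assumptions chosen for illustration.

```python
# 1-D Kalman-filter fusion of odometry and an absolute position sensor (assumed noise values).
import numpy as np

def kalman_self_location(x0, p0, odometry, beacon_meas, q=0.02, r=0.25):
    """Fuse relative odometry steps (prediction) with absolute measurements (update)."""
    x, p = x0, p0
    estimates = []
    for u, z in zip(odometry, beacon_meas):
        x, p = x + u, p + q                  # predict: dead reckoning plus process noise
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p  # update with the absolute measurement
        estimates.append(x)
    return estimates

odometry = [1.0] * 5                                   # commanded 1 m steps
truth = np.cumsum([0.95] * 5)                          # actual motion with wheel slip
measurements = truth + np.random.normal(0.0, 0.5, 5)   # noisy absolute readings
print([round(e, 2) for e in kalman_self_location(0.0, 1.0, odometry, measurements)])
```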

  13. SyRoTek--Distance Teaching of Mobile Robotics

    ERIC Educational Resources Information Center

    Kulich, M.; Chudoba, J.; Kosnar, K.; Krajnik, T.; Faigl, J.; Preucil, L.

    2013-01-01

    E-learning is a modern and effective approach for training in various areas and at different levels of education. This paper gives an overview of SyRoTek, an e-learning platform for mobile robotics, artificial intelligence, control engineering, and related domains. SyRoTek provides remote access to a set of fully autonomous mobile robots placed in…

  14. Meeting the challenges of installing a mobile robotic system

    NASA Technical Reports Server (NTRS)

    Decorte, Celeste

    1994-01-01

    The challenges of integrating a mobile robotic system into an application environment are many. Most problems inherent to installing the mobile robotic system fall into one of three categories: (1) the physical environment - location(s) where, and conditions under which, the mobile robotic system will work; (2) the technological environment - external equipment with which the mobile robotic system will interact; and (3) the human environment - personnel who will operate and interact with the mobile robotic system. The successful integration of a mobile robotic system into these three types of application environment requires more than a good pair of pliers. The tools for this job include: careful planning, accurate measurement data (as-built drawings), complete technical data of systems to be interfaced, sufficient time and attention of key personnel for training on how to operate and program the robot, on-site access during installation, and a thorough understanding and appreciation - by all concerned - of the mobile robotic system's role in the security mission at the site, as well as the machine's capabilities and limitations. Patience, luck, and a sense of humor are also useful tools to keep handy during a mobile robotic system installation. This paper will discuss some specific examples of problems in each of three categories, and explore approaches to solving these problems. The discussion will draw from the author's experience with on-site installations of mobile robotic systems in various applications. Most of the information discussed in this paper has come directly from knowledge learned during installations of Cybermotion's SR2 security robots. A large part of the discussion will apply to any vehicle with a drive system, collision avoidance, and navigation sensors, which is, of course, what makes a vehicle autonomous. And it is with these sensors and a drive system that the installer must become familiar in order to foresee potential trouble areas in the

  15. Towards an in vivo wireless mobile robot for surgical assistance.

    PubMed

    Hawks, Jeff A; Rentschler, Mark E; Redden, Lee; Infanger, Roger; Dumpert, Jason; Farritor, Shane; Oleynikov, Dmitry; Platt, Stephen R

    2008-01-01

    The use of miniature in vivo robots that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Previous work has demonstrated that mobile and fixed-base in vivo robots can be used to improve visualization of the surgical field and perform surgical tasks such as collecting biopsy tissue samples. All of these robots used tethers to provide for power and data transmission. This paper describes recent work focused on developing a modular wireless mobile platform that could be used for in vivo robotic sensing and manipulation applications. One vision for these types of self-contained in vivo robotic devices is that they could be easily carried and deployed by non-medical personnel at the site of an injury. Such wireless in vivo robots are much more transportable and lower cost than current robotic surgical assistants, and could ultimately allow a surgeon to become a remote first responder irrespective of the location of the patient. PMID:18391277

  16. Mobile robotics activities in DOE laboratories

    NASA Astrophysics Data System (ADS)

    Lujan, Ron; Harbour, Jerry; Feddema, John; Bailey, Sharon; Barhen, Jacob; Reister, David

    2005-05-01

    This paper will briefly outline major activities in Department of Energy (DOE) Laboratories focused on mobile platforms, both Unmanned Ground Vehicles (UGV's) as well as Unmanned Air Vehicles (UAV's). The activities will be discussed in the context of the science and technology construct used by the DOE Technology Roadmap for Robotics and Intelligent Machines (RIM)1 published in 1998; namely, Perception, Reasoning, Action, and Integration. The activities to be discussed span from research and development to deployment in field operations. The activities support customers in other agencies. The discussion of "perception" will include hyperspectral sensors, complex patterns discrimination, multisensor fusion and advances in LADAR technologies, including real-world perception. "Reasoning" activities to be covered include cooperative controls, distributed systems, ad-hoc networks, platform-centric intelligence, and adaptable communications. The paper will discuss "action" activities such as advanced mobility and various air and ground platforms. In the RIM construct, "integration" includes the Human-Machine Integration. Accordingly the paper will discuss adjustable autonomy and the collaboration of operator(s) with distributed UGV's and UAV's. Integration also refers to the applications of these technologies into systems to perform operations such as perimeter surveillance, large-area monitoring and reconnaissance. Unique facilities and test beds for advanced mobile systems will be described. Given that this paper is an overview, rather than delve into specific detail in these activities, other more exhaustive references and sources will be cited extensively.

  17. Mobile Robotics Activities in DOE Laboratories

    SciTech Connect

    Ron Lujan; Jerry Harbour; John T. Feddema; Sharon Bailey; Jacob Barhen; David Reister

    2005-03-01

    This paper will briefly outline major activities in Department of Energy (DOE) Laboratories focused on mobile platforms, both Unmanned Ground Vehicles (UGV’s) as well as Unmanned Air Vehicles (UAV’s). The activities will be discussed in the context of the science and technology construct used by the DOE Technology Roadmap for Robotics and Intelligent Machines (RIM)1 published in 1998; namely, Perception, Reasoning, Action, and Integration. The activities to be discussed span from research and development to deployment in field operations. The activities support customers in other agencies. The discussion of "perception" will include hyperspectral sensors, complex patterns discrimination, multisensor fusion and advances in LADAR technologies, including real-world perception. "Reasoning" activities to be covered include cooperative controls, distributed systems, ad-hoc networks, platform-centric intelligence, and adaptable communications. The paper will discuss "action" activities such as advanced mobility and various air and ground platforms. In the RIM construct, "integration" includes the Human-Machine Integration. Accordingly the paper will discuss adjustable autonomy and the collaboration of operator(s) with distributed UGV’s and UAV’s. Integration also refers to the applications of these technologies into systems to perform operations such as perimeter surveillance, large-area monitoring and reconnaissance. Unique facilities and test beds for advanced mobile systems will be described. Given that this paper is an overview, rather than delve into specific detail in these activities, other more exhaustive references and sources will be cited extensively.

  18. Robotics.

    ERIC Educational Resources Information Center

    Waddell, Steve; Doty, Keith L.

    1999-01-01

    "Why Teach Robotics?" (Waddell) suggests that the United States lags behind Europe and Japan in use of robotics in industry and teaching. "Creating a Course in Mobile Robotics" (Doty) outlines course elements of the Intelligent Machines Design Lab. (SK)

  19. World map based on RFID tags for indoor mobile robots

    NASA Astrophysics Data System (ADS)

    Tsukiyama, Toshifumi

    2005-10-01

    A new navigation method is described for an indoor mobile robot. The robot system is composed of a Radio Frequency Identification (RFID) tag sensor and a commercial three-wheel mobile platform with ultrasonic rangefinders. The RFID tags are used as landmarks for navigation, and the topological relation map which shows the connection of scattered tags through the environment is used as course instructions to a goal. The robot automatically follows paths using the ultrasonic rangefinders until a tag is found and then refers to the topological map to decide the next movement. Our proposed technique would be useful for real-world robotic applications such as intelligent navigation for motorized wheelchairs.
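
    A topological relation map of this kind is naturally stored as an adjacency list keyed by tag ID, and a course to the goal can then be read off with a breadth-first search. The tag IDs and connectivity below are invented for illustration and do not come from the paper.

```python
# Tag-connectivity map and course query (tag IDs and layout are invented).
from collections import deque

tag_map = {
    "T1": ["T2"], "T2": ["T1", "T3", "T4"],
    "T3": ["T2"], "T4": ["T2", "T5"], "T5": ["T4"],
}

def course_to_goal(start, goal):
    """Return the sequence of tags to traverse from start to goal (BFS over the map)."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        tag = frontier.popleft()
        if tag == goal:
            path = []
            while tag is not None:
                path.append(tag)
                tag = parents[tag]
            return path[::-1]
        for neighbour in tag_map[tag]:
            if neighbour not in parents:
                parents[neighbour] = tag
                frontier.append(neighbour)
    return None

print(course_to_goal("T1", "T5"))   # -> ['T1', 'T2', 'T4', 'T5']
```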

  20. Omnivision-based autonomous mobile robotic platform

    NASA Astrophysics Data System (ADS)

    Cao, Zuoliang; Hu, Jun; Cao, Jin; Hall, Ernest L.

    2001-10-01

    As a laboratory demonstration platform, the TUT-I mobile robot provides various experimentation modules to demonstrate the robotics technologies involved in remote control, computer programming, and teach-and-playback operations. Typically, the teach-and-playback operation has proved to be an effective solution, especially in structured environments. Path generation in the teach mode and real-time path correction using path-error detection in the playback mode are demonstrated. A vision-based image database is generated as the representation of the given path in the teaching procedure. An online image-positioning algorithm is performed for path following. Advanced sensory capability is employed to provide environment perception. A unique omnidirectional vision (omni-vision) system is used for localization and navigation. The omnidirectional vision system uses an extremely wide-angle lens so that a dynamic omni-vision image can be processed in real time to provide the widest possible view during movement. Beacon guidance is realized by observing the locations of points derived from overhead features such as predefined light arrays in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed for obstacle avoidance.

  1. Multimedia modeling of autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Zada, Fatma; Guirguis, S.

    1997-09-01

    Modeling of autonomous mobile robots (AMRs) is sought to enable designers to investigate various aspects of the design before the actual implementation takes place. Simulation techniques undoubtedly enrich the design tools, allowing the designer to vary the design parameters as appropriate until some optimal performance point is achieved. Although they are general purpose, multimedia tools, especially authoring tools, can give the AMR designer some degree of assistance in fulfilling the simulation task as quickly as possible. This rapid prototyping tool is rather cost effective, and allows the designer to interactively manipulate the design in simple steps. In this paper, a multimedia environment has been constructed to enable designers to simulate AMRs in order to investigate aspects concerning their kinematics and dynamics. It is also intended that these design experiences can be gathered and categorized in a tutoring system that could be used by practitioners and students enrolled in highly technical courses such as robotics. The rich multimedia environment can assist the learner in many ways by devising examples, suggesting solutions and design tradeoffs that have been experienced before.

  2. Steering control system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Kolli, Kaylan C.; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe the design of a steering mechanism for an autonomous mobile robot. The steering mechanism replaces a manually turned rack and pinion arrangement with a crank mechanism driven by a linear actuator that in turn is powered by a brushless dc motor. The system was modeled, analyzed, and redesigned to meet the requirements. A 486 computer through a 3-axis motion controller supervises the steering control. The steering motor is a brushless dc motor powered by three phase signals. It is run in current loop mode. The steering control system is supervised by a personal computer through a multi-axis motion controller. Testing of these systems has been done in the lab as well as on an outside test track with positive results.

  3. Reactive navigational controller for autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Hawkins, Scott

    1993-12-01

    Autonomous mobile robots must respond to external challenges and threats in real time. One way to satisfy this requirement is to use a fast low level intelligence to react to local environment changes. A fast reactive controller has been implemented which performs the task of real time local navigation by integrating primitive elements of perception, planning, and control. Competing achievement and constraint behaviors are used to allow abstract qualitative specification of navigation goals. An interface is provided to allow a higher level deliberative intelligence with a more global perspective to set local goals for the reactive controller. The reactive controller's simplistic strategies may not always succeed, so a means to monitor and redirect the reactive controller is provided.

  4. An evolutional artificial potential field algorithm based on the anisotropy of omnidirectional mobile robot

    NASA Astrophysics Data System (ADS)

    Cao, Qixin; Leng, Chuntao; Huang, Yanwen

    2007-12-01

    The traditional artificial potential field (APF) method is widely used for motion planning of traditional mobile robots, but there is little research about its application to the omnidirectional mobile robot (OMR). To propose a more suitable motion planning method for the OMR, an evolutional APF is presented in this paper, obtained by introducing a revolving factor into the APF. The revolving factor synthesizes the anisotropy of the OMR and the effect of the dynamic environment. Finally, simulation is carried out to demonstrate that, compared with the traditional APF, the evolutional APF provides high-speed and high-efficiency motion planning and exploits the advantages of the OMR.
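
    To make the idea concrete, the sketch below computes one potential-field step with the classical attractive and repulsive terms plus an extra tangential component standing in for the revolving factor. The gains and influence distance are illustrative assumptions; the paper's actual formulation of the revolving factor is not reproduced here.

```python
# Potential-field step with an added tangential ("revolving") term (assumed gains).
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, k_rot=0.3, d0=1.0):
    """Return the resultant force: attraction + repulsion + a tangential component."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attractive term toward the goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                              # obstacle inside influence radius
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            force += mag * diff / d                   # classical repulsive term
            tangent = np.array([-diff[1], diff[0]]) / d
            force += k_rot * mag * tangent            # revolving term: slide around the obstacle
    return force

print(apf_step(pos=[2.0, 0.0], goal=[5.0, 0.0], obstacles=[[2.5, 0.3]]))
```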

  5. Navigation strategies for multiple autonomous mobile robots moving in formation

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1991-01-01

    The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
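
    A very small sketch of the nearest-neighbour tracking idea is given below: the leader follows its commanded velocity, and every other robot steers proportionally toward a fixed offset from the nearest robot ahead of it. The gains, offsets, and the restriction to neighbours ahead of the follower are assumptions made to keep the illustration simple; the paper's stability condition is not reproduced.

```python
# Toy nearest-neighbour formation tracking (gains and offsets are assumed values).
import numpy as np

def formation_step(positions, leader_vel, offset, k=1.0, dt=0.1):
    """Leader (index 0) follows its commanded velocity; each follower tracks a
    desired offset from its nearest neighbour ahead of it (proportional control)."""
    positions = np.asarray(positions, float)
    new = positions.copy()
    new[0] += np.asarray(leader_vel, float) * dt
    for i in range(1, len(positions)):
        p = positions[i]
        ahead = positions[positions[:, 0] > p[0]]     # candidate neighbours in front
        if len(ahead) == 0:
            continue
        nearest = ahead[np.linalg.norm(ahead - p, axis=1).argmin()]
        new[i] += k * (nearest + offset - p) * dt     # steer toward the formation slot
    return new

robots = [[0.0, 0.0], [-1.0, 0.3], [-2.0, -0.3]]      # leader and two followers
for _ in range(200):
    robots = formation_step(robots, leader_vel=[0.2, 0.0], offset=[-0.8, 0.0])
print(np.round(robots, 2))                            # followers line up behind the leader
```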

  6. Local path planning of a mobile robot using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Rubo; Zhang, Guoyin; Gu, Guochang

    1998-08-01

    The local path planning of mobile robots can be regarded as finding a mapping from perception space to action space. In this paper, a genetic algorithm is used to search for an optimal mapping so as to improve the obstacle-avoidance ability of the robot. The rotational angle and translation distance of the robot are divided into seven and four grades, respectively. In addition, the length of the path that the robot covers before colliding with an obstacle is taken as the fitness. The robot can learn to carry out local path planning through selection, crossover and mutation in the genetic algorithm. The simulation results are given at the end of this paper.
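
    The encoding and fitness described above can be sketched as follows: each gene selects one of seven rotation grades and one of four translation grades, and the fitness of a chromosome is the distance covered before the first collision. The toy collision test and GA parameters below are assumptions for illustration, not the paper's simulation environment.

```python
# GA sketch for the perception-to-action mapping (toy environment; assumed GA settings).
import math
import random

ROT = [-90, -60, -30, 0, 30, 60, 90]     # seven rotation grades (degrees)
TRANS = [0.1, 0.2, 0.3, 0.4]             # four translation grades (metres)

def random_chromosome(n=10):
    return [(random.randrange(7), random.randrange(4)) for _ in range(n)]

def fitness(chrom, collides):
    """Distance travelled before the first collision (surrogate for the simulation run)."""
    x = y = heading = dist = 0.0
    for r_idx, t_idx in chrom:
        heading += math.radians(ROT[r_idx])
        step = TRANS[t_idx]
        x, y = x + step * math.cos(heading), y + step * math.sin(heading)
        if collides(x, y):
            break
        dist += step
    return dist

def evolve(collides, pop_size=30, generations=40):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, collides), reverse=True)
        survivors = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < 0.2:                     # mutation
                child[random.randrange(len(child))] = (random.randrange(7), random.randrange(4))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, collides))

blocked = lambda x, y: x > 1.5 and abs(y) < 0.3           # toy obstacle region
best = evolve(blocked)
print(round(fitness(best, blocked), 2))
```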

  7. Embedded mobile farm robot for identification of diseased plants

    NASA Astrophysics Data System (ADS)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used in farms for identification of diseased plants. It puts forth two of the major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of the GPS (Global Positioning System) location and data obtained from IR (Infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, robot mechanical assembly, camera and infrared sensors has been used. A Mini2440 microcontroller has been used, on which an embedded Linux OS (Operating System) is implemented.

  8. Mobile robot navigation with vision-based neural networks

    NASA Astrophysics Data System (ADS)

    Inigo, Rafael M.; Torres, Raul E.

    1995-01-01

    Mobile robot technology is spreading into the development of advanced manufacturing systems. Methods of multi-sensor data fusion with vision, sonar and limit switches have been developed as the most flexible, but expensive, approaches. Other approaches, such as buried-wire AGVs, are more common. They decrease the cost of the mobile robot, but degrade the flexibility of the navigation system as well. This paper uses neural networks (NNs) with only one camera to obtain flexibility similar to that of the high-cost approaches, but in a cost-efficient way. The NNs use translation and perspective information of features in images to determine the proper alignment and position of the mobile robot.

  9. Toward the Automated Synthesis of Cooperative Mobile Robot Teams

    SciTech Connect

    Parker, L.E.

    1998-11-01

    A current limitation in the real-world use of cooperating mobile robots is the difficulty in determining the proper team composition for a given robotic application. Present technology restricts the design and implementation of cooperative robot teams to the expertise of a robotics researcher, who has to develop robot teams on an application-specific basis. The objective of our research is to reduce the complexity of cooperative robotic systems through the development of a methodology that enables the automated synthesis of cooperative robot teams. We propose an approach to this problem that uses a combination of the theories of sensori-computational systems and information invariants, building on the earlier work of Donald, Rus, et al. We describe the notion of defining equivalence classes that serve as fundamental building blocks of more complex cooperative mobile robot behaviors. We postulate a methodology for framing mission requirements in terms of the goals and constraints of the problem, incorporating issues such as multi-robot interference, communication, control strategy, robot complexity, and so forth, into the mechanism. Our initial work restricts the robot application and design space to three multi-robot application domains we have previously studied and implemented: keeping formation, "mock" hazardous waste cleanup, and cooperative observation. This paper presents the foundational ideas upon which our approach to cooperative team design is based. Keywords: Cooperative behaviors, behavior synthesis, multi-robot learning

  10. Autonomous Mobile Platform for Research in Cooperative Robotics

    NASA Technical Reports Server (NTRS)

    Daemi, Ali; Pena, Edward; Ferguson, Paul

    1998-01-01

    This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.

  11. Localization/mapping motion control system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Yang-Syu, Jr.; Su, Chiun-Shiang; Yang, Chan-Yun

    2011-12-01

    The objective of this paper is to design a mobile robot with automatic motion behaviors and obstacle avoidance functions. The robot is also able to perform SLAM (Simultaneous Localization And Mapping) in an unknown environment. The robot position is calculated by the developed software program from the motor encoders. An obstacle avoidance controller is developed using fuzzy theory. An LRF (laser range finder) is installed on the robot. The sensing data of this LRF are used to calculate the environmental information for the obstacle avoidance controller. Then, the ICP (Iterative Closest Point) algorithm is applied to compare the position error of the environmental data in order to obtain the estimated position of the LRF. Finally, these estimated position data are used to calculate the final SLAM of this mobile robot. Both the simulation and experimental results show that the developed robot system works very well.
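
    The ICP correction mentioned above can be pictured with the minimal sketch below: one iteration matches each scan point to its nearest point in the reference map and then computes the rigid transform that best aligns the pairs via the SVD (Kabsch) solution. A real LRF scan matcher would iterate this, reject outliers and handle partial overlap; the toy data here are generated, not sensor readings.

```python
# One ICP iteration for 2-D scan alignment (illustrative; real matchers iterate and filter).
import numpy as np

def icp_step(scan, ref):
    """Match scan points to nearest reference points, then solve for rigid (R, t)."""
    d = np.linalg.norm(scan[:, None, :] - ref[None, :, :], axis=2)
    matched = ref[d.argmin(axis=1)]                   # nearest-neighbour correspondences
    p0, q0 = scan - scan.mean(0), matched - matched.mean(0)
    u, _, vt = np.linalg.svd(p0.T @ q0)               # Kabsch: SVD of the cross-covariance
    rot = vt.T @ u.T
    if np.linalg.det(rot) < 0:                        # guard against a reflection
        vt[-1] *= -1
        rot = vt.T @ u.T
    t = matched.mean(0) - rot @ scan.mean(0)
    return rot, t

theta = np.radians(10.0)                              # simulated pose error of the robot
rot_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
ref = np.random.rand(50, 2)
scan = ref @ rot_true.T + [0.2, -0.1]

before = np.linalg.norm(scan - ref, axis=1).mean()
rot, t = icp_step(scan, ref)
after = np.linalg.norm(scan @ rot.T + t - ref, axis=1).mean()
print(round(before, 3), "->", round(after, 3))        # alignment error typically shrinks
```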

  12. Autonomous Mobile Robot Navigation Using Harmonic Potential Field

    NASA Astrophysics Data System (ADS)

    Panati, Subbash; Baasandorj, Bayanjargal; Chong, Kil To

    2015-05-01

    Mobile robot navigation is an area of robotics which has gained massive attention among researchers in the robotics community. Path planning and obstacle avoidance are the key aspects of mobile robot navigation. This paper presents a harmonic potential field based navigation algorithm for mobile robots. The harmonic potential field method overcomes the issue of local minima, which is a major bottleneck of the artificial potential field method. The harmonic potential field is calculated using harmonic functions, and Dirichlet boundary conditions are used for the obstacles, goal and initial position. The simulation results show that the proposed method is able to overcome the local minima issue and navigate successfully from the initial position to the goal without colliding with obstacles in a static environment.
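
    A small grid version of the method can be sketched as follows: the harmonic potential is obtained by relaxing Laplace's equation with Dirichlet values on the boundary, obstacle and goal cells, and the path then follows the steepest descent of the potential. The grid size, obstacle layout and iteration count are illustrative assumptions.

```python
# Harmonic potential field on a grid with Dirichlet boundaries (illustrative layout).
import numpy as np

def harmonic_field(n=20, iters=3000):
    """Jacobi relaxation of Laplace's equation; walls/obstacles held at 1, goal at 0."""
    u = np.full((n, n), 0.5)
    obstacle = np.zeros((n, n), dtype=bool)
    obstacle[8:12, 5:15] = True                       # a wall across the middle
    goal = (n - 2, n - 2)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 1.0 # Dirichlet: outer walls
        u[obstacle] = 1.0                             # Dirichlet: obstacle cells
        u[goal] = 0.0                                 # Dirichlet: goal is the global minimum
    return u, goal

def descend(u, start, goal, max_steps=400):
    """Follow the steepest-descent neighbour until the goal cell is reached."""
    path, pos = [start], start
    for _ in range(max_steps):
        r, c = pos
        pos = min(((r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))),
                  key=lambda p: u[p])
        path.append(pos)
        if pos == goal:
            break
    return path

u, goal = harmonic_field()
path = descend(u, start=(1, 1), goal=goal)
print(path[-1])                                       # ends at the goal cell, no local minima
```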

  13. Research of robot simultaneous localization and mapping in multiple mobile robot system

    NASA Astrophysics Data System (ADS)

    Huang, Yanbiao; Yang, Yimin; He, Qicheng; Zhang, Xuexi

    2007-11-01

    A multiple mobile robot system needs an exact global map when making decisions on task allocation and action control, but each robot can only obtain information about its immediate surroundings and often loses information about more distant areas; in other words, a single robot cannot build a comprehensive global map of the whole field. Aimed at these problems, a multi-sensor data fusion subsystem was designed and added to the multiple mobile robot system. The experiment shows that the whole system's fault-tolerance capability and identification capability are both clearly enhanced.

  14. A Contest-Oriented Project for Learning Intelligent Mobile Robots

    ERIC Educational Resources Information Center

    Huang, Hsin-Hsiung; Su, Juing-Huei; Lee, Chyi-Shyong

    2013-01-01

    A contest-oriented project for undergraduate students to learn implementation skills and theories related to intelligent mobile robots is presented in this paper. The project, related to Micromouse, Robotrace (Robotrace is the title of Taiwanese and Japanese robot races), and line-maze contests was developed by the embedded control system research…

  15. An Autonomous Mobile Robot for Tsukuba Challenge: JW-Future

    NASA Astrophysics Data System (ADS)

    Fujimoto, Katsuharu; Kaji, Hirotaka; Negoro, Masanori; Yoshida, Makoto; Mizutani, Hiroyuki; Saitou, Tomoya; Nakamura, Katsu

    “Tsukuba Challenge” is the only event of its kind that requires mobile robots to work autonomously and safely on public walkways. In this paper, we introduce the outline of our robot “JW-Future”, developed for this trial on the basis of an electric wheelchair. Additionally, the significance of participation in such a technical trial is discussed from the viewpoint of industry.

  16. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    ERIC Educational Resources Information Center

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant…

  17. Object detection techniques applied on mobile robot semantic navigation.

    PubMed

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-01-01

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a big need for algorithms to provide robots with these sorts of skills, from the location where objects are needed to accomplish a task up to where these objects are considered as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. This paper aims to use current trends in robotics that, at the same time, can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot, to prove its accuracy and efficiency. PMID:24732101

  18. Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    PubMed Central

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-01-01

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a big need for algorithms to provide robots with these sorts of skills, from the location where objects are needed to accomplish a task up to where these objects are considered as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. This paper aims to use current trends in robotics that, at the same time, can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot, to prove its accuracy and efficiency. PMID:24732101

  19. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons, and the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot in the real-time detection of objects using their color and in the processing of the robot's range and vision sensor data for navigation.

  20. Object detection techniques applied on mobile robot semantic navigation.

    PubMed

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-04-11

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a big need for algorithms to provide robots with these sorts of skills, from the location where objects are needed to accomplish a task up to where these objects are considered as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. This paper aims to use current trends in robotics that, at the same time, can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot, to prove its accuracy and efficiency.

  1. Interaction dynamics of multiple mobile robots with simple navigation strategies

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.

  2. Modeling and Control of Wheeled Mobile Robots.

    NASA Astrophysics Data System (ADS)

    Muir, Patrick Fred

    The accurate model-based servo-control of wheeled mobile robots (WMRs) relies upon the formulation of realistic kinematic and dynamic models. We identify six special WMR characteristics (closed-chains, higher-pair joints, unactuated and unsensed joints, friction, and pulse-width modulation) that require methodologies for modeling and control beyond those conventionally applied to stationary manipulators. Then, we develop methodologies for the kinematic and dynamic modeling of robotic mechanisms incorporating these special characteristics. We introduce instantaneously coincident coordinate systems and the wheel Jacobian to resolve WMR kinematic modeling. We introduce the concepts of force/torque propagation and frictional coupling at a joint to formulate a powerful unifying dynamic modeling framework. We compute the inverse and forward kinematic and dynamic solutions for model-based WMR servo-control and simulation. We demonstrate the applicability of (kinematics-based) resolved motion rate and (dynamics-based) resolved acceleration servo-control methodologies to WMRs through computer simulation evaluation studies. We exemplify our modeling and servo-control methodologies through Uranus, a three degree-of-freedom (DOF) WMR, and Bicsun-Bicas, a two DOF WMR. Our results show that resolved motion rate servo-control is adequate for general-purpose applications of Uranus. In contrast, the mechanically simpler Bicsun-Bicas requires the computationally complex resolved acceleration servo-control to compensate for the significant coupling and nonlinear components in its dynamic model. We recommend Bicsun-Bicas with resolved acceleration servo-control for general-purpose indoor applications because it is mechanically simple, capable of tracking any spatial x-y path, and if a turret is added, provides onboard manipulators, sensors, or docking instruments with three DOFs.

  3. Execution monitoring for a mobile robot system

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1990-01-01

    Due to sensor errors, uncertainty, incomplete knowledge, and a dynamic world, robot plans will not always be executed exactly as planned. This paper describes an implemented robot planning system that enhances the traditional sense-think-act cycle in ways that allow the robot system to monitor its behavior and react to emergencies in real time. A proposal on how robot systems can completely break away from the traditional three-step cycle is also made.

  4. Vibration suppression of speed-controlled robots with nonlinear control

    NASA Astrophysics Data System (ADS)

    Boscariol, Paolo; Gasparetto, Alessandro

    2016-06-01

    In this paper, a simple nonlinear control strategy for the simultaneous position tracking and vibration damping of robots is presented. The control is developed for devices actuated by speed-controlled servo drives. The conditions for the asymptotic stability of the closed-loop system are derived by ensuring its passivity. The capability of achieving improved trajectory tracking and vibration suppression is shown through experimental tests conducted on a three-axis Cartesian robot. The control is designed to be compatible with most industrial applications, given its simplicity of implementation, reduced computational requirements, and use of joint position as the only measured signal.

  5. Verification hybrid control of a wheeled mobile robot and manipulator

    NASA Astrophysics Data System (ADS)

    Muszynska, Magdalena; Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz

    2016-04-01

    In this article, innovative approaches to tracking control of wheeled mobile robots and manipulators are presented. The concepts include application of neural-fuzzy systems to compensate for the controlled system's nonlinearities in the tracking control task. The proposed control algorithms work on-line, contain structures that adapt to the changing working conditions of the controlled systems, and do not require preliminary learning. The algorithms were verified on real objects: a Scorbot-ER 4pc robotic manipulator and a Pioneer 2DX mobile robot.

  6. Cooperation of mobile robots for accident scene inspection

    NASA Astrophysics Data System (ADS)

    Byrne, R. H.; Harrington, J.

    A telerobotic system demonstration was developed for the Department of Energy's Accident Response group to highlight the applications of telerobotic vehicles to accident site inspection. The proof-of-principle system employs two mobile robots, Dixie and RAYBOT, to inspect a simulated accident site. Both robots are controlled serially from a single driving station, allowing an operator to take advantage of having multiple robots at the scene. The telerobotic system is described and some of the advantages of having more than one robot present are discussed. Future plans for the system are also presented.

  7. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared, and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, sensor fusion is integrated to resolve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation system based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766
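
    To make the flavor of such a fuzzy fusion controller concrete, the sketch below implements a heavily reduced, hypothetical version with three distance inputs and four rules rather than the paper's nine inputs and 24 rules; the membership functions, rule consequents, and velocity limits are assumptions chosen only for illustration.

```python
import numpy as np

# Minimal fuzzy obstacle-avoidance step: fuzzify three distance readings,
# fire a few rules with min-AND, and recover left/right wheel velocities
# with a weighted-average (centroid-like) defuzzification.

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return max(0.0, min((x - a) / (b - a) if b > a else 1.0,
                        (c - x) / (c - b) if c > b else 1.0))

def fuzzy_step(d_left, d_front, d_right, v_max=0.5):
    near = lambda d: tri(d, 0.0, 0.0, 0.6)   # metres
    far = lambda d: tri(d, 0.4, 1.5, 1.5)

    # Rule base: (antecedent degree, left wheel set-point, right wheel set-point).
    rules = [
        (min(far(d_left), far(d_front), far(d_right)), v_max, v_max),   # all clear: go straight
        (near(d_front), -0.1, 0.3),                                     # wall ahead: rotate in place
        (min(near(d_left), far(d_front)), v_max, 0.2),                  # obstacle left: bear right
        (min(near(d_right), far(d_front)), 0.2, v_max),                 # obstacle right: bear left
    ]
    w = np.array([r[0] for r in rules])
    if w.sum() == 0.0:
        return 0.0, 0.0                       # no rule fired: stop
    vl = float(np.dot(w, [r[1] for r in rules]) / w.sum())
    vr = float(np.dot(w, [r[2] for r in rules]) / w.sum())
    return vl, vr

print(fuzzy_step(1.2, 0.3, 1.0))              # wall ahead: rule 2 fires, robot turns in place
```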

  8. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared, and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, sensor fusion is integrated to resolve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation system based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.

  9. BALI development environment for small mobile robots

    NASA Astrophysics Data System (ADS)

    Lim, Willie Y.

    1995-12-01

    The design and prototyping of a development environment, called BALI, for a small robot, viz., the MIT 6.270 robot, is presented in this paper. BALI is being developed and used for research work using a 6.270-based robot. Building on the experience with IC (interactive-C) for programming the 6.270 robot and new technologies like Java, a more powerful and low cost robot development environment is possible. The goal of BALI is to provide a flexible, customizable, and extensible development environment so that robot researchers can quickly tailor BALI to their robots. Given that the 6.270 robot is really a building kit made up of LEGO blocks (or similar kinds of physical building blocks), the 68HC11-based motherboard, and a variety of sensors, BALI cannot be specially built for one 'instance' of the 6.270 robot. Rather, the guiding principle for building BALI should be to provide the GUI (graphical user interface) 'primitives' from which one can assemble and build his or her development environment. Thus GUI primitives for displaying status information, sensor readings, robot orientation, and environment maps must be provided. Many of these primitives are already provided in Java. It is the robot-specific ones that have to be developed for BALI. The Java-like language that forms the core of BALI is the main focus of this paper.

  10. Hazardous materials emergency response mobile robot

    NASA Technical Reports Server (NTRS)

    Stone, Henry W. (Inventor); Lloyd, James W. (Inventor); Alahuzos, George A. (Inventor)

    1995-01-01

    A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm is presented. This robot inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.

  11. Towards Principled Experimental Study of Autonomous Mobile Robots

    NASA Technical Reports Server (NTRS)

    Gat, Erann

    1995-01-01

    We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) Two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed. 2) The probability distributions of these performance metrics are exponential rather than normal, and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.
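
    As an illustration of the kind of distributional check the abstract argues for, the sketch below fits both a normal and an exponential model to synthetic trial-completion times and compares Kolmogorov-Smirnov goodness of fit; the data are simulated, not the authors' experimental results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
times = rng.exponential(scale=30.0, size=200)        # pretend trial completion times (s)

# Fit both candidate models and compare goodness of fit.
norm_params = stats.norm.fit(times)
expo_params = stats.expon.fit(times)
print("KS p-value vs normal     :", stats.kstest(times, 'norm', args=norm_params).pvalue)
print("KS p-value vs exponential:", stats.kstest(times, 'expon', args=expo_params).pvalue)
# stats.pearsonr(times, distances) would quantify the time/distance correlation
# claim, given paired per-trial metrics.
```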

  12. Model-based vision system for mobile robot position estimation

    NASA Astrophysics Data System (ADS)

    D'Orazio, Tiziana; Capozzo, Liborio; Ianigro, Massimo; Distante, Arcangelo

    1994-02-01

    The development of an autonomous mobile robot is a central problem in artificial intelligence and robotics. A vision system can be used to recognize naturally occurring landmarks located in known positions. The problem considered here is that of finding the location and orientation of a mobile robot using a 3-D image taken by a CCD camera located on the robot. The naturally occurring landmarks that we use are the corners of the room extracted by an edge detection algorithm from a 2-D image of the indoor scene. Then, the location and orientation of the vehicle are calculated by perspective information of the landmarks in the scene of the room where the robot moves.

  13. Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri

    2002-01-01

    The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions both inside and outside of a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.

  14. Levels of autonomy control approach for mobile robots

    NASA Astrophysics Data System (ADS)

    Moorehead, Stewart J.

    2003-09-01

    Increasingly, mobile robots are finding applications in the military, mining, nuclear, and agriculture industries. These fields require a robot capable of operating in a highly unstructured and changing environment. Current autonomous control techniques are not robust enough to allow successful operation at all times in these environments. Teleoperation can help with many tasks but causes operator fatigue and negates much of the economic advantage of using robots by requiring one person per robot. This paper introduces a control system for mobile robots based on the concept of levels of autonomy. Levels of autonomy recognizes that control can be shared between the operator and robot in a continuous fashion from teleoperation to full autonomy. By sharing control, the robot can benefit from the operator's knowledge of the world to help extricate it from difficult situations. The robot can operate as autonomously as the situation allows, reducing operator fatigue and increasing the economic benefit by allowing a single operator to control multiple robots simultaneously. This paper presents a levels of autonomy control system developed for use in exploration or reconnaissance tasks.

  15. Low-level stored waste inspection using mobile robots

    SciTech Connect

    Byrd, J.S.; Pettus, R.O.

    1996-06-01

    A mobile robot inspection system, ARIES (Autonomous Robotic Inspection Experimental System), has been developed for the U.S. Department of Energy to replace human inspectors in the routine, regulated inspection of radioactive waste stored in drums. The robot will roam the three-foot aisles of drums, stacked four high, making decisions about the surface condition of the drums and maintaining a database of information about each drum. A distributed system of onboard and offboard computers will provide versatile, friendly control of the inspection process. This mobile robot system, based on a commercial mobile platform, will improve the quality of inspection, generate required reports, and relieve human operators from low-level radioactive exposure. This paper describes and discusses primarily the computer and control processes for the system.

  16. Hands-free operation of a small mobile robot

    SciTech Connect

    Amai, Wendy A.; Fahrenholtz, Jill C.; Leger, Chris L.

    2000-03-14

    The Intelligent Systems and Robotics Center of Sandia National Laboratories has an ongoing research program in advanced user interfaces. As part of this research, promising new transduction devices, particularly hands-free devices, are being explored for the control of mobile and floor-mounted robotic systems. Brainwave control has been successfully demonstrated by other researchers in a variety of fields. In the research described here, Sandia developed and demonstrated a proof-of-concept brainwave-controlled mobile robot system. Preliminary results were encouraging. Additional work required to turn this into a reliable, fieldable system for mobile robotic control is identified. Used in conjunction with other controls, brainwave control could be an effective control method in certain circumstances.

  17. Google glass-based remote control of a mobile robot

    NASA Astrophysics Data System (ADS)

    Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe

    2016-05-01

    In this paper, we present an approach to remote control of a mobile robot via a Google Glass, a multi-function, compact wearable device. This device provides a new human-machine interface (HMI) for controlling a robot without the need for a regular computer monitor, because the Google Glass micro projector is able to display live video of the robot's environment. To do so, we first develop a protocol to establish a Wi-Fi connection between Google Glass and a robot, and then implement five types of robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are controlled by sliding and clicking the touchpad located on the right side of the temple. In order to demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.

  18. Hazardous materials emergency response mobile robot

    NASA Technical Reports Server (NTRS)

    Stone, Henry W. (Inventor); Lloyd, James (Inventor); Alahuzos, George (Inventor)

    1992-01-01

    A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.

  19. Infrared Sensor System for Mobile-Robot Positioning in Intelligent Spaces

    PubMed Central

    Gorostiza, Ernesto Martín; Galilea, José Luis Lázaro; Meca, Franciso Javier Meca; Monzú, David Salido; Zapata, Felipe Espinosa; Puerto, Luis Pallarés

    2011-01-01

    The aim of this work was to position a Mobile Robot in an Intelligent Space, and this paper presents a sensorial system for measuring differential phase-shifts in a sinusoidally modulated infrared signal transmitted from the robot. Differential distances were obtained from these phase-shifts, and the position of the robot was estimated by hyperbolic trilateration. Due to the extremely severe trade-off between SNR, angle (coverage) and real-time response, a very accurate design and device selection was required to achieve good precision with wide coverage and acceptable robot speed. An I/Q demodulator was used to measure phases with one-stage synchronous demodulation to DC. A complete set of results from real measurements, both for distance and position estimations, is provided to demonstrate the validity of the system proposed, comparing it with other similar indoor positioning systems. PMID:22163907
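
    The position computation described above reduces, in essence, to solving for the point whose differential distances to the fixed detectors best match the measured phase-derived values. The sketch below is a generic least-squares hyperbolic trilateration with made-up beacon positions and noise levels, not the paper's calibrated sensor chain.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical detector (beacon) positions in metres.
beacons = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 4.0], [0.0, 4.0]])

def differential_distances(p):
    """Range to each beacon minus range to beacon 0 (the hyperbolic measurements)."""
    d = np.linalg.norm(beacons - p, axis=1)
    return d[1:] - d[0]

true_pos = np.array([2.5, 1.5])
meas = differential_distances(true_pos) + np.random.normal(0, 0.01, 3)   # ~1 cm noise

# Non-linear least squares: find the position whose predicted differential
# distances best match the measurements.
sol = least_squares(lambda p: differential_distances(p) - meas, x0=np.array([3.0, 2.0]))
print("estimated position:", sol.x)
```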

  20. Soft Robots: Manipulation, Mobility, and Fast Actuation

    NASA Astrophysics Data System (ADS)

    Shepherd, Robert; Ilievski, Filip; Choi, Wonjae; Stokes, Adam; Morin, Stephen; Mazzeo, Aaron; Kramer, Rebecca; Majidi, Carmel; Wood, Rob; Whitesides, George

    2012-02-01

    Material innovation will be a key feature in the next generation of robots. A simple, pneumatically powered actuator composed of only soft-elastomers can perform the function of a complex arrangement of mechanical components and electric motors. This talk will focus on soft-lithography as a simple method to fabricate robots--composed of exclusively soft materials (elastomeric polymers). These robots have sophisticated capabilities: a gripper (with no electrical sensors) can manipulate delicate and irregularly shaped objects and a quadrupedal robot can walk to an obstacle (a gap smaller than its walking height) then shrink its body and squeeze through the gap using an undulatory gait. This talk will also introduce a new method of rapidly actuating soft robots. Using this new method, a robot can be caused to jump more than 30 times its height in under 200 milliseconds.

  1. On-Line Method and Apparatus for Coordinated Mobility and Manipulation of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1996-01-01

    A simple and computationally efficient approach is disclosed for on-line coordinated control of mobile robots consisting of a manipulator arm mounted on a mobile base. The effect of base mobility on the end-effector manipulability index is discussed. The base mobility and arm manipulation degrees-of-freedom are treated equally as the joints of a kinematically redundant composite robot. The redundancy introduced by the mobile base is exploited to satisfy a set of user-defined additional tasks during the end-effector motion. A simple on-line control scheme is proposed which allows the user to assign weighting factors to individual degrees-of-mobility and degrees-of-manipulation, as well as to each task specification. The computational efficiency of the control algorithm makes it particularly suitable for real-time implementations. Four case studies are discussed in detail to demonstrate the application of the coordinated control scheme to various mobile robots.
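
    One common way to realize this kind of weighted sharing between base and arm degrees of freedom is a weighted, damped least-squares resolution of the end-effector velocity. The sketch below illustrates that general idea with an invented Jacobian and weight vector; it is not the disclosed control scheme, which also handles user-defined additional tasks.

```python
import numpy as np

def coordinated_rates(J, xdot, weights, damping=0.01):
    """Joint/base rates minimising the weighted norm of qdot subject to J qdot ~= xdot."""
    W_inv = np.diag(1.0 / np.asarray(weights))          # larger weight -> that DOF moves less
    JWJt = J @ W_inv @ J.T + damping * np.eye(J.shape[0])
    return W_inv @ J.T @ np.linalg.solve(JWJt, xdot)

# Hypothetical 2-DOF planar end-effector task; 2 base DOFs (x, y) + 3 arm joints.
J = np.array([[1.0, 0.0, -0.5, -0.3, -0.1],
              [0.0, 1.0,  0.8,  0.4,  0.2]])
xdot = np.array([0.10, 0.05])                           # desired end-effector velocity (m/s)
weights = [5.0, 5.0, 1.0, 1.0, 1.0]                     # penalise base motion more than arm motion
qdot = coordinated_rates(J, xdot, weights)
print("base/arm rates:", qdot, " residual:", J @ qdot - xdot)
```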

  2. Evaluation of a Home Biomonitoring Autonomous Mobile Robot.

    PubMed

    Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei

    2016-01-01

    Increasing population age demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system has not been tested in any home living scenarios. In this study we did a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference for the activity recognition results. It was shown that the accuracy is not consistent for all the activities; that is, the mobile robot could achieve a high success rate in some activities but a poor success rate in others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of the activity recognition, due to the variability of home living daily activities and their transitional processes. The possibility of improving the recognition accuracy is also shown. PMID:27212940

  3. Evaluation of a Home Biomonitoring Autonomous Mobile Robot.

    PubMed

    Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei

    2016-01-01

    Increasing population age demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system has not been tested in any home living scenarios. In this study we did a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference for the activity recognition results. It was shown that the accuracy is not consistent for all the activities; that is, the mobile robot could achieve a high success rate in some activities but a poor success rate in others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of the activity recognition, due to the variability of home living daily activities and their transitional processes. The possibility of improving the recognition accuracy is also shown.

  4. Evaluation of a Home Biomonitoring Autonomous Mobile Robot

    PubMed Central

    Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei

    2016-01-01

    Increasing population age demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system has not been tested in any home living scenarios. In this study we did a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference for the activity recognition results. It was shown that the accuracy is not consistent for all the activities; that is, the mobile robot could achieve a high success rate in some activities but a poor success rate in others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of the activity recognition, due to the variability of home living daily activities and their transitional processes. The possibility of improving the recognition accuracy is also shown. PMID:27212940

  5. Temporal coordination of perceptual algorithms for mobile robot navigation

    SciTech Connect

    Arkin, R.C.; MacKenzie, D. (Mobile Robot Lab.)

    1994-06-01

    A methodology for integrating multiple perceptual algorithms within a reactive robotic control system is presented. A model using finite state acceptors is developed as a means for expressing perceptual processing over space and time in the context of a particular motor behavior. This model can be utilized for a wide range of perceptual sequencing problems. The feasibility of this method is demonstrated in two separate implementations. The first is in the context of mobile robot docking where the mobile robot uses four different vision and ultrasonic algorithms to position itself relative to a docking workstation over a long-range course. The second uses vision, IR beacon, and ultrasonic algorithms to park the robot next to a desired plastic pole randomly placed within an arena.

  6. Rice-obot 1: An intelligent autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R.; Ciscon, L.; Berberian, D.

    1989-01-01

    The Rice-obot I is the first in a series of Intelligent Autonomous Mobile Robots (IAMRs) being developed at Rice University's Cooperative Intelligent Mobile Robots (CIMR) lab. The Rice-obot I is mainly designed to be a testbed for various robotic and AI techniques, and a platform for developing intelligent control systems for exploratory robots. Researchers present the need for a generalized environment capable of combining all of the control, sensory and knowledge systems of an IAMR. They introduce Lisp-Nodes as such a system, and develop the basic concepts of nodes, messages and classes. Furthermore, they show how the control system of the Rice-obot I is implemented as sub-systems in Lisp-Nodes.

  7. Remote radioactive waste drum inspection with an autonomous mobile robot

    SciTech Connect

    Heckendorn, F.M.; Ward, C.R.; Wagner, D.G.

    1992-11-01

    An autonomous mobile robot is being developed to perform remote surveillance and inspection tasks on large numbers of stored radioactive waste drums. The robot will be self-guided through narrow storage aisles and will record the visual image of each viewable drum for subsequent off-line analysis and archiving. The system will remove personnel from potential exposure to radiation, perform the required inspections, and improve the ability to assess long-term trends in drum conditions.

  8. Remote radioactive waste drum inspection with an autonomous mobile robot

    SciTech Connect

    Heckendorn, F.M.; Ward, C.R.; Wagner, D.G.

    1992-01-01

    An autonomous mobile robot is being developed to perform remote surveillance and inspection tasks on large numbers of stored radioactive waste drums. The robot will be self-guided through narrow storage aisles and will record the visual image of each viewable drum for subsequent off-line analysis and archiving. The system will remove personnel from potential exposure to radiation, perform the required inspections, and improve the ability to assess long-term trends in drum conditions.

  9. Efficient Control Law Simulation for Multiple Mobile Robots

    SciTech Connect

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.; Kwok, K.S.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N^2) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
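
    The savings come from replacing the all-pairs neighbor search with a spatial tree. The sketch below shows the general idea using scipy's k-d tree and an invented dispersion rule based on the distance and bearing to the closest neighbor; the gains and the rule itself are placeholders, not the paper's control law.

```python
import numpy as np
from scipy.spatial import cKDTree

def control_step(positions, d_des=1.0, gain=0.5):
    tree = cKDTree(positions)                     # O(N log N) build per time step
    dists, idx = tree.query(positions, k=2)       # k=2: nearest neighbour other than self
    neighbour = positions[idx[:, 1]]
    d = dists[:, 1][:, None]
    bearing = (neighbour - positions) / np.maximum(d, 1e-9)
    # Move away from the neighbour when too close, toward it when too far.
    return gain * (d - d_des) * bearing

positions = np.random.rand(100000, 2) * 100.0     # 100k simulated robots
velocities = control_step(positions)
print(velocities.shape)
```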

  10. Object Transportation by Two Mobile Robots with Hand Carts.

    PubMed

    Sakuyama, Takuya; Figueroa Heredia, Jorge David; Ogata, Taiki; Hara, Tatsunori; Ota, Jun

    2014-01-01

    This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50-60%. In addition, we demonstrated the efficacy of the approach in real environments where errors occur in robot sensing and movement. PMID:27433499

  11. Object Transportation by Two Mobile Robots with Hand Carts

    PubMed Central

    Hara, Tatsunori

    2014-01-01

    This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50–60%. In addition, we demonstrated the efficacy of the approach in real environments where errors occur in robot sensing and movement. PMID:27433499

  12. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared, and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, sensor fusion is integrated to resolve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation system based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766

  13. Intelligent mobility for robotic vehicles in the army after next

    NASA Astrophysics Data System (ADS)

    Gerhart, Grant R.; Goetz, Richard C.; Gorsich, David J.

    1999-07-01

    The TARDEC Intelligent Mobility program addresses several essential technologies necessary to support the army after next (AAN) concept. Ground forces in the AAN time frame will deploy robotic unmanned ground vehicles (UGVs) in high-risk missions to avoid exposing soldiers to both friendly and unfriendly fire. Prospective robotic systems will include RSTA/scout vehicles, combat engineering/mine clearing vehicles, indirect fire artillery and missile launch platforms. The AAN concept requires high on-road and off-road mobility, survivability, transportability/deployability and low logistics burden. TARDEC is developing a robotic vehicle systems integration laboratory (SIL) to evaluate technologies and their integration into future UGV systems. Example technologies include the following: in-hub electric drive, omni-directional wheel and steering configurations, off-road tires, adaptive tire inflation, articulated vehicles, active suspension, mine blast protection, detection avoidance and evasive maneuver. This paper will describe current developments in these areas relative to the TARDEC intelligent mobility program.

  14. Mobile robot navigation and control: A case study

    SciTech Connect

    Roy, N.; Dudek, G.; Daum, M.

    1996-12-31

    Robotic systems (and in particular mobile autonomous agents) embody a complex interaction of computational processes, mechanical systems, sensors, and communications hardware. System integration can present significant difficulties to the construction of a real system, because the hardware is often built around convenience of design rather than convenience of system integration. Nonetheless, in order for robots to perform real-world tasks such as navigation, localization and exploration, the different subsystems of motion, sensing and computation must be merged into a single, realisable unit. Our group is investigating particular problems in the domain of computational perception, in the context of mobile robotics. In particular, we are concerned with environment exploration, position estimation, and map construction. We have several mobile platforms integrating different sensing modalities, which we are able to control simultaneously from a single source.

  15. Optical flow based velocity estimation for mobile robots

    NASA Astrophysics Data System (ADS)

    Li, Xiuzhi; Zhao, Guanrong; Jia, Songmin; Qin, Baoling; Yang, Ailin

    2015-02-01

    This paper presents a novel optical flow based technique to perceive the instantaneous motion velocity of mobile robots. The primary focus of this study is to determine the robot's ego-motion using the displacement field in temporally consecutive image pairs. In contrast to most previous approaches for estimating velocity, we employ a polynomial expansion based dense optical flow approach and propose a quadratic model based RANSAC refinement of flow fields to render our method more robust with respect to noise and outliers. Accordingly, techniques for geometrical transformation and interpretation of the inter-frame motion are presented. The advantages of our proposal are validated by real experimental results conducted on a Pioneer robot.
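
    In the same spirit, a dense-flow ego-motion estimate can be sketched with off-the-shelf OpenCV primitives: Farneback polynomial-expansion flow followed by a RANSAC-fitted global motion model. The example below substitutes a similarity transform for the paper's quadratic flow model, and the metric scale factors are hypothetical.

```python
import cv2
import numpy as np

def ego_motion(prev_gray, curr_gray):
    # Dense polynomial-expansion (Farneback) optical flow.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h:8, 0:w:8]                     # sub-sample the flow field
    pts0 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    pts1 = pts0 + flow[ys.ravel(), xs.ravel()]
    # RANSAC rejects flow outliers while fitting the global inter-frame motion.
    M, inliers = cv2.estimateAffinePartial2D(pts0, pts1, method=cv2.RANSAC,
                                             ransacReprojThreshold=1.0)
    dx, dy = M[0, 2], M[1, 2]                           # translation in pixels/frame
    dtheta = np.arctan2(M[1, 0], M[0, 0])               # rotation in rad/frame
    return dx, dy, dtheta

# Usage with two consecutive grayscale frames (e.g. from cv2.VideoCapture):
# dx, dy, dtheta = ego_motion(frame_k, frame_k_plus_1)
# v = np.hypot(dx, dy) * metres_per_pixel * fps         # hypothetical scale factors
```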

  16. Experimentation and concept formation by an autonomous mobile robot

    SciTech Connect

    Spelt, P.F.; deSaussure, G.; Oliver, G.; Silliman, M.

    1990-01-01

    The Center for Engineering Systems Advanced Research (CESAR) conducts basic research in the area of intelligent machines. In this paper, we describe our approach to a class of machine learning which involves autonomous concept formation using feedback from trial-and-error experimentation with the environment. Our formulation was experimentally validated on an autonomous mobile robot, which learned the task of control panel monitoring and manipulation for effective process control. Conclusions are drawn concerning the applicability of the system to a more general class of learning problems, and implications for the use of autonomous mobile robots in hostile and unknown environments are discussed. 11 refs., 7 figs.

  17. Computer Security For Mobile Robots: Attacks And Counter Strategies

    NASA Astrophysics Data System (ADS)

    Hogge, Sharon M.

    1987-02-01

    The objective of this work is to investigate the security requirements and strategies for intelligent mobile robots, perform tests to determine strengths and weaknesses of test bed platforms, and develop counter strategies to improve security of the test bed platforms. This research will discuss the implications of these results on large scale ongoing efforts in mobile robotics. Potential security threats range from accidental intrusion of the device's hardware or software by untrained personnel to deliberate "spoofing" of sensor suites by unauthorized users or enemies.

  18. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance

    PubMed Central

    Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W.

    2016-01-01

    An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller—advanced fuzzy potential field method (AFPFM)—that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001

  19. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance.

    PubMed

    Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W

    2016-01-01

    An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller--advanced fuzzy potential field method (AFPFM)--that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001

  20. Collective search by mobile robots using alpha-beta coordination

    SciTech Connect

    Goldsmith, S.Y.; Robinett, R. III

    1998-04-01

    One important application of mobile robots is searching a geographical region to locate the origin of a specific sensible phenomenon. Mapping mine fields, extraterrestrial and undersea exploration, the location of chemical and biological weapons, and the location of explosive devices are just a few potential applications. Teams of robotic bloodhounds have a simple common goal; to converge on the location of the source phenomenon, confirm its intensity, and to remain aggregated around it until directed to take some other action. In cases where human intervention through teleoperation is not possible, the robot team must be deployed in a territory without supervision, requiring an autonomous decentralized coordination strategy. This paper presents the alpha beta coordination strategy, a family of collective search algorithms that are based on dynamic partitioning of the robotic team into two complementary social roles according to a sensor based status measure. Robots in the alpha role are risk takers, motivated to improve their status by exploring new regions of the search space. Robots in the beta role are motivated to improve but are conservative, and tend to remain aggregated and stationary until the alpha robots have identified better regions of the search space. Roles are determined dynamically by each member of the team based on the status of the individual robot relative to the current state of the collective. Partitioning the robot team into alpha and beta roles results in a balance between exploration and exploitation, and can yield collective energy savings and improved resistance to sensor noise and defectors. Alpha robots waste energy exploring new territory, and are more sensitive to the effects of ambient noise and to defectors reporting inflated status. Beta robots conserve energy by moving in a direct path to regions of confirmed high status.
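
    A toy version of the role split can be written in a few lines. In the sketch below, each robot compares its status with the collective and either takes an exploratory random step (alpha) or drifts toward the best-performing robot (beta); the median-based role threshold, gains, and noise levels are invented stand-ins rather than the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(positions, status, explore_step=1.0, follow_gain=0.2):
    best = np.argmax(status)                       # robot reporting the best status so far
    median = np.median(status)
    new_pos = positions.copy()
    for i, s in enumerate(status):
        if s < median:                             # below-median status -> alpha: take risks, explore
            new_pos[i] += rng.normal(0.0, explore_step, size=2)
        else:                                      # beta: conservative, drift toward the best robot
            new_pos[i] += follow_gain * (positions[best] - positions[i])
    return new_pos

positions = rng.uniform(0, 50, size=(20, 2))
source = np.array([10.0, 40.0])
for _ in range(100):
    status = -np.linalg.norm(positions - source, axis=1)   # stronger "signal" nearer the source
    positions = step(positions, status)
print("mean distance to source:", np.linalg.norm(positions - source, axis=1).mean())
```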

  1. Tactical mobile robots for urban search and rescue

    NASA Astrophysics Data System (ADS)

    Blitch, John; Sidki, Nahid; Durkin, Tim

    2000-07-01

    Few disasters can inspire more compassion for victims and families than those involving structural collapse. Video clips of children's bodies pulled from earthquake-stricken cities and bombing sites tend to invoke tremendous grief and sorrow because of the totally unpredictable nature of the crisis and the lack of even the slightest degree of negligence (such as with those who choose to ignore storm warnings). Heartbreaking stories of people buried alive for days provide a visceral and horrific perspective on some of the greatest fears ever imagined by human beings. Current trends toward urban sprawl and increasing human discord dictate that structural collapse disasters will continue to present themselves at an alarming rate. The proliferation of domestic terrorism, HAZMAT, and biological contaminants complicates the matter further and presents a daunting problem set for Urban Search and Rescue (USAR) organizations around the world. This paper amplifies the case for robot-assisted search and rescue that was first presented during the KNOBSAR project initiated at the Colorado School of Mines in 1995. It anticipates increasing technical development in mobile robot technologies and promotes their use for a wide variety of humanitarian assistance missions. Focus is placed on development of advanced robotic systems that are employed in a complementary, tool-like fashion, as opposed to traditional robotic approaches that portend to replace humans in hazardous tasks. Operational challenges for USAR are presented first, followed by a brief history of mobile robot development. The paper then presents conformal robotics as a new design paradigm with emphasis on variable geometry and volumes. A section on robot perception follows with an initial attempt to characterize sensing in a volumetric manner. Collaborative rescue is then briefly discussed with an emphasis on marsupial operations and linked mobility. The paper concludes with an emphasis on the human-robot interface.

  2. Intelligent mobility research for robotic locomotion in complex terrain

    NASA Astrophysics Data System (ADS)

    Trentini, Michael; Beckman, Blake; Digney, Bruce; Vincent, Isabelle; Ricard, Benoit

    2006-05-01

    The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating practical, cost effective, autonomous intelligent systems capable of completing military missions in complex operating environments." The mobility requirement for ground-based mobile systems operating in urban settings must increase significantly if robotic technology is to augment human efforts in these roles and environments. The intelligence required for autonomous systems to operate in complex environments demands advances in many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation, and navigation, but the problem of locomotion in complex terrain has largely been ignored. In order to achieve its objective, the Autonomous Intelligent Systems Section is pursuing research that explores the use of intelligent mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience. These algorithms seek to exploit available world representations of the environment and the inherent dexterity of the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The primary focus of the paper is to present the intelligent mobility research within the framework of the research methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical research gap. This research will create effective intelligence to improve the mobility of ground-based mobile systems operating in urban settings to assist the Canadian Forces in their future urban operations.

  3. Curb Mounting, Vertical Mobility, and Inverted Mobility on Rough Surfaces Using Microspine-Enabled Robots

    NASA Technical Reports Server (NTRS)

    Parness, Aaron

    2012-01-01

    Three robots that extend microspine technology to enable advanced mobility are presented. First, the Durable Reconnaissance and Observation Platform (DROP) and the ReconRobotics Scout platform use a new rotary configuration of microspines to provide improved soldier-portable reconnaissance by moving rapidly over curbs and obstacles, transitioning from horizontal to vertical surfaces, climbing rough walls and surviving impacts. Next, the four-legged LEMUR robot uses new configurations of opposed microspines to anchor to both manmade and natural rough surfaces. Using these anchors as feet enables mobility in unstructured environments, from urban disaster areas to deserts and caves.

  4. A fuzzy logic controller for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Yen, John; Pfluger, Nathan

    1993-01-01

    The ability of a mobile robot system to plan and move intelligently in a dynamic system is needed if robots are to be useful in areas other than controlled environments. An example of a use for this system is to control an autonomous mobile robot in a space station, or other isolated area where it is hard or impossible for human life to exist for long periods of time (e.g., Mars). The system would allow the robot to be programmed to carry out the duties normally accomplished by a human being. Some of the duties that could be accomplished include operating instruments, transporting objects, and maintenance of the environment. The main focus of our early work has been on developing a fuzzy controller that takes a path and adapts it to a given environment. The robot only uses information gathered from the sensors, but retains the ability to avoid dynamically placed obstacles near and along the path. Our fuzzy logic controller is based on the following algorithm: (1) determine the desired direction of travel; (2) determine the allowed direction of travel; and (3) combine the desired and allowed directions in order to determine a direction that is both desired and allowed. The desired direction of travel is determined by projecting ahead to a point along the path that is closer to the goal. This gives a local direction of travel for the robot and helps to avoid obstacles.
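
    The three-step combination can be illustrated with a simple grid of candidate headings: build a fuzzy "desired" map from the look-ahead goal, a fuzzy "allowed" map from range readings, take their minimum, and pick the peak. The sketch below uses made-up membership shapes and sensor values, not the authors' controller.

```python
import numpy as np

headings = np.linspace(-np.pi, np.pi, 73)                  # candidate directions, 5 deg apart

def desired_membership(goal_heading, width=np.pi / 2):
    """Degree to which each candidate heading points toward the look-ahead goal."""
    diff = np.abs(np.angle(np.exp(1j * (headings - goal_heading))))
    return np.clip(1.0 - diff / width, 0.0, 1.0)

def allowed_membership(ranges, clearance=1.0):
    """Degree to which each candidate heading is free of nearby obstacles."""
    return np.clip(ranges / clearance, 0.0, 1.0)

def choose_heading(goal_heading, ranges):
    desired = desired_membership(goal_heading)
    allowed = allowed_membership(ranges)
    combined = np.minimum(desired, allowed)                # fuzzy AND of the two maps
    return headings[np.argmax(combined)]                   # defuzzify by taking the peak

ranges = np.full_like(headings, 3.0)
ranges[(headings > -0.3) & (headings < 0.6)] = 0.2         # obstacle roughly straight ahead
print("chosen heading (rad):", choose_heading(0.0, ranges))
```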

  5. Sensor-based navigation of air duct inspection mobile robots

    NASA Astrophysics Data System (ADS)

    Koh, Kyoungchul; Choi, H. J.; Kim, Jae-Seon; Ko, Kuk Won; Cho, Hyungsuck

    2001-02-01

    This paper deals with an image sensor system and its position estimation algorithm for autonomous duct cleaning and inspection mobile robots. For the real application, a hierarchical control structure that consists of a robot motion controller and an image sensor system is designed, considering the efficient and autonomous motion behaviors required in narrow spaces such as air ducts. The sensor system consists of a CCD camera and two laser sources that generate slit beams. The image of the structured lights is used for calculating the geometric parameters of the air ducts, which are usually designed with a rectangular section. With the acquired 3D information about the environment, the mobile robot with two differential driving wheels is able to navigate autonomously along the duct path without any human intervention. For real-time navigation, the relative position estimation of the robot is performed from the 3D image reconstructed by the sensor system. The calibration and image processing methods used for the sensor system are presented with the experimental data. The experimental results show the feasibility of sensor-based navigation, which is important for effective duct cleaning by small mobile robots.

  6. Soft mobile robots driven by foldable dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Sun, Wenjie; Liu, Fan; Ma, Ziqi; Li, Chenghai; Zhou, Jinxiong

    2016-08-01

    A cantilever beam with elastic hinge pulled antagonistically by two dielectric elastomer (DE) membranes in tension forms a foldable actuator if one DE membrane is subject to a voltage and releases part of tension. Simply placing parallel rigid bars on the prestressed DE membranes results in enhanced actuators working in a pure shear state. We report design, analysis, fabrication, and experiment of soft mobile robots that are moved by such foldable DE actuators. We describe systematic measurement of the foldable actuators and perform theoretical analysis of such actuators based on minimization of total energy, and a good agreement is achieved between model prediction and measurement. We develop two versions of prototypes of soft mobile robots driven either by two sets of DE membranes or one DE membrane and elastic springs. We demonstrate locomotion of these soft mobile robots and highlight several key design parameters that influence locomotion of the robots. A 45 g soft robot driven by a cyclic triangle voltage with amplitude 7.4 kV demonstrates maximal stroke 160 mm or maximal rolling velocity 42 mm/s. The underlying mechanics and physics of foldable DE actuators can be leveraged to develop other soft machines for various applications.

  7. Robotic personal aids for mobility and monitoring for the elderly.

    PubMed

    Spenko, Matthew; Yu, Haoyong; Dubowsky, Steven

    2006-09-01

    Two rehabilitation devices, or personal aids for mobility and monitoring (PAMM), for use by the elderly are presented. The devices are intended to delay the transition from eldercare (assisted living) facilities to nursing homes. The robotic PAMMs provide support, guidance, and health monitoring. Two experimental systems are described: a cane and a walker. Issues of mobility, sensing, and control, as well as experimental data from trials in an assisted living facility using both systems are presented.

  8. Global Localization and Concurrent Mapping for Mobile Robot on the robotic simulator ``SIMBAD''

    NASA Astrophysics Data System (ADS)

    Rachid, Boutine; Benmohamed, M.

    2009-03-01

    It has always been a great challenge for researchers to build mobile robots able to explore and navigate in real environments. In this paper, we present a global localization and concurrent mapping approach, implemented on a simulated robot and tested in an unknown virtual world. We use a particle filter to represent the posterior over the position and heading of the robot, and a Kalman filter to update the positions of landmarks. In order to demonstrate the suitability of our implementation, which is inspired by the SLAM literature, we test it on the SIMBAD simulator and illustrate some results.
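
    A compact, illustrative version of the particles-for-pose, Kalman-for-landmarks idea is sketched below. The motion and measurement noise levels, the single landmark, the simplified world-frame observation model, and the resampling scheme are all assumptions for illustration, not the implementation tested on SIMBAD.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
poses = np.zeros((N, 3))                      # x, y, heading per particle
landmark_mu = np.tile([4.0, 3.0], (N, 1))     # per-particle landmark estimate
landmark_cov = np.tile(np.eye(2) * 0.5, (N, 1, 1))

def predict(poses, v, w, dt=0.1):
    """Sample the motion model: each particle moves with noisy speed and turn rate."""
    vn = v + rng.normal(0, 0.05, N)
    wn = w + rng.normal(0, 0.02, N)
    poses[:, 0] += vn * dt * np.cos(poses[:, 2])
    poses[:, 1] += vn * dt * np.sin(poses[:, 2])
    poses[:, 2] += wn * dt
    return poses

def update(poses, landmark_mu, landmark_cov, z, R=np.eye(2) * 0.04):
    """Weight particles by a (simplified) landmark-offset observation and
    refresh each particle's landmark estimate with a Kalman gain."""
    pred = landmark_mu - poses[:, :2]                      # predicted landmark offset
    innov = z - pred
    S = landmark_cov + R
    w = np.array([np.exp(-0.5 * innov[i] @ np.linalg.solve(S[i], innov[i])) for i in range(N)])
    w += 1e-12
    w /= w.sum()
    K = np.array([landmark_cov[i] @ np.linalg.inv(S[i]) for i in range(N)])   # Kalman gains
    landmark_mu += np.einsum('nij,nj->ni', K, innov)
    landmark_cov = np.einsum('nij,njk->nik', np.eye(2) - K, landmark_cov)
    idx = rng.choice(N, N, p=w)                            # resample by weight
    return poses[idx], landmark_mu[idx], landmark_cov[idx]

poses = predict(poses, v=1.0, w=0.1)
z = np.array([3.8, 3.1])                                   # observed landmark offset (simplified)
poses, landmark_mu, landmark_cov = update(poses, landmark_mu, landmark_cov, z)
print("pose estimate:", poses.mean(axis=0))
```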

  9. Application of a Chaotic Oscillator in an Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, Esteban; Ramos-López, Hugo C.; Sánchez-Sánchez, Mauro; Pano-Azucena, Ana D.; Sánchez-Gaspariano, Luis A.; Núñez-Pérez, José C.; Camas-Anzueto, Jorge L.

    2014-05-01

    Terrain exploration robots can be of great use in critical navigation circumstances. However, the challenge is how to guarantee a control scheme that covers a full terrain area. To that end, the application of a chaotic oscillator to control the wheels of an autonomous mobile robot is introduced herein. Basically, we describe the realization of a random number generator (RNG) based on a double-scroll chaotic oscillator, which is used to guide the robot to cover a full terrain area. The resolution of the terrain exploration area is determined by both the number of bits provided by the RNG and the characteristics of the step motors. Finally, the experimental results highlight the covered area by painting the trajectories that the robot explores.
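
    The coverage idea can be imitated with any chaotic bit source. In the sketch below a logistic map stands in for the double-scroll oscillator circuit: pairs of bits select one of four headings on a grid and the fraction of visited cells is reported; the grid size, step count, and bit-extraction rule are arbitrary choices for illustration.

```python
import numpy as np

def chaotic_bits(n, x=0.123456, r=3.99):
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)              # logistic map in its chaotic regime
        bits.append(1 if x > 0.5 else 0)
    return bits

def coverage_walk(steps=20000, size=64):
    visited = np.zeros((size, size), dtype=bool)
    pos = np.array([size // 2, size // 2])
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    bits = chaotic_bits(2 * steps)
    for k in range(steps):
        idx = 2 * bits[2 * k] + bits[2 * k + 1]          # two bits -> one of four headings
        pos = np.clip(pos + moves[idx], 0, size - 1)
        visited[pos[0], pos[1]] = True
    return visited.mean()

print("fraction of terrain covered:", coverage_walk())
```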

  10. Real-time control of the ANDROS mobile robot

    SciTech Connect

    Clifford, S.; Haddox, D.; Ekdahl, D.; Tulenko, J.S.

    1994-12-31

    The standard control capabilities of REMOTEC's ANDROS Mark VA mobile robot are limited to a joystick, a joint control panel, and a television monitor. The ANDROS is equipped with one color pan-and-tilt camera and one fixed black-and-white camera. Using this standard configuration, an operator must always be present at the console for the robot to carry out even routine movements and tasks. In addition, the operator's ability to judge spatial relationships between the robot's end effector and target objects is limited by the two-dimensional camera image. Consequently, simple tasks such as grabbing an object are made difficult because of problems with depth perception and a narrow field of view.

  11. Model Predictive Obstacle Avoidance and Wheel Allocation Control of Mobile Robots Using Embedded CPU

    NASA Astrophysics Data System (ADS)

    Takahashi, Naoki; Nonaka, Kenichiro

    In this study, we propose a real-time model predictive control method for leg/wheel mobile robots which simultaneously achieves both obstacle avoidance and wheel allocation at flexible positions. The proposed method generates both the obstacle avoidance path and the dynamic wheel positions, and controls the heading angle depending on the slope of the predicted path so that the robot can keep a good balance between stability and mobility in narrow and complex spaces like indoor environments. Moreover, we reduce the computational effort of the algorithm by eliminating calls to mathematical functions in the repetitive numerical computation. Thus the proposed real-time optimization method can be applied to the low-speed on-board CPUs used in commercially produced vehicles. We conducted experiments to verify the efficacy and feasibility of a real-time implementation of the proposed method. We used a leg/wheel mobile robot which is equipped with two laser range finders to detect obstacles and an embedded CPU whose clock speed is only 80 MHz. Experiments indicate that the proposed method achieves improved obstacle avoidance compared with the previous method, in the sense that it generates an avoidance path with balanced allocation of the right and left side wheels.
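
    A very reduced receding-horizon sketch of the predict-and-select idea is given below: a unicycle model is rolled out over a short horizon for a set of candidate turn rates, and the cheapest trajectory with respect to an invented goal-plus-obstacle cost is chosen. It stands in for, and is much simpler than, the paper's leg/wheel optimization.

```python
import numpy as np

def rollout(state, v, omega, horizon=20, dt=0.1):
    """Forward-simulate a unicycle model for a constant speed and turn rate."""
    x, y, th = state
    traj = []
    for _ in range(horizon):
        x += v * dt * np.cos(th)
        y += v * dt * np.sin(th)
        th += omega * dt
        traj.append((x, y))
    return np.array(traj)

def mpc_step(state, goal, obstacles, v=0.4):
    best_cost, best_omega = np.inf, 0.0
    for omega in np.linspace(-1.0, 1.0, 21):            # candidate constant turn rates
        traj = rollout(state, v, omega)
        d_obs = np.min(np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=2))
        cost = np.linalg.norm(traj[-1] - goal) + (10.0 if d_obs < 0.3 else 0.0)
        if cost < best_cost:
            best_cost, best_omega = cost, omega
    return v, best_omega                                 # apply first input, then re-plan

obstacles = np.array([[1.0, 0.1], [1.5, -0.2]])
print(mpc_step((0.0, 0.0, 0.0), goal=np.array([3.0, 0.0]), obstacles=obstacles))
```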

  12. Mobile robots for the nuclear industry - A 1990 status report

    SciTech Connect

    Meieran, H.B.

    1990-01-01

    Mobile robots with and without manipulating arms have been available for use in radioactive environments for almost 30 yr. Their use commenced in the early 1960s with a family of mobile robots manufactured by the PAR Corporation (now the PAR division of CIMCORP). It was a tethered, two-tracked teleoperator-controlled vehicle that supported one master-slave manipulating arm. The durability of this device is continuing to be demonstrated by HERMAN, which is currently on standby availability at the Oak Ridge National Laboratory (ORNL) to respond to emergency situations by supporting mitigating actions at scenes of incidents that involve the release of radioactive material. Mobile robots are being employed in a spectrum of locations in many reactors and other nuclear installations. This paper presents the current status of the use of mobile robots in the nuclear industry and describes currently contemplated missions, with examples, that are being or will be conducted on terrestrial surfaces, underwater, in pipeline locations, and through the air.

  13. Homography-based visual servo regulation of mobile robots.

    PubMed

    Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash

    2005-10-01

    A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.

  14. Teaching Robotics Software with the Open Hardware Mobile Manipulator

    ERIC Educational Resources Information Center

    Vona, M.; Shekar, N. H.

    2013-01-01

    The "open hardware mobile manipulator" (OHMM) is a new open platform with a unique combination of features for teaching robotics software and algorithms. On-board low- and high-level processors support real-time embedded programming and motor control, as well as higher-level coding with contemporary libraries. Full hardware designs and…

  15. An emergency response mobile robot for operations in combustible atmospheres

    NASA Technical Reports Server (NTRS)

    Stone, Henry W. (Inventor); Ohm, Timothy R. (Inventor)

    1993-01-01

    A mobile, self-powered, self-contained, and remote-controlled robot is presented. The robot is capable of safely operating in a combustible atmosphere and providing information about the atmosphere to the operator. The robot includes non-sparking and non-arcing electro-mechanical and electronic components designed to prevent the robot from igniting the combustible atmosphere. The robot also includes positively pressurized enclosures that house the electromechanical and electronic components of the robot and prevent intrusion of the combustible atmosphere into the enclosures. The enclosures are interconnected such that a pressurized gas injected into any one of the enclosures is routed to all the other enclosures through the interconnections. It is preferred that one or more sealed internal channels through structures intervening between the enclosures be employed. Pressure transducers for detecting if the pressure within the enclosures falls below a predetermined level are included. The robot also has a sensing device for determining the types of combustible substances in the surrounding atmosphere, as well as the concentrations of each type of substance relative to a pre-determined lower explosive limit (LEL). In addition, the sensing device can determine the percent level of oxygen present in the surrounding atmosphere.

  16. Emergency response mobile robot for operations in combustible atmospheres

    NASA Technical Reports Server (NTRS)

    Stone, Henry W. (Inventor); Ohm, Timothy R. (Inventor)

    1995-01-01

    A mobile, self-powered, self-contained, and remote-controlled robot is presented. The robot is capable of safely operating in a combustible atmosphere and providing information about the atmosphere to the operator. The robot includes non-sparking and non-arcing electro-mechanical and electronic components designed to prevent the robot from igniting the combustible atmosphere. The robot also includes positively pressurized enclosures that house the electromechanical and electronic components of the robot and prevent intrusion of the combustible atmosphere into the enclosures. The enclosures are interconnected such that a pressurized gas injected into any one of the enclosures is routed to all the other enclosures through the interconnections. It is preferred that one or more sealed internal channels through structures intervening between the enclosures be employed. Pressure transducers for detecting if the pressure within the enclosures falls below a predetermined level are included. The robot also has a sensing device for determining the types of combustible substances in the surrounding atmosphere, as well as the concentrations of each type of substance relative to a pre-determined lower explosive limit (LEL). In addition, the sensing device can determine the percent level of oxygen present in the surrounding atmosphere.

  17. Telepresence for mobile robots in nuclear environments

    NASA Astrophysics Data System (ADS)

    McKay, Mark D.; Anderson, Matthew O.

    1996-12-01

    A growing concern with the rapid advances in technology is that robotic systems will become so complex that operators will be overwhelmed by the complexity and number of controls. Thus, there is a need within the remote and teleoperated robotics field for better man-machine interfaces. Telepresence attempts to bring real-world senses to the operator, especially where the scale and orientation of the robot are very different from those of a human operator. This paper reports on research performed at the INEL which identified and evaluated current developments in telepresence best suited for nuclear applications through a survey of national laboratories and universities and an evaluation of commercial products available in industry. The resulting telepresence system, VirtualwindoW, attempts to minimize the complexity of robot controls and to give the operator the 'feel' of the environment without actually contacting items in it. The authors of this report recommend that a prolonged-use study be conducted on the VirtualwindoW to determine and benchmark the length of time users can be safely exposed to this technology. In addition, it is proposed that a stand-alone system be developed which combines the existing multi-computer platform into a single-processor telepresence platform. The stand-alone system would provide a standard camera interface and allow the VirtualwindoW to be ported to other telerobotic systems.

  18. Hardware Development for a Mobile Educational Robot.

    ERIC Educational Resources Information Center

    Mannaa, A. M.; And Others

    1987-01-01

    Describes the development of a robot whose mainframe is essentially transparent and walks on four legs. Discusses various gaits in four-legged motion. Reports on initial trials of a full-sized model without computer-control, including smoothness of motion and actual obstacle crossing features. (CW)

  19. System safety analysis of an autonomous mobile robot

    SciTech Connect

    Bartos, R.J.

    1994-08-01

    Analysis of the safety of operating and maintaining the Stored Waste Autonomous Mobile Inspector (SWAMI) II in a hazardous environment at the Fernald Environmental Management Project (FEMP) was completed. The SWAMI II is a version of a commercial robot, the HelpMate™ robot produced by the Transitions Research Corporation, which is being updated to incorporate the systems required for inspecting mixed toxic chemical and radioactive waste drums at the FEMP. It also has modified obstacle detection and collision avoidance subsystems. The robot will autonomously travel down the aisles in storage warehouses to record images of containers and collect other data which are transmitted to an inspector at a remote computer terminal. A previous study showed the SWAMI II has economic feasibility. The SWAMI II will more accurately locate radioactive contamination than human inspectors. This thesis includes a System Safety Hazard Analysis and a quantitative Fault Tree Analysis (FTA). The objectives of the analyses are to prevent potentially serious events and to derive a comprehensive set of safety requirements from which the safety of the SWAMI II and other autonomous mobile robots can be evaluated. The Computer-Aided Fault Tree Analysis (CAFTA©) software is utilized for the FTA. The FTA shows that more than 99% of the safety risk occurs during maintenance, and that when the derived safety requirements are implemented the rate of serious events is reduced to below one event per million operating hours. Training and procedures in SWAMI II operation and maintenance provide an added safety margin. This study will promote the safe use of the SWAMI II and other autonomous mobile robots in the emerging technology of mobile robotic inspection.

  20. Searching Dynamic Agents with a Team of Mobile Robots

    PubMed Central

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

    This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimate of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to contain target agents. The algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used to decide the destination of each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments, showing the validity of the approach. PMID:23012519
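    A minimal sketch of the grid-based filter described above: the prediction step spreads probability to every cell the target could reach given its maximum speed, and the update step removes probability from cells a robot currently observes as empty. The box-blur motion model and the grid size are simplifying assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def predict(belief, reach_cells):
        """Spread probability over all cells the target can reach in one step."""
        k = 2 * reach_cells + 1
        padded = np.pad(belief, reach_cells, mode="constant")
        out = np.zeros_like(belief)
        rows, cols = belief.shape
        for i in range(rows):
            for j in range(cols):
                out[i, j] = padded[i:i + k, j:j + k].mean()   # box-blur motion model
        return out / out.sum()

    def update(belief, observed_empty):
        """Zero the cells a robot currently sees as empty, then renormalize."""
        belief = belief * ~observed_empty
        return belief / belief.sum()

    # Example: 20x20 grid, uniform prior, target moves at most one cell per step.
    belief = np.full((20, 20), 1.0 / 400)
    observed_empty = np.zeros((20, 20), dtype=bool)
    observed_empty[0:5, 0:5] = True           # area swept by one robot this step
    belief = update(predict(belief, 1), observed_empty)
    ```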

  1. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV transporting and transferring work pieces, a control computer on board the AGV, a process machine for working on work pieces, a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm, wherein the robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location, locating beacon means mounted on the process machine, wherein the locating beacon means are for locating on the process machine a place to pick up and set down work pieces, vision means, including a camera fixed in the coordinate system of the gripper means, attached to the robot arm near the gripper, such that the space between said gripper fingers lies within the vision field of said vision means, for detecting the locating beacon means, wherein the vision means provides the control computer with visual information relating to the location of the locating beacon means, from which information the computer is able to calculate the pick up and set down place on the process machine, wherein said place for picking up and setting down work pieces on the process machine is a nest means and further serves the function of holding a work piece in place while it is worked on, the robot system further comprising nest beacon means located in the nest means detectable by the vision means for providing information to the control computer as to whether or not a work piece is present in the nest means.

  2. Intelligent control and cooperation for mobile robots

    NASA Astrophysics Data System (ADS)

    Stingu, Petru Emanuel

    This work addresses current research being conducted at the Automation & Robotics Research Institute in the areas of UAV quadrotor control and heterogeneous multi-vehicle cooperation. Autonomy can be successfully achieved by a robot under the following conditions: the robot has to be able to acquire knowledge about the environment and itself, and it also has to be able to reason under uncertainty. The control system must react quickly to immediate challenges, but also has to slowly adapt and improve based on accumulated knowledge. The major contribution of this work is the transfer of ADP algorithms from the purely theoretical environment to complex real-world robotic platforms that work in real time and in uncontrolled environments. Many solutions are adopted from those present in nature because they have been proven to be close to optimal in very different settings. For the control of a single platform, reinforcement learning algorithms are used to design suboptimal controllers for a class of complex systems that can be conceptually split into local loops with simpler dynamics and relatively weak coupling to the rest of the system. Optimality is enforced by having a global critic, but the curse of dimensionality is avoided by using local actors and intelligent pre-processing of the information used for learning the optimal controllers. The system model is used for constructing the structure of the control system, but on top of that the adaptive neural networks that form the actors use the knowledge acquired during normal operation to get closer to optimal control. In real-world experiments, efficient learning is a strong requirement for success. This is accomplished by using an approximation of the system model to focus the learning on equivalent configurations of the state space. Due to the availability of only local data for training, neural networks with local activation functions are implemented. For the control of a formation

  3. Worker selection of safe speed and idle condition in simulated monitoring of two industrial robots.

    PubMed

    Karwowski, W; Rahimi, M

    1991-05-01

    Industrial robots often operate at high speed, with unpredictable motion patterns and erratic idle times. Serious injuries and deaths have occurred due to operator misperception of these robot design and performance characteristics. The main objective of the research project was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. Twenty-four subjects participated in the study. All subjects were chosen from local industries, and had had considerable exposure to robots and other automated equipment in their working experience. Experiment 1 investigated the maximum speed of robot arm motions that workers, who were experienced with operation of industrial robots, judged to be 'safe' for monitoring tasks. It was found that the selection of safe speed depends on the size of the robot and the speed with which the robot begins its operation. Speeds of less than 51 cm/s and 63 cm/s for large and small robots, respectively, were perceived as safe, i.e., ones that did not result in workers feeling uneasy or endangered when working in close proximity to the robot and monitoring its actions. Experiment 2 investigated the minimum value of robot idle time (inactivity) perceived by industrial workers as system malfunction, and an indication of the 'safe-to-approach' condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers to be a result of system malfunction. About 20% of the workers waited only 10 s or less before deciding that the robot had stopped because of system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for operational limitations of robot systems are discussed.

  4. Mobile Robotic Teams Applied to Precision Agriculture

    SciTech Connect

    M.D. McKay; M.O. Anderson; N.S. Flann; R.A. Kinoshita; R.W. Gunderson; W.D. Willis

    1999-04-01

    The Idaho National Engineering and Environmental Laboratory (INEEL) and Utah State University's Center for Self-Organizing and Intelligent Systems (CSOIS) have developed a team of autonomous robotic vehicles applicable to precision agriculture. A unique technique has been developed to plan, coordinate, and optimize missions in large structured environments for these autonomous vehicles in real-time. Two generic tasks are supported: 1) Driving to a precise location, and 2) Sweeping an area while activating on-board equipment. Sensor data and task achievement data are shared among the vehicles, enabling them to cooperatively adapt to changing environmental, vehicle, and task conditions. This paper discusses the development of the autonomous robotic team, details of the mission-planning algorithm, and successful field demonstrations at the INEEL.

  5. Mobile Robotic Teams Applied to Precision Agriculture

    SciTech Connect

    Anderson, Matthew Oley; Kinoshita, Robert Arthur; Mckay, Mark D; Willis, Walter David; Gunderson, R.W.; Flann, N.S.

    1999-04-01

    The Idaho National Engineering and Environmental Laboratory (INEEL) and Utah State University’s Center for Self-Organizing and Intelligent Systems (CSOIS) have developed a team of autonomous robotic vehicles applicable to precision agriculture. A unique technique has been developed to plan, coordinate, and optimize missions in large structured environments for these autonomous vehicles in real-time. Two generic tasks are supported: 1) Driving to a precise location, and 2) Sweeping an area while activating on-board equipment. Sensor data and task achievement data are shared among the vehicles, enabling them to cooperatively adapt to changing environmental, vehicle, and task conditions. This paper discusses the development of the autonomous robotic team, details of the mission-planning algorithm, and successful field demonstrations at the INEEL.

  6. SIMON: A mobile robot for floor contamination surveys

    SciTech Connect

    Dudar, E.; Teese, G.; Wagner, D.

    1991-12-31

    The Robotics Development group at the Savannah River Site is developing an autonomous robot to perform radiological surveys of potentially contaminated floors. The robot scans floors at a speed of one-inch/second and stops, sounds an alarm, and flashes lights when contamination in a certain area is detected. The contamination of interest here is primarily alpha and beta-gamma. The contamination levels are low to moderate. The robot, a Cybermotion K2A, is radio controlled, uses dead reckoning to determine vehicle position, and docks with a charging station to replenish its batteries and calibrate its position. It has an ultrasonic collision avoidance system as well as two safety bumpers that will stop the robot's motion when they are depressed. Paths for the robot are preprogrammed and the robot's motion can be monitored on a remote screen which shows a graphical map of the environment. The radiation instrument being used is an Eberline RM22A monitor. This monitor is microcomputer based with a serial I/O interface for remote operation. Up to 30 detectors may be configured with the RM22A. For our purposes, two downward-facing gas proportional detectors are used to scan floors, and one upward-facing detector is used for radiation background compensation. SIMON is interfaced with the RM22A in such a way that it scans the floor surface at one-inch/second, and if contamination is detected, the vehicle stops, alarms, and activates a voice synthesizer. Future development includes using the contamination data collected to provide a graphical contour map of a contaminated area. 3 refs.

  7. Recognizing and interpreting gestures on a mobile robot

    SciTech Connect

    Kortenkamp, D.; Huber, E.; Bonasso, R.P.

    1996-12-31

    Gesture recognition is an important skill for robots that work closely with humans. Gestures help to clarify spoken commands and are a compact means of relaying geometric information. We have developed a real-time, three-dimensional gesture recognition system that resides on-board a mobile robot. Using a coarse three-dimensional model of a human to guide stereo measurements of body parts, the system is capable of recognizing six distinct gestures made by an unadorned human in an unaltered environment. An active vision approach focuses the vision system's attention on small, moving areas of space to allow for frame rate processing even when the person and/or the robot are moving. This paper describes the gesture recognition system, including the coarse model and the active vision approach. This paper also describes how the gesture recognition system is integrated with an intelligent control architecture to allow for complex gesture interpretation and complex robot action. Results from experiments with an actual mobile robot are given.

  8. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily-life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors assembled under the bottom of the robot in an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation of inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
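    The fusion step described above can be illustrated very simply: whenever a fiber-optic sensor reports the crossing of a vertical or horizontal grid line, the corresponding coordinate of the dead-reckoned pose is snapped to the nearest line. The tile size and the assumption that the detecting sensor sits at the robot centroid are illustrative simplifications; the paper accounts for the triangular sensor layout and the error bound.

    ```python
    def correct_on_crossing(pose, line_dir, tile=0.3):
        """Snap one coordinate of (x, y, theta) when a grid-line crossing is detected.

        line_dir is 'vertical' (line of constant x) or 'horizontal' (constant y).
        The crossed line is assumed to be the one nearest the dead-reckoned estimate,
        and the sensor is assumed to sit at the robot centroid (both simplifications).
        """
        x, y, theta = pose
        if line_dir == 'vertical':
            x = round(x / tile) * tile
        else:
            y = round(y / tile) * tile
        return (x, y, theta)

    print(correct_on_crossing((1.27, 0.62, 0.1), 'vertical', tile=0.3))  # x snaps to 1.2
    ```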

  9. A natural-language interface to a mobile robot

    NASA Technical Reports Server (NTRS)

    Michalowski, S.; Crangle, C.; Liang, L.

    1987-01-01

    The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.

  10. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots.

    PubMed

    Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar

    2004-07-01

    In this paper a global vision scheme for the estimation of positions and orientations of mobile robots is presented. It is applied to the robot soccer application, which is a fast, dynamic game and therefore needs an efficient and robust vision system. The vision system is also generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast visual tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes. At the same time, a segmentation algorithm is used to find corresponding regions belonging to one of the classes. In the second step, all the regions are examined. Selection of the ones that are part of the observed object is made by means of simple logic procedures. The novelty is focused on optimization of the processing time needed to finish the estimation of possible object positions. Better results of the vision system are achieved by implementing camera calibration and a shading correction algorithm. The former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
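    The two steps described above (pixel classification into color classes, then grouping into regions) can be sketched as follows; the HSV input, the hue thresholds, and the use of SciPy's connected-component labelling are assumptions made for illustration rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def classify_pixels(image_hsv, hue_ranges):
        """Assign each pixel a class id based on its hue; 0 means 'no class'."""
        classes = np.zeros(image_hsv.shape[:2], dtype=np.uint8)
        hue = image_hsv[..., 0]
        for class_id, (lo, hi) in hue_ranges.items():
            classes[(hue >= lo) & (hue < hi)] = class_id
        return classes

    def find_regions(classes, class_id, min_pixels=20):
        """Group pixels of one class into connected regions and return their centroids."""
        labeled, n = ndimage.label(classes == class_id)
        centroids = []
        for region in range(1, n + 1):
            ys, xs = np.nonzero(labeled == region)
            if len(xs) >= min_pixels:             # reject small specks of noise
                centroids.append((xs.mean(), ys.mean()))
        return centroids

    # Hypothetical hue ranges (OpenCV-style 0-179): 1 = team marker, 2 = ball.
    hue_ranges = {1: (100, 130), 2: (20, 35)}
    ```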

  11. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.

  12. Multiagent collaboration for experimental calibration of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Vachon, Bertrand; Berge-Cherfaoui, Veronique

    1991-03-01

    This paper presents an action in mission SOCRATES whose aim is the development of a self-calibration method for an autonomous mobile robot. The robot has to determine the precise location of the coordinate system shared by its sensors. Knowledge of this system is a sine qua non condition for efficient multisensor fusion and autonomous navigation in an unknown environment. But, as perceptions and motions are not accurate, this knowledge can only be achieved by multisensor fusion. The application described highlights this kind of problem. Multisensor fusion is used here especially in its symbolic aspect. Useful knowledge includes both numerous data coming from various sensors and suitable ways to process these data. A blackboard architecture has been chosen to manage useful information. Knowledge sources are called agents, and they implement physical sensors (perceptors or actuators) as well as logical sensors (high-level data processors). The problem to solve is self-calibration, which includes the determination of the coordinate system R of the robot and the transformations necessary to convert data from sensor reference frames to R. The origin of R has been chosen to be O, the rotation center of the robot. As its actual location may vary due to robot or ground characteristics, an experimental determination of O is attempted. A strategy for measuring distances in approximate positions is proposed. This strategy must take into account the fact that motions of the robot as well as perceptions may be inaccurate. Results obtained during experiments and future extensions of the system are discussed.

  13. Some Novel Design Principles for Collective Behaviors in Mobile Robots

    SciTech Connect

    OSBOURN, GORDON C.

    2002-09-01

    We present a set of novel design principles to aid in the development of complex collective behaviors in fleets of mobile robots. The key elements are: the use of a graph algorithm that we have created, with certain proven properties, that guarantee scalable local communications for fleets of arbitrary size; the use of artificial forces to simplify the design of motion control; the use of certain proximity values in the graph algorithm to simplify the sharing of robust navigation and sensor information among the robots. We describe these design elements and present a computer simulation that illustrates the behaviors readily achievable with these design tools.
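    The "artificial forces" element can be pictured with a pairwise spring-like rule: neighbors beyond a desired spacing attract the robot, closer neighbors repel it, and the sum of the contributions gives the commanded motion direction. The force law and constants below are illustrative, not the authors' design.

    ```python
    import numpy as np

    def artificial_force(positions, i, desired=2.0, k_attract=0.5, k_repel=1.5):
        """Net virtual force on robot i from all other robots in the fleet."""
        force = np.zeros(2)
        for j, p in enumerate(positions):
            if j == i:
                continue
            delta = p - positions[i]
            dist = np.linalg.norm(delta)
            if dist < 1e-6:
                continue
            direction = delta / dist
            if dist > desired:
                force += k_attract * (dist - desired) * direction   # cohesion
            else:
                force -= k_repel * (desired - dist) * direction     # collision avoidance
        return force

    positions = np.array([[0.0, 0.0], [1.0, 0.5], [3.0, 2.0]])
    print(artificial_force(positions, 0))   # motion command for robot 0
    ```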

  14. An autonomous mobile robot to perform waste drum inspections

    SciTech Connect

    Peterson, K.D.; Ward, C.R.

    1994-03-01

    A mobile robot is being developed by the Savannah River Technology Center (SRTC) Robotics Group of Westinghouse Savannah River Company (WSRC) to perform mandated inspections of waste drums stored in warehouse facilities. The system will reduce personnel exposure and create accurate, high-quality documentation to ensure regulatory compliance. Development work is being coordinated among several DOE, academic, and commercial entities in accordance with DOE's technology transfer initiative. The prototype system was demonstrated in November of 1993. A system is now being developed for field trials at the Fernald site.

  15. Locomotion of an all-terrain mobile robot

    NASA Astrophysics Data System (ADS)

    Iagolnitzer, M.; Richard, F.; Samson, J. F.; Tournassoud, P.

    The authors introduce a framework and prospective solutions for intelligent locomotion, defined as the ability of a mobile robot to cross over obstacles along a path roughly determined either through teleoperation or by a navigation path-finder. They then present a simple but efficient control scheme derived from these concepts, taking into account ground clearance, vehicle safety, and possible occlusions in the vision field. This control scheme is applied to Rami, a robot with four tiltable tracks equipped with force sensors, an inertial reference system, a laser-stripe range finder, and extensive real-time computing facilities based on a decentralized architecture.

  16. Active object programming for military autonomous mobile robot software prototyping

    NASA Astrophysics Data System (ADS)

    Cozien, Roger F.

    2001-10-01

    While designing mobile robots, we think that the prototyping phase is really critical. Good and clever choices have to be made. Indeed, we may not easily upgrade such robots, and most of all, when the robot is on its own, any change in both the software and the physical body is going to be very difficult, if not impossible. Thus, a great effort has to be made when prototyping the robot. Furthermore, the kind of programming is very important. If your programming model is not expressive enough, you may experience a great deal of difficulty adding all the features you want in order to give your robot reactiveness and decision-making autonomy. Moreover, designing and prototyping the on-board software of a reactive robot brings other difficulties. Reactivity is not merely a matter of rapidity: a reactive system is a system able to respond to a huge range of situations for which it does not have a schedule. In other words, the robot does not know when a particular situation may occur and, above all, what it would be doing at that time and what its internal state would be. This kind of robot must be able to take a decision and to act even if it does not have all the contextual information. To do so, we use a computer language named oRis featuring object and active object oriented programming, but also parallel and dynamic code (the code can be changed during its own execution). This last point has been made possible because oRis is fully interpreted. However, oRis may call fully compiled code, but also Prolog and Java code. An oRis program may be distributed on several computers using TCP/IP network connections. The main issue in this paper is to show how active object oriented programming, as a modern extension of object oriented programming, may help us in designing autonomous mobile robots. Based on a fully parallel software programming model, an active object code allows us to give many features to a robot, and to easily solve

  17. Active objects programming for military autonomous mobile robots software prototyping

    NASA Astrophysics Data System (ADS)

    Cozien, Roger F.

    2001-09-01

    While designing mobile robots, we think that the prototyping phase is really critical. Good and clever choices have to be made. Indeed, we may not easily upgrade such robots, and most of all, when the robot is on its own, any change in both the software and the physical body is going to be very difficult, if not impossible. Thus, a great effort has to be made when prototyping the robot. Furthermore, the kind of programming is very important. If your programming model is not expressive enough, you may experience a great deal of difficulty adding all the features you want in order to give your robot reactiveness and decision-making autonomy. Moreover, designing and prototyping the on-board software of a reactive robot brings other difficulties. Reactivity is not merely a matter of rapidity: a reactive system is a system able to respond to a huge range of situations for which it does not have a schedule. In other words, the robot does not know when a particular situation may occur and, above all, what it would be doing at that time and what its internal state would be. This kind of robot must be able to take a decision and to act even if it does not have all the contextual information. To do so, we use a computer language named oRis featuring object and active object oriented programming, but also parallel and dynamic code (the code can be changed during its own execution). This last point has been made possible because oRis is fully interpreted. However, oRis may call fully compiled code, but also Prolog and Java code. An oRis program may be distributed on several computers using TCP/IP network connections. The main issue in this paper is to show how active object oriented programming, as a modern extension of object oriented programming, may help us in designing autonomous mobile robots. Based on a fully parallel software programming model, an active object code allows us to give many features to a robot, and to easily solve

  18. Mobile robots: An assessment of business opportunities in an emerging industry

    SciTech Connect

    Miller, R.K.

    1987-01-01

    The mobile robotics industry is one that is certain to reach the billion-dollar level in the early 1990s. The authors' analysis finds that at least eight areas of application have the potential of exceeding the manufacturing AGVS market by the turn of the century: service and maintenance robots, medical robots, agricultural robots, military robots, office automation, electric utilities, space robots, and construction/mining. This report describes these and other applications, and reviews the mobile robotic products of 45 companies. Leading research is also assessed, and a market forecast is presented.

  19. Intelligent control in mobile robotics: the PANORAMA project

    NASA Astrophysics Data System (ADS)

    Greenway, Phil

    1994-03-01

    The European Community's strategic research initiative in information technology has been in place for seven years. A good example of the pan-European collaborative projects conducted under this initiative is PANORAMA: Perception and Navigation for Autonomous Mobile Robot Applications. This four-and-a-half-year project, completed in October 1993, aimed to prove the feasibility of an autonomous mobile robotic system replacing a human-operated vehicle working outdoors in a partially structured environment. The autonomous control of a mobile rock drilling machine was chosen as a challenging and representative test scenario. This paper presents an overview of intelligent mobile robot control architectures. Goals and objectives of the project are described, together with the makeup of the consortium and the roles of the members within it. The main technical achievements from PANORAMA are then presented, with emphasis given to the problems of realizing intelligent control. In particular, the planning and replanning of a mission, and the corresponding architectural choices and infrastructure required to support the chosen task oriented approach, are discussed. Specific attention is paid to the functional decomposition of the system, and how the requirements for `intelligent control' impact on the organization of the identified system components. Future work and outstanding problems are considered in some concluding remarks.

  20. Embodied Computation: An Active-Learning Approach to Mobile Robotics Education

    ERIC Educational Resources Information Center

    Riek, L. D.

    2013-01-01

    This paper describes a newly designed upper-level undergraduate and graduate course, Autonomous Mobile Robots. The course employs active, cooperative, problem-based learning and is grounded in the fundamental computational problems in mobile robotics defined by Dudek and Jenkin. Students receive a broad survey of robotics through lectures, weekly…

  1. RoCoMAR: Robots' Controllable Mobility Aided Routing and Relay Architecture for Mobile Sensor Networks

    PubMed Central

    Van Le, Duc; Oh, Hoon; Yoon, Seokhoon

    2013-01-01

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay. PMID:23881134
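    A toy sketch of the link-reinforcement idea: find the weakest hop on the route (approximated here as the longest one, assuming quality falls with distance) and send the robotic relay to a point between its endpoints. The midpoint placement is a simplification of the paper's throughput-maximizing optimization, and all values are hypothetical.

    ```python
    import math

    def weakest_link(path_positions):
        """Index of the hop with the greatest length (lowest assumed link quality)."""
        lengths = [math.dist(a, b) for a, b in zip(path_positions, path_positions[1:])]
        return max(range(len(lengths)), key=lengths.__getitem__)

    def relay_position(path_positions):
        """Place a relay halfway along the weakest hop (simplified placement rule)."""
        i = weakest_link(path_positions)
        (x1, y1), (x2, y2) = path_positions[i], path_positions[i + 1]
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    path = [(0, 0), (10, 2), (40, 5), (55, 7)]   # source, intermediate nodes, sink
    print(relay_position(path))                  # relay moves into the longest hop
    ```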

  2. RoCoMAR: robots' controllable mobility aided routing and relay architecture for mobile sensor networks.

    PubMed

    Le, Duc Van; Oh, Hoon; Yoon, Seokhoon

    2013-07-05

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay.

  3. Vision based object pose estimation for mobile robots

    NASA Technical Reports Server (NTRS)

    Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry

    1994-01-01

    Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints are chosen from the typical pose of most man-made signs, such as the sign standing vertically and having dimensions of known size. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments, and under certain marker orientations the estimation of orientation has proven accurate to within 2 degrees and distance estimation to within 0.3 meters.

  4. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space, and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line following algorithm images two windows, locates their centroids, and, with the knowledge that the points lie on the ground plane, establishes a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
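    A minimal sketch of the geometry described above: the centroids found in the two image windows are projected onto the ground plane (here via an assumed, pre-calibrated homography), giving the line's angle and its lateral offset from the robot centroid, which feed a proportional steering law. The homography, frame conventions, and gains are assumptions.

    ```python
    import numpy as np

    def image_to_ground(pt, H):
        """Project an image point onto the ground plane with a calibrated 3x3 homography H."""
        v = H @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]

    def line_error(centroid_near, centroid_far, H):
        """Angle of the line and its signed lateral offset from the robot centroid (origin)."""
        p1 = image_to_ground(centroid_near, H)
        p2 = image_to_ground(centroid_far, H)
        d = p2 - p1
        u = d / np.linalg.norm(d)
        angle = np.arctan2(u[1], u[0])              # relative to the robot's forward (x) axis
        offset = u[0] * (-p1[1]) - u[1] * (-p1[0])  # signed distance from origin to the line
        return angle, offset

    def steering_command(angle, offset, k_ang=1.0, k_off=0.5):
        """Proportional steering toward zero heading error and zero offset (gains illustrative)."""
        return -(k_ang * angle + k_off * offset)
    ```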

  5. Real-time optical flow estimation on a GPU for a skid-steered mobile robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-04-01

    Accurate egomotion estimation is required for mobile robot navigation. Often the egomotion is estimated using optical flow algorithms. For an accurate estimation of optical flow, most modern algorithms require high memory resources and processor speed. However, the simple single-board computers that control the motion of the robot usually do not provide such resources. On the other hand, most modern single-board computers are equipped with an embedded GPU that could be used in parallel with a CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used. Layer segmentation is performed using a graph-cut algorithm with a time-derivative-based energy function. Such an approach makes the algorithm both fast and robust in low-light and low-texture conditions. The algorithm's implementation for a Raspberry Pi Model B computer is discussed. For evaluation of the algorithm, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated using the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
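    The per-patch computation at the heart of phase-correlation flow can be sketched in a few lines: the shift between two patches is read off the peak of the inverse FFT of the normalized cross-power spectrum. The GPU implementation, layer segmentation, and any sub-pixel refinement are omitted; this NumPy version, with assumed patch sizes, only illustrates the principle.

    ```python
    import numpy as np

    def phase_correlation_shift(patch_a, patch_b, eps=1e-9):
        """Estimate the integer (dy, dx) shift of patch_a relative to patch_b."""
        Fa = np.fft.fft2(patch_a)
        Fb = np.fft.fft2(patch_b)
        cross_power = Fa * np.conj(Fb)
        cross_power /= np.abs(cross_power) + eps      # keep phase information only
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap peaks in the upper half of each axis back to negative shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    # Synthetic check: the shift of a rolled copy of the patch is recovered.
    rng = np.random.default_rng(0)
    patch = rng.random((64, 64))
    shifted = np.roll(patch, shift=(3, -2), axis=(0, 1))
    print(phase_correlation_shift(shifted, patch))    # expected output: (3, -2)
    ```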

  6. Classifying and recovering from sensing failures in autonomous mobile robots

    SciTech Connect

    Murphy, R.R.; Hershberger, D.

    1996-12-31

    This paper presents a characterization of sensing failures in autonomous mobile robots, a methodology for classification and recovery, and a demonstration of this approach on a mobile robot performing landmark navigation. A sensing failure is any event leading to defective perception, including sensor malfunctions, software errors, environmental changes, and errant expectations. The approach demonstrated in this paper exploits the ability of the robot to interact with its environment to acquire additional information for classification (i.e., active perception). A Generate and Test strategy is used to generate hypotheses to explain the symptom resulting from the sensing failure. The recovery scheme replaces the affected sensing processes with an alternative logical sensor. The approach is implemented as the Sensor Fusion Effects Exception Handling (SFX-EH) architecture. The advantages of SFX-EH are that it requires only a partial causal model of sensing failure, the control scheme strives for a fast response, tests are constructed so as to prevent confounding from collaborating sensors which have also failed, and the logical sensor organization allows SFX-EH to be interfaced with the behavioral level of existing robot architectures.

  7. The Challenge of Planning and Execution for Spacecraft Mobile Robots

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Gawdiak, Yuri; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The need for spacecraft mobile robots continues to grow. These robots offer the potential to increase the capability, productivity, and duration of space missions while decreasing mission risk and cost. Spacecraft Mobile Robots (SMRs) can serve a number of functions inside and outside of spacecraft from simpler tasks, such as performing visual diagnostics and crew support, to more complex tasks, such as performing maintenance and in-situ construction. One of the predominant challenges to deploying SMRs is to reduce the need for direct operator interaction. Teleoperation is often not practical due to the communication latencies incurred because of the distances involved and in many cases a crewmember would directly perform a task rather than teleoperate a robot to do it. By integrating a mixed-initiative constraint-based planner with an executive that supports adjustably autonomous control, we intend to demonstrate the feasibility of autonomous SMRs by deploying one inside the International Space Station (ISS) and demonstrate in simulation one that operates outside of the ISS. This paper discusses the progress made at NASA towards this end, the challenges ahead, and concludes with an invitation to the research community to participate.

  8. An iterative learning controller for nonholonomic mobile robots

    SciTech Connect

    Oriolo, G.; Panzieri, S.; Ulivi, G.

    1998-09-01

    The authors present an iterative learning controller that applies to nonholonomic mobile robots, as well as other systems that can be put in chained form. The learning algorithm exploits the fact that chained-form systems are linear under piecewise-constant inputs. The proposed control scheme requires the execution of a small number of experiments to drive the system to the desired state in finite time, with nice convergence and robustness properties with respect to modeling inaccuracies as well as disturbances. To avoid the necessity of exactly reinitializing the system at each iteration, the basic method is modified so as to obtain a cyclic controller, by which the system is cyclically steered through an arbitrary sequence of states. As a case study, a carlike mobile robot is considered. Both simulation and experimental results are reported to show the performance of the method.
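    Because the trial-to-trial map of a chained-form system under piecewise-constant inputs is linear, the learning controller can be pictured as the classic ILC recursion u_{k+1} = u_k + L(y_d - y_k), which contracts the terminal error whenever ||I - GL|| < 1. The toy example below uses a random stand-in matrix G and a pseudo-inverse learning gain purely for illustration; it is not the authors' exact scheme.

    ```python
    import numpy as np

    # Under piecewise-constant inputs the trial map is linear: y = G @ u for some
    # (unknown in practice) matrix G. The ILC law corrects u from trial to trial.
    rng = np.random.default_rng(1)
    G = rng.random((3, 5))                    # stand-in trial map (toy example)
    y_desired = np.array([1.0, -0.5, 0.3])    # desired terminal state

    L = 0.5 * np.linalg.pinv(G)               # learning gain; gives I - G @ L = 0.5 * I here
    u = np.zeros(5)                           # piecewise-constant input samples
    for trial in range(20):                   # each pass is one "experiment"
        y = G @ u                             # run the trial
        u = u + L @ (y_desired - y)           # ILC update from the terminal error

    print(np.allclose(G @ u, y_desired, atol=1e-3))   # True: desired state reached
    ```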

  9. A user interface for mobile robotized tele-echography

    NASA Astrophysics Data System (ADS)

    Triantafyllidis, G. A.; Thomos, N.; Canero, C.; Vieyres, P.; Strintzis, M. G.

    2006-12-01

    Ultrasound imaging allows the evaluation of the degree of emergency of a patient. However, in many situations no experienced sonographer is available to perform such an echography. To cope with this issue, the OTELO project ("mObile Tele-Echography using an ultra-Light rObot") aims to develop a fully integrated end-to-end mobile tele-echography system using an ultralight, remotely controlled six degree-of-freedom (DOF) robot. In this context, this paper deals with the user interface environment of the OTELO system, composed of the following parts: an ultrasound video transmission system providing real-time images of the scanned area at each moment, an audio/video conference to communicate with the paramedical assistant and the patient, and finally a virtual reality environment providing visual and haptic feedback to the expert while capturing the expert's hand movements with a one-DOF hands-free input device.

  10. Dynamic map building for an autonomous mobile robot

    SciTech Connect

    Leonard, J.J.; Durrant-Whyte, H.F.; Cox, I.J.

    1992-08-01

    This article presents an algorithm for autonomous map building and maintenance for a mobile robot. The authors believe that mobile robot navigation can be treated as a problem of tracking geometric features that occur naturally in the environment. They represent each feature in the map by a location estimate (the feature state vector) and two distinct measures of uncertainty: a covariance matrix to represent uncertainty in feature location, and a credibility measure to represent their belief in the validity of the feature. During each position update cycle, predicted measurements are generated for each geometric feature in the map and compared with actual sensor observations. Successful matches cause a feature's credibility to be increased. Unpredicted observations are used to initialize new geometric features, while unobserved predictions result in a geometric feature's credibility being decreased. They also describe experimental results obtained with the algorithm that demonstrate successful map building using real sonar data.
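    A compact sketch of the per-feature bookkeeping described above: each feature stores a location estimate, a covariance, and a credibility score; gated matches trigger a Kalman update and raise the credibility, while unobserved predictions lower it. The direct (identity) observation model and the gating threshold are simplifying assumptions.

    ```python
    import numpy as np

    class MapFeature:
        """A point feature with a location estimate, covariance, and credibility score."""

        def __init__(self, xy, cov):
            self.xy = np.asarray(xy, dtype=float)
            self.cov = np.asarray(cov, dtype=float)
            self.credibility = 1

        def matches(self, z, R, gate=9.21):
            """Chi-square gate (about 99% for 2 DOF) on an observed point z with noise R."""
            innov = z - self.xy
            S = self.cov + R
            return innov @ np.linalg.solve(S, innov) < gate

        def update(self, z, R):
            """Kalman update on a successful match; belief in the feature increases."""
            S = self.cov + R
            K = self.cov @ np.linalg.inv(S)
            self.xy = self.xy + K @ (z - self.xy)
            self.cov = (np.eye(2) - K) @ self.cov
            self.credibility += 1

        def missed(self):
            """Predicted but not observed: belief in the feature decreases."""
            self.credibility -= 1
    ```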

  11. Hierarchical loop detection for mobile outdoor robots

    NASA Astrophysics Data System (ADS)

    Lang, Dagmar; Winkens, Christian; Häselich, Marcel; Paulus, Dietrich

    2012-01-01

    Loop closing is a fundamental part of 3D simultaneous localization and mapping (SLAM) that can greatly enhance the quality of long-term mapping. It is essential for the creation of globally consistent maps. Conceptually, loop closing is divided into detection and optimization. Recent approaches depend on a single sensor to recognize previously visited places in the loop detection stage. In this study, we combine data of multiple sensors such as GPS, vision, and laser range data to enhance detection results in repetitively changing environments that are not sufficiently explained by a single sensor. We present a fast and robust hierarchical loop detection algorithm for outdoor robots to achieve a reliable environment representation even if one or more sensors fail.

  12. Fuzzy Visual Path Following by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Hamissi, A.; Bazoula, A.

    2008-06-01

    We present in this work a variant of a visual navigation method developed for path following by a nonholonomic mobile robot moving in an environment free of obstacles. Only an embedded CCD camera is used for perception. The integration of perception and action leads us to develop, first, a method for extracting the useful information from each acquired image and, second, a control approach using fuzzy logic.

  13. Wheel rolling constraints and slip in mobile robots

    SciTech Connect

    Shekhar, S.

    1997-03-01

    It is widely accepted that dead reckoning based on the rolling with no slip condition on wheels is not a reliable method to ascertain the position and orientation of a mobile robot for any reasonable distance. We establish that wheel slip is inevitable under the dynamic model of motion using classical results on the accessibility and controllability in nonlinear control theory and an analytical model of rolling of two linearly elastic bodies.

  14. Wheel rolling constraints and slip in mobile robots

    SciTech Connect

    Shekhar, S.

    1996-06-01

    It is widely accepted that dead-reckoning based on the rolling with no-slip condition on the wheels is not a reliable method to ascertain the position and orientation of a mobile robot for any reasonable distance. The authors establish that wheel slip is inevitable under the dynamic model of motion using classical results on the accessibility and controllability in nonlinear control theory and an analytical model of rolling of two linearly elastic bodies.

  15. Model-based description of environment interaction for mobile robots

    NASA Astrophysics Data System (ADS)

    Borghi, Giuseppe; Ferrari, Carlo; Pagello, Enrico; Vianello, Marco

    1999-01-01

    We consider a mobile robot that attempts to accomplish a task by reaching a given goal, and interacts with its environment through a finite set of actions and observations. The interaction between robot and environment is modeled by Partially Observable Markov Decision Processes (POMDPs). The robot takes its decisions in the presence of uncertainty about the current state by maximizing the reward gained during interactions with the environment. It is able to localize itself in the environment by collecting action and perception histories during navigation. To make the state estimation more reliable, we introduce additional information into the model without adding new states and without discretizing the considered measures. Thus, we also associate with the state transition probabilities a continuous metric, given through the mean and the variance of some significant sensor measurements suitable to be kept in continuous form, such as odometric measurements, showing that even such unreliable data can supply a great deal of information to the robot. The overall control system of the robot is structured as a two-level layered architecture, where the low level implements several collision avoidance algorithms, while the upper level takes care of the navigation problem. In this paper, we concentrate on how to use POMDP models at the upper level.
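    The state estimation this builds on is the standard discrete POMDP belief update; the sketch below also folds in a Gaussian likelihood for a continuous odometric measurement, mirroring in much-simplified form the idea of keeping such measurements in continuous form. The array layouts and the odometry model are assumptions.

    ```python
    import numpy as np

    def belief_update(b, T, O, a, o, odom=None):
        """Bayes update of a discrete belief b after action a and observation o.

        T[a] is the |S| x |S| transition matrix, O[a] the |S| x |Obs| observation model.
        odom, if given, is (value, mean_per_state, var): a continuous odometric reading
        scored with a Gaussian, a simplified stand-in for the paper's continuous metric.
        """
        predicted = b @ T[a]                   # sum_s T(s' | s, a) b(s)
        likelihood = O[a][:, o].copy()
        if odom is not None:
            value, mean, var = odom
            likelihood *= np.exp(-0.5 * (value - mean) ** 2 / var)
        b_new = likelihood * predicted
        return b_new / b_new.sum()
    ```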

  16. Avoiding moving obstacles by deviation from a mobile robot's nominal path

    SciTech Connect

    Tsoularis, A.; Kambhampati, C.

    1999-05-01

    This paper deals with the problem of obstacle avoidance by deviation from the nominal path. Deviation is the only option available to the robot when the acceleration or deceleration plan on the nominal path fails to produce a viable avoidance strategy. Obstacle avoidance on the nominal path was dealt with in the authors' previous work, where the robot's motion was subject only to an upper bound on its speed. When the robot has to deviate, its motion is subject to a maximum steering constraint and a maximum deviation constraint in addition to the maximum speed constraint. The problem is solved geometrically by identifying final states for the robot that are reachable, satisfy all the constraints, and guarantee collision avoidance. The final-state reachability conditions that the authors obtain in the process ensure that no unnecessary deviation plan is initiated. These conditions, along with the simplicity of the geometric arguments they employ, make the scheme an attractive option for on-line implementation. The only significant complexity arises when minimizing the performance index. The authors suggest dynamic programming as an optimization tool, but any other nonlinear optimization technique can be adopted.

  17. Robust feature detection using sonar sensors for mobile robots

    NASA Astrophysics Data System (ADS)

    Choi, Jinwoo; Ahn, Sunghwan; Chung, Wan Kyun

    2005-12-01

    Sonar sensors are attractive tools for mobile robot SLAM because of their low cost. These inexpensive sensors give relatively accurate range readings if angular uncertainty and specular reflections are disregarded; however, those defects make feature detection difficult for most SLAM purposes. This paper proposes a robust sonar feature detection algorithm that provides detection methods for both point features and line features. The point feature detection method is based on the TBF scheme, with three additional processes that improve detection performance: 1) stable intersections, 2) efficient sliding window updates, and 3) removal of false point features on walls. The line feature detection method exploits a basic property of adjacent sonar sensors: along a line feature, three adjacent sensors give similar range readings. Using this property, a simple novel line feature detection algorithm is proposed in which the feature is obtained using only current sensor data. The proposed algorithm is well suited to mobile robot SLAM because it provides accurate feature information for both point and line features even in the presence of sensor errors, and a sufficient number of features are available to correct the robot pose. Experimental results for point and line feature detection demonstrate the performance of the proposed algorithm in a home-like environment.
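
    As an illustration of the adjacent-sensor property mentioned above, the hedged sketch below flags wall candidates whenever three neighbouring sonars return similar ranges; the tolerance and the ring geometry are assumptions, and the TBF-based point feature detection is not shown.

      import math

      def line_feature_candidates(ranges, bearings, tol=0.05):
          """Return (x, y) point triples where three adjacent sonars agree.

          ranges   : range readings, one per sonar, ordered around the ring (m)
          bearings : sensor bearing angles in the robot frame (rad), same order
          tol      : maximum spread of three adjacent ranges to accept a wall (m)
          """
          candidates = []
          for i in range(len(ranges) - 2):
              window = ranges[i:i + 3]
              if max(window) - min(window) <= tol:          # three similar readings
                  pts = [(r * math.cos(b), r * math.sin(b))
                         for r, b in zip(window, bearings[i:i + 3])]
                  candidates.append(pts)                    # fit a line to these later
          return candidates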

  18. SIMON: A mobile robot for floor contamination surveys

    SciTech Connect

    Dudar, E.; Teese, G.; Wagner, D.

    1991-01-01

    The Robotics Development group at the Savannah River Site is developing an autonomous robot to perform radiological surveys of potentially contaminated floors. The robot scans floors at a speed of one-inch/second and stops, sounds an alarm, and flashes lights when contamination in a certain area is detected. The contamination of interest here is primarily alpha and beta-gamma. The contamination levels are low to moderate. The robot, a Cybermotion K2A, is radio controlled, uses dead reckoning to determine vehicle position, and docks with a charging station to replenish its batteries and calibrate its position. It has an ultrasonic collision avoidance system as well as two safety bumpers that will stop the robot's motion when they are depressed. Paths for the robot are preprogrammed and the robot's motion can be monitored on a remote screen which shows a graphical map of the environment. The radiation instrument being used is an Eberline RM22A monitor. This monitor is microcomputer based with a serial I/O interface for remote operation. Up to 30 detectors may be configured with the RM22A. For our purposes, two downward-facing gas proportional detectors are used to scan floors, and one upward-facing detector is used for radiation background compensation. SIMON is interfaced with the RM22A in such a way that it scans the floor surface at one-inch/second, and if contamination is detected, the vehicle stops, alarms, and activates a voice synthesizer. Future development includes using the contamination data collected to provide a graphical contour map of a contaminated area. 3 refs.

  19. Navigation of Autonomous Mobile Robot under Decision-making Strategy tuned by Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Kamano, Takuya; Yasuno, Takashi; Suzuki, Takayuki; Harada, Hironobu

    This paper describes a novel application of a genetic algorithm to the navigation of an autonomous mobile robot (AMR) in unknown environments. In the navigation system, the AMR is controlled by a decision-making block that consists of a neural network. To achieve both successful navigation to the goal and suitable obstacle avoidance, the connection weights of the neural network and the speed gains for predefined actions are encoded as genotypes and tuned simultaneously by the genetic algorithm, so that the static and dynamic danger degrees, the energy consumption, and the distance and direction errors decrease during navigation. Experimental results demonstrate the validity of the proposed navigation system.
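
    A minimal sketch of this kind of genetic tuning loop is shown below; the genotype length, the fitness stand-in, and the operator parameters are all assumptions, since the actual cost in the paper combines danger degrees, energy consumption, and distance/direction errors measured during navigation.

      import random

      GENE_LEN = 20      # assumed: network connection weights plus speed gains
      POP_SIZE = 30
      MUT_RATE = 0.1

      def fitness(genes):
          # Stand-in for an evaluation run of the navigation system; the real
          # fitness would be measured from simulated or physical navigation.
          return -sum(g * g for g in genes)

      def evolve(population):
          """One generation: rank, select, one-point crossover, Gaussian mutation."""
          ranked = sorted(population, key=fitness, reverse=True)
          parents = ranked[:POP_SIZE // 2]
          children = []
          while len(children) < POP_SIZE:
              a, b = random.sample(parents, 2)
              cut = random.randrange(1, GENE_LEN)
              child = a[:cut] + b[cut:]
              child = [g + random.gauss(0.0, 0.1) if random.random() < MUT_RATE else g
                       for g in child]
              children.append(child)
          return children

      population = [[random.uniform(-1.0, 1.0) for _ in range(GENE_LEN)]
                    for _ in range(POP_SIZE)]
      for _ in range(100):
          population = evolve(population)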

  20. Mobile robotic assistive balance trainer - an intelligent compliant and adaptive robotic balance assistant for daily living.

    PubMed

    Tiseo, Carlo; Lim, Zhen Yi; Shee, Cheng Yap; Ang, Wei Tech

    2014-01-01

    Balance control probably has the greatest impact on independence in activities of daily living (ADL), because it is a fundamental motor skill and prerequisite to the maintenance of a myriad of postures and mobile activities. We propose a new rehabilitation therapy to administer standing and mobile balance control training, enabled by a Mobile Robotic Assistive Balance Trainer (MRABT). The targeted group for this initial work is post stroke patients, although it can be extended to subjects with other neurological insults in the future. The proposed system consists of a mobile base and a parallel robotic arm which provides support to the patient at the hip. The compliant robotic arm with intelligent control algorithm will only provide support and assistance to the patient when the center of mass of the body deviates beyond the predefined safety boundary, mimicking the helping hands of a parent when a toddler learns to walk. In this paper, we present our initial work in the design and kinematic analysis of the system. PMID:25571190

  1. Mobile robotic assistive balance trainer - an intelligent compliant and adaptive robotic balance assistant for daily living.

    PubMed

    Tiseo, Carlo; Lim, Zhen Yi; Shee, Cheng Yap; Ang, Wei Tech

    2014-01-01

    Balance control probably has the greatest impact on independence in activities of daily living (ADL), because it is a fundamental motor skill and prerequisite to the maintenance of a myriad of postures and mobile activities. We propose a new rehabilitation therapy to administer standing and mobile balance control training, enabled by a Mobile Robotic Assistive Balance Trainer (MRABT). The targeted group for this initial work is post stroke patients, although it can be extended to subjects with other neurological insults in the future. The proposed system consists of a mobile base and a parallel robotic arm which provides support to the patient at the hip. The compliant robotic arm with intelligent control algorithm will only provide support and assistance to the patient when the center of mass of the body deviates beyond the predefined safety boundary, mimicking the helping hands of a parent when a toddler learns to walk. In this paper, we present our initial work in the design and kinematic analysis of the system.

  2. Optimal control of 2-wheeled mobile robot at energy performance index

    NASA Astrophysics Data System (ADS)

    Kaliński, Krzysztof J.; Mazur, Michał

    2016-03-01

    The paper presents the application of an optimal control method with an energy performance index to motion control of a 2-wheeled mobile robot. With the proposed control method, the 2-wheeled mobile robot can effectively realise the desired trajectory. The problem of motion control of mobile robots is often neglected, which limits the performance of high-level control tasks.
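
    The abstract does not state the exact functional; a common form of an energy performance index for a two-wheeled robot, given here only as a hedged example, penalizes the integral of the squared control inputs (e.g. the two wheel motor torques or voltages):

      J = \int_{t_0}^{t_f} \mathbf{u}^{\mathsf{T}} R \,\mathbf{u} \, dt
        = \int_{t_0}^{t_f} \left( r_1 u_1^2(t) + r_2 u_2^2(t) \right) dt,
      \qquad R = \mathrm{diag}(r_1, r_2) \succ 0,

    where u_1 and u_2 are the left and right wheel inputs, and the optimal control minimizes J subject to the robot model and the desired-trajectory constraints.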

  3. The WPI Autonomous Mobile Robot Project: A Progress Report

    NASA Astrophysics Data System (ADS)

    Green, Peter E.; Hall, Kyle S.

    1987-01-01

    This paper presents a report on the WPI autonomous mobile robot (WAMR). This robot is currently under development by the Intelligent Machines Project at WPI. Its purpose is to serve as a testbed for real-time artificial intelligence. WAMR is expected to find its way from one place in a building to another, avoiding people and obstacles en route. It is given no a priori knowledge of the building, but must learn about its environment by goal-directed exploration. Design concepts and descriptions of the major items completed thus far are presented. WAMR is a self-contained, wheeled robot that uses evidence-based techniques to reason about actions. The robot builds and continually updates a world model of its environment. This is done using a combination of ultrasonic and visual data. The world model is interpreted and movement plans are generated by a planner utilizing real-time incremental evidence techniques. These movement plans are then carried out by a hierarchical evidence-based adaptive controller. Two interesting features of the robot are the line-imaging ultrasonic sensor and the video subsystem. The former uses frequency variation to form a line image of obstacles between one and twenty feet in front of the robot. The latter attempts to mimic the human eye using neural network pattern recognition techniques. Several items have been completed thus far. The paper describes some of these, including the multiprocessor navigator and non-skid motion control system, the ultrasonic line imager, the concepts of the vision system, and the computer hardware and software environment.

  4. Design, characterization and control of the Unique Mobility Corporation robot

    NASA Astrophysics Data System (ADS)

    Velasco, Virgilio B., Jr.; Newman, Wyatt S.; Steinetz, Bruce; Kopf, Carlo; Malik, John

    1994-05-01

    Space and mass are at a premium on any space mission, and thus any machinery designed for space use should be lightweight and compact, without sacrificing strength. It is for this reason that NASA/LeRC contracted Unique Mobility Corporation to exploit their novel actuator designs to build a robot that would advance the present state of technology with respect to these requirements. Custom-designed motors are the key feature of this robot. They are compact, high-performance dc brushless servo motors with a high pole count and low inductance, thus permitting high torque generation and rapid phase commutation. Using a custom-designed digital signal processor-based controller board, the pulse width modulation power amplifiers regulate the fast dynamics of the motor currents. In addition, the programmable digital signal processor (DSP) controller permits implementation of nonlinear compensation algorithms to account for motoring vs. regeneration, torque ripple, and back-EMF. As a result, the motors produce a high torque relative to their size and weight, and can do so with good torque regulation and acceptably high velocity saturation limits. This paper presents the Unique Mobility Corporation robot prototype: its actuators, its kinematic design, its control system, and its experimental characterization. Performance results, including saturation torques, saturation velocities and tracking accuracy tests are included.

  5. Probabilistic model for AGV mobile robot ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Liao, Xiaoqun; Cao, Ming; Cao, Jin; Hall, Ernest L.

    1999-08-01

    An autonomous guided vehicle is a multi-sensor mobile robot. The sensors of a multi-sensor robot system are characteristically complex and diverse. They supply observations which are often difficult to compare or aggregate directly. To make efficient use of the sensor information, the capabilities of each sensor must be modeled to extract information from the environment. For this goal, a probability model of the ultrasonic sensor (PMUS) is presented in this paper. The model provides a means of distributing decision making and integrating diverse opinions. The paper also illustrates that a series of performance factors affect the probability model as parameters. PMUS could be extended to other sensors as members of the multi-sensor team. Moreover, the sensor probability model explored is suitable for all multi-sensor mobile robots. It should provide a quantitative basis for analysis of sensor performance and allow the development of robust decision procedures for integrating sensor information. The theoretical sensor model presented is a first step in understanding and expanding the performance of ultrasound systems. The significance of this paper lies in the theoretical integration of sensory information from a probabilistic point of view.
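
    The PMUS formulas themselves are not given in this abstract; a commonly used probabilistic beam model for an ultrasonic range reading, which mixes a Gaussian around the expected range with random and max-range components, is sketched below with assumed parameter values.

      import math

      def sonar_likelihood(z, z_expected, z_max=4.0, sigma=0.05,
                           w_hit=0.8, w_rand=0.15, w_max=0.05):
          """p(z | expected range) as a mixture of hit, random and max-range terms."""
          p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) \
                  / (sigma * math.sqrt(2.0 * math.pi))
          p_rand = 1.0 / z_max if 0.0 <= z < z_max else 0.0     # spurious readings
          p_max = 1.0 if abs(z - z_max) < 1e-3 else 0.0         # no-echo readings
          return w_hit * p_hit + w_rand * p_rand + w_max * p_max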

  6. Context recognition and situation assessment in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Yavnai, Arie

    1993-05-01

    The capability to recognize the operating context and to assess the situation in real time is needed if a high-functionality autonomous mobile robot is to react properly and effectively to continuously changing situations and events, either external or internal, while performing its assigned tasks. A new approach and architecture for a context recognition and situation assessment module (CORSA) is presented in this paper. CORSA is a multi-level information processing module which consists of adaptive decision and classification algorithms. It performs dynamic mapping from the data space to the context space, and dynamically decides on the context class. A learning mechanism is employed to update the decision variables so as to minimize the probability of misclassification. CORSA is embedded within the Mission Manager module of the intelligent autonomous hyper-controller (IAHC) of the mobile robot. The information regarding operating context, events and situation is then communicated to other modules of the IAHC, where it is used to: (a) select the appropriate action strategy; (b) support the processes of arbitration and conflict resolution between reflexive behaviors and reasoning-driven behaviors; (c) predict future events and situations; and (d) determine criteria and priorities for planning, replanning, and decision making.

  7. Embodying a cognitive model in a mobile robot

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Lonsdale, Deryle

    2006-10-01

    The ADAPT project is a collaboration of researchers in robotics, linguistics and artificial intelligence at three universities to create a cognitive architecture specifically designed to be embodied in a mobile robot. There are major respects in which existing cognitive architectures are inadequate for robot cognition. In particular, they lack support for true concurrency and for active perception. ADAPT addresses these deficiencies by modeling the world as a network of concurrent schemas, and modeling perception as problem solving. Schemas are represented using the RS (Robot Schemas) language, and are activated by spreading activation. RS provides a powerful language for distributed control of concurrent processes. Also, the formal semantics of RS provides the basis for the semantics of ADAPT's use of natural language. We have implemented the RS language in Soar, a mature cognitive architecture originally developed at CMU and used at a number of universities and companies. Soar's subgoaling and learning capabilities enable ADAPT to manage the complexity of its environment and to learn new schemas from experience. We describe the issues faced in developing an embodied cognitive architecture, and our implementation choices.

  8. Navigating a Mobile Robot Across Terrain Using Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Howard, Ayanna; Bon, Bruce

    2003-01-01

    A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
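
    The rule base itself is not listed in this abstract; the two-rule sketch below only illustrates the general form of such linguistic rules, using triangular membership functions and centroid-style defuzzification with made-up parameters.

      def tri(x, a, b, c):
          """Triangular membership function with support [a, c] and peak at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def steering_correction(traversability):
          """IF traversability is LOW THEN turn is LARGE;
             IF traversability is HIGH THEN turn is SMALL."""
          low = tri(traversability, 0.0, 0.2, 0.6)
          high = tri(traversability, 0.4, 0.8, 1.0)
          turn_large, turn_small = 30.0, 5.0        # consequent centroids (degrees)
          num = low * turn_large + high * turn_small
          den = low + high
          return num / den if den > 0.0 else 0.0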

  9. A Mobile Robot Operation with Instruction of Neck Movement using Laser Pointer

    NASA Astrophysics Data System (ADS)

    Shibata, Satoru; Yamamoto, Tomonori; Jindai, Mitsuru

    A human-robot system is considered in which a mobile robot follows the movement of a laser spot projected on the floor by a laser pointer attached to the user's head. The user gives instructions for the desired movement to the omni-directional mobile robot by rotating his or her head, and the robot realizes the intended movement by following the movement of the laser spot on the floor. By projecting an instructive point to be followed by the mobile robot, the user can clearly recognize the relation between the direction being faced and the desired position of the mobile robot. In addition, the user can convey a motion trajectory to the mobile robot continuously. A Kansei transfer function is introduced between the instructing movement of the laser spot and the following motion of the robot to realize psychologically acceptable robot motion. In addition, three modes are considered: a stopping mode, a following mode, and an autonomous motion mode toward the target. The effectiveness of the proposed system was discussed experimentally and confirmed by the smooth trajectory of the following motion of the mobile robot and good psychological evaluations.

  10. CoMRoS: Cooperative mobile robots Stuttgart

    SciTech Connect

    Braeunl, T.; Kalbacher, M.; Levi, P.; Mamier, G.

    1996-12-31

    Project CoMRoS has the goal of developing intelligent cooperating mobile robots. Several different vehicles are to solve a single task autonomously by exchanging plans without central control. We use "Robuter II" vehicles from Robosoft France, adapted to our needs. The standard vehicle has very little local intelligence (a VME bus system) and is controlled remotely over wireless Ethernet, which is used to send steering commands and receive sonar sensor data. A wireless video link is used to transmit camera images. Data exchange between vehicles is then performed among the corresponding workstations. The remote control is basically used to simplify testing and debugging of robot programs. However, each vehicle can also be driven completely autonomously by using a laptop PC.

  11. Positional estimation techniques for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Aggarwal, J. K.

    1990-01-01

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

  12. Simulation of cooperating robot manipulators on a mobile platform

    NASA Technical Reports Server (NTRS)

    Murphy, Stephen H.; Wen, John Ting-Yung; Saridis, George N.

    1991-01-01

    The dynamic equations of motion are presented for two or more cooperating manipulators on a freely moving mobile platform. The system of cooperating robot manipulators forms a closed kinematic chain, so the force of interaction must be included in the formulation of robot and platform dynamics. The formulation includes the full dynamic interactions from arms to platform and arm tip to arm tip, as well as the possible translation and rotation of the platform. The equations of motion are shown to be identical in structure to the fixed-platform cooperative manipulator dynamics. The number of DOFs of the system is sufficiently large to make recursive dynamic calculation methods potentially more efficient than closed-form solutions. A complete simulation of a free-floating platform with two 6-DOF manipulators is presented, along with a multiple-arm controller to position the common load.

  13. Detection of free spaces for mobile robot navigation

    NASA Astrophysics Data System (ADS)

    Azzizi, Norelhouda; Zaatri, Abdelouahab; Rahmani, Fouad Lazhar

    2014-10-01

    This work is situated within the framework of the semi-autonomous and autonomous navigation of mobile robots in unknown environments where obstacles may occur. It is based on the implementation of a vision-based system using an embedded monocular CCD camera. The vision system is designed to dynamically determine the free space in which the robot can move without colliding with obstacles. This system is composed of a sequence of image processing operations: contour detection by Canny's filter, connection of neighborhood pixels, and elimination of small contours which are considered noise. The free space is determined by analyzing the perceived area and checking for the presence of obstacles. Finally, obstacle borders are delimited, enabling the robot to avoid the obstacles. Some experimental results are presented to illustrate the effectiveness of our system.
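
    A rough sketch of such a pipeline is given below, assuming OpenCV 4 (cv2) is available; the Canny thresholds, the contour-length cut-off, and the exact way neighbouring pixels are connected in the paper are assumptions.

      import cv2

      def obstacle_borders(image_bgr, min_contour_len=80.0):
          """Canny edges -> contours -> drop short contours considered noise.

          Returns a binary mask whose non-zero pixels mark candidate obstacle
          borders; image regions in front of the robot free of such borders can
          then be treated as drivable free space.
          """
          gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 50, 150)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          mask = edges * 0
          for c in contours:
              if cv2.arcLength(c, False) >= min_contour_len:   # keep long contours
                  cv2.drawContours(mask, [c], -1, 255, thickness=2)
          return mask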

  14. Mobile robot navigation using qualitative reasoning

    SciTech Connect

    Pin, F.G.; Watanabe, Yutaka.

    1993-01-01

    Vehicle control in a priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards, including custom-designed VLSI chips, have been developed to add a fuzzy inferencing capability to real-time control systems. The use of these boards and an approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation are first discussed. We then describe how the human-like navigation schemes were implemented on a test-bed platform to investigate two control modes for driving a car in a priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and methodologies.

  15. Mobile robot navigation using qualitative reasoning

    SciTech Connect

    Pin, F.G.; Watanabe, Yutaka

    1993-03-01

    Vehicle control in a priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards, including custom-designed VLSI chips, have been developed to add a fuzzy inferencing capability to real-time control systems. The use of these boards and an approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation are first discussed. We then describe how the human-like navigation schemes were implemented on a test-bed platform to investigate two control modes for driving a car in a priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and methodologies.

  16. Two-axis hydraulic joint for high speed, heavy lift robotic operations

    SciTech Connect

    Vaughn, M.R.; Robinett, R.D.; Phelan, J.R.; VanZuiden, D.M.

    1994-04-01

    A hydraulically driven universal joint was developed for a heavy lift, high speed nuclear waste remediation application. Each axis is driven by a simple hydraulic cylinder controlled by a jet pipe servovalve. Servovalve behavior is controlled by a force feedback control system, which damps the hydraulic resonance. A prototype single joint robot was built and tested. A two joint robot is under construction.

  17. Non linear predictive control of a LEGO mobile robot

    NASA Astrophysics Data System (ADS)

    Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.

    2014-10-01

    Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply one such metaheuristic, particle swarm optimization (PSO), to the solution of the optimization problem arising in NLMPC. This algorithm is easy to code and may be considered an alternative to more classical solution procedures. The PSO-NLMPC approach is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
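
    The NLMPC cost and constraints are not reproduced in this abstract; the following is a generic, hedged PSO minimizer of the kind that could be applied to the finite-horizon control sequence, with standard (assumed) inertia and acceleration coefficients. In an NLMPC setting, the decision vector would hold the control inputs over the prediction horizon, and the cost function would roll the robot model forward to score tracking error and obstacle proximity.

      import random

      def pso_minimize(cost, dim, n_particles=20, iters=50, bounds=(-1.0, 1.0)):
          """Minimal particle swarm optimization over a box-constrained vector."""
          lo, hi = bounds
          X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
          V = [[0.0] * dim for _ in range(n_particles)]
          pbest = [x[:] for x in X]
          pbest_cost = [cost(x) for x in X]
          g = min(range(n_particles), key=lambda i: pbest_cost[i])
          gbest, gbest_cost = pbest[g][:], pbest_cost[g]
          w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration gains
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      V[i][d] = (w * V[i][d]
                                 + c1 * r1 * (pbest[i][d] - X[i][d])
                                 + c2 * r2 * (gbest[d] - X[i][d]))
                      X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                  c = cost(X[i])
                  if c < pbest_cost[i]:
                      pbest[i], pbest_cost[i] = X[i][:], c
                      if c < gbest_cost:
                          gbest, gbest_cost = X[i][:], c
          return gbest, gbest_cost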

  18. Sonar Sensor Models and Their Application to Mobile Robot Localization

    PubMed Central

    Burguera, Antoni; González, Yolanda; Oliver, Gabriel

    2009-01-01

    This paper presents a novel approach to mobile robot localization using sonar sensors. This approach is based on the use of particle filters. Each particle is augmented with local environment information which is updated during the mission execution. An experimental characterization of the sonar sensors used is provided in the paper. A probabilistic measurement model that takes into account the sonar uncertainties is defined according to the experimental characterization. The experimental results quantitatively evaluate the presented approach and provide a comparison with other localization strategies based on both the sonar and the laser. Some qualitative results are also provided for visual inspection. PMID:22303171
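
    A skeleton of the sonar measurement update in such a particle filter is sketched below; the Gaussian range likelihood, the ray-cast callback, and the noise level are assumptions, and the per-particle local map augmentation described above is not shown.

      import math
      import random

      def measurement_update(particles, weights, z, expected_range, sigma=0.1):
          """Reweight and resample particles from a single sonar range reading.

          particles      : list of (x, y, theta) pose hypotheses
          expected_range : callable mapping a pose to the range the sonar should
                           report from that pose (e.g. a map ray-cast, assumed)
          """
          weights = [w * math.exp(-0.5 * ((z - expected_range(p)) / sigma) ** 2)
                     for p, w in zip(particles, weights)]
          total = sum(weights) or 1e-12
          weights = [w / total for w in weights]
          particles = random.choices(particles, weights=weights, k=len(particles))
          return particles, [1.0 / len(particles)] * len(particles)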

  19. Sonar sensor models and their application to mobile robot localization.

    PubMed

    Burguera, Antoni; González, Yolanda; Oliver, Gabriel

    2009-01-01

    This paper presents a novel approach to mobile robot localization using sonar sensors. This approach is based on the use of particle filters. Each particle is augmented with local environment information which is updated during the mission execution. An experimental characterization of the sonar sensors used is provided in the paper. A probabilistic measurement model that takes into account the sonar uncertainties is defined according to the experimental characterization. The experimental results quantitatively evaluate the presented approach and provide a comparison with other localization strategies based on both the sonar and the laser. Some qualitative results are also provided for visual inspection.

  20. A novel sensor system for mobile robot using moire technique

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Cho, Hyungsuck

    2005-12-01

    A major research issue for mobile robots today is developing robust 3D environment sensing for navigation and task execution. To achieve this, a variety of techniques have been developed for determining 3D scene geometric information, such as stereo vision, laser structured light, and laser range finders, but these methods have many limitations. To overcome these limitations we introduce a new sensing algorithm, which is based on the moire technique and stereo vision. To verify the performance of this sensor system we conducted a series of simulations for various simple environments. The results show the feasibility of successful perception in several environments.

  1. Autonomous mobile robot fast hybrid decision system DT-FAM based on laser system measurement LSM

    NASA Astrophysics Data System (ADS)

    Będkowski, Janusz; Jankowski, Stanisław

    2006-10-01

    In this paper a new intelligent data processing system for a mobile robot is described. The robot perception uses the LSM (Laser System Measurement). The innovative fast hybrid decision system is based on fuzzy ARTMAP supported by a decision tree. A virtual robotics laboratory was implemented to carry out the experiments.

  2. A mobile robots experimental environment with event-based wireless communication.

    PubMed

    Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián

    2013-01-01

    An experimental platform has been developed for communication among a set of mobile robots over a wireless network. The mobile robots obtain their positions through a camera which acts as a sensor. The video images are processed on a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented. PMID:23881139
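
    The exact triggering rule is not stated in this abstract; the sketch below only illustrates the general event-triggered idea, broadcasting a robot's measured position when it deviates from the last transmitted value by more than an assumed threshold.

      class EventTriggeredBroadcaster:
          """Transmit the robot's position only at event times."""

          def __init__(self, send, threshold=0.05):
              self.send = send             # callback that transmits over the radio link
              self.threshold = threshold   # trigger level in metres (assumed value)
              self.last_sent = None

          def step(self, position):
              """Call once per camera sample with the measured (x, y) position."""
              if (self.last_sent is None
                      or self._dist(position, self.last_sent) > self.threshold):
                  self.send(position)      # event: neighbours receive an update
                  self.last_sent = position

          @staticmethod
          def _dist(a, b):
              return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5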

  3. A Mobile Robots Experimental Environment with Event-Based Wireless Communication

    PubMed Central

    Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián

    2013-01-01

    An experimental platform has been developed for communication among a set of mobile robots over a wireless network. The mobile robots obtain their positions through a camera which acts as a sensor. The video images are processed on a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented. PMID:23881139

  4. Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)

    NASA Technical Reports Server (NTRS)

    Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey

    1990-01-01

    Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge-base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules necessary to control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.

  5. Sensor fusion by pseudo information measure: a mobile robot application.

    PubMed

    Asharif, Mohammad Reza; Moshiri, Behzad; HoseinNezhad, Reza

    2002-07-01

    In any autonomous mobile robot, one of the most important issues to be designed and implemented is environment perception. In this paper, a new approach is formulated in order to perform sensory data integration for generation of an occupancy grid map of the environment. This method is an extended version of the Bayesian fusion method for independent sources of information. The performance of the proposed method of fusion and its sensitivity are discussed. Map building simulation for a cylindrical robot with eight ultrasonic sensors and mapping implementation for a Khepera robot have been separately tried in simulation and experimental works. A new neural structure is introduced for conversion of proximity data that are given by Khepera IR sensors to occupancy probabilities. Path planning experiments have also been applied to the resulting maps. For each map, two factors are considered and calculated: the fitness and the augmented occupancy of the map with respect to the ideal map. The length and the least distance to obstacles were the other two factors that were calculated for the routes that are resulted by path planning experiments. Experimental and simulation results show that by using the new fusion formulas, more informative maps of the environment are obtained. By these maps more appropriate routes could be achieved. Actually, there is a tradeoff between the length of the resulting routes and their safety and by choosing the proper fusion function, this tradeoff is suitably tuned for different map building applications.
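
    The pseudo-information fusion formulas themselves are not given in this abstract; for contrast, the standard independent-sensor Bayesian update for a single occupancy grid cell, which the proposed method extends, can be written in log-odds form as in the sketch below.

      import math

      def logit(p):
          return math.log(p / (1.0 - p))

      def fuse_cell(sensor_probs, prior=0.5):
          """Fuse independent per-sensor occupancy probabilities for one grid cell."""
          l = logit(prior) + sum(logit(p) - logit(prior) for p in sensor_probs)
          return 1.0 / (1.0 + math.exp(-l))      # back from log-odds to probability

      # Example: three sensors report occupancy probabilities for the same cell.
      print(fuse_cell([0.7, 0.6, 0.4]))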

  6. Sensor fusion by pseudo information measure: a mobile robot application.

    PubMed

    Asharif, Mohammad Reza; Moshiri, Behzad; HoseinNezhad, Reza

    2002-07-01

    In any autonomous mobile robot, one of the most important issues to be designed and implemented is environment perception. In this paper, a new approach is formulated in order to perform sensory data integration for generation of an occupancy grid map of the environment. This method is an extended version of the Bayesian fusion method for independent sources of information. The performance of the proposed method of fusion and its sensitivity are discussed. Map building simulation for a cylindrical robot with eight ultrasonic sensors and mapping implementation for a Khepera robot have been separately tried in simulation and experimental works. A new neural structure is introduced for conversion of proximity data that are given by Khepera IR sensors to occupancy probabilities. Path planning experiments have also been applied to the resulting maps. For each map, two factors are considered and calculated: the fitness and the augmented occupancy of the map with respect to the ideal map. The length and the least distance to obstacles were the other two factors that were calculated for the routes that are resulted by path planning experiments. Experimental and simulation results show that by using the new fusion formulas, more informative maps of the environment are obtained. By these maps more appropriate routes could be achieved. Actually, there is a tradeoff between the length of the resulting routes and their safety and by choosing the proper fusion function, this tradeoff is suitably tuned for different map building applications. PMID:12160343

  7. Localization of Mobile Robots Using an Extended Kalman Filter in a LEGO NXT

    ERIC Educational Resources Information Center

    Pinto, M.; Moreira, A. P.; Matos, A.

    2012-01-01

    The inspiration for this paper comes from a successful experiment conducted with students in the "Mobile Robots" course in the fifth year of the integrated Master's program in the Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto (FEUP), Porto, Portugal. One of the topics in this Mobile Robots course is…

  8. Cooperative system and method using mobile robots for testing a cooperative search controller

    DOEpatents

    Byrne, Raymond H.; Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.

    2002-01-01

    A test system for testing a controller provides a way to use large numbers of miniature mobile robots to test a cooperative search controller in a test area, where each mobile robot has a sensor, a communication device, a processor, and a memory. A method of using a test system provides a way for testing a cooperative search controller using multiple robots sharing information and communicating over a communication network.

  9. Planetary exploration by a mobile robot: mission teleprogramming and autonomous navigation.

    NASA Astrophysics Data System (ADS)

    Chatila, R.; Lacroix, S.; Simeon, T.; Herrb, M.

    Sending mobile robots to accomplish planetary exploration missions is scientifically promising and technologically challenging. The authors present a complete approach that encompasses the major aspects involved in the design of a robotic system for planetary exploration. It includes mission teleprogramming and supervision at a ground station, and autonomous mission execution by the remote mobile robot. They have partially implemented and validated these concepts. Experimental results illustrate the approach and the results.

  10. Video rate color region segmentation for mobile robotic applications

    NASA Astrophysics Data System (ADS)

    de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline

    2005-08-01

    Color regions may be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. However, whereas numerous methods are used for vision systems embedded on robots, only a few use this kind of segmentation, mainly because of the processing time required. In this paper, we propose a new real-time (i.e., video rate) color region segmentation followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and a comparison with other methods, in terms of result quality and processing time, are provided. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it is up to 10. We also present the outline of the Dynamic Vision System of the CLEOPATRE Project, for which this segmentation has been developed, and the Clear Box Methodology which allowed us to create the new color region segmentation from the evaluation and the knowledge of other well-known segmentations.

  11. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  12. Graphical analysis of power systems for mobile robotics

    NASA Astrophysics Data System (ADS)

    Raade, Justin William

    The field of mobile robotics places stringent demands on the power system. Energetic autonomy, or the ability to function for a useful operation time independent of any tether, refueling, or recharging, is a driving force in a robot designed for a field application. The focus of this dissertation is the development of two graphical analysis tools, namely Ragone plots and optimal hybridization plots, for the design of human scale mobile robotic power systems. These tools contribute to the intuitive understanding of the performance of a power system and expand the toolbox of the design engineer. Ragone plots are useful for graphically comparing the merits of different power systems for a wide range of operation times. They plot the specific power versus the specific energy of a system on logarithmic scales. The driving equations in the creation of a Ragone plot are derived in terms of several important system parameters. Trends at extreme operation times (both very short and very long) are examined. Ragone plot analysis is applied to the design of several power systems for high-power human exoskeletons. Power systems examined include a monopropellant-powered free piston hydraulic pump, a gasoline-powered internal combustion engine with hydraulic actuators, and a fuel cell with electric actuators. Hybrid power systems consist of two or more distinct energy sources that are used together to meet a single load. They can often outperform non-hybrid power systems in low duty-cycle applications or those with widely varying load profiles and long operation times. Two types of energy sources are defined: engine-like and capacitive. The hybridization rules for different combinations of energy sources are derived using graphical plots of hybrid power system mass versus the primary system power. Optimal hybridization analysis is applied to several power systems for low-power human exoskeletons. Hybrid power systems examined include a fuel cell and a solar panel coupled with
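
    The derivations are not reproduced in this abstract; the basic relationship behind a Ragone plot, stated here as a hedged simplification that neglects overhead mass and conversion losses, links specific energy, specific power, and operation time:

      \hat{e} \approx \hat{p} \, t_{\mathrm{op}}, \qquad \hat{e} = \frac{E}{m}, \quad \hat{p} = \frac{P}{m},

    so that on the logarithmic Ragone axes, \log\hat{e} = \log\hat{p} + \log t_{\mathrm{op}}, and contours of constant operation time appear as straight diagonal lines.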

  13. Small, Untethered, Mobile Robots for Inspecting Gas Pipes

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian

    2003-01-01

    Small, untethered mobile robots denoted gas-pipe explorers (GPEXs) have been proposed for inspecting the interiors of pipes used in the local distribution of natural gas. The United States has a network of gas-distribution pipes with a total length of approximately 10^9 m. These pipes are often made of iron and steel, and some are more than 100 years old. As this network ages, there is a need to locate weaknesses that necessitate repair and/or preventive maintenance. The most common weaknesses are leaks and reductions in thickness, which are caused mostly by chemical reactions between the iron in the pipes and various substances in soil and groundwater. At present, mobile robots called pigs are used to inspect and clean the interiors of gas-transmission pipelines. Some carry magnetic-flux-leakage (MFL) sensors for measuring average wall thicknesses, some capture images, and some measure sizes and physical conditions. The operating ranges of pigs are limited to fairly straight sections of wide transmission-type (as distinguished from distribution-type) pipes: pigs are too large to negotiate such obstacles as bends with radii comparable to or smaller than pipe diameters, intrusions of other pipes at branch connections, and reductions in diameter at valves and meters. The GPEXs would be smaller and would be able to negotiate sharp bends and other obstacles that typically occur in gas-distribution pipes.

  14. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes and compares several competing sonar-based obstacle avoidance techniques. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class triangulates using fixed, artificial landmarks. A third class builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.

  15. Olfaction and hearing based mobile robot navigation for odor/sound source search.

    PubMed

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
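
    The time delay estimation step can be illustrated with a basic cross-correlation sketch, assuming numpy, a two-microphone far-field geometry, and the sampling rate and spacing below (all assumptions; the array geometry is not specified in this abstract).

      import numpy as np

      def estimate_delay(sig_a, sig_b, fs=16000):
          """Relative delay (s) between two microphone signals via cross-correlation."""
          corr = np.correlate(sig_a, sig_b, mode="full")
          lag = int(np.argmax(corr)) - (len(sig_b) - 1)     # lag in samples
          return lag / fs

      def bearing_from_delay(delay, mic_spacing=0.2, c=343.0):
          """Angle of arrival (deg) from the inter-microphone delay, far-field case."""
          s = max(-1.0, min(1.0, c * delay / mic_spacing))
          return float(np.degrees(np.arcsin(s)))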

  16. Olfaction and hearing based mobile robot navigation for odor/sound source search.

    PubMed

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.

  17. A Miniature Mobile Robot for Navigation and Positioning on the Beating Heart

    PubMed Central

    Patronik, Nicholas A.; Ota, Takeyoshi; Zenati, Marco A.; Riviere, Cameron N.

    2010-01-01

    Robotic assistance enhances conventional endoscopy; yet, limitations have hindered its mainstream adoption for cardiac surgery. HeartLander is a miniature mobile robot that addresses several of these limitations by providing precise and stable access over the surface of the beating heart in a less-invasive manner. The robot adheres to the heart and navigates to any desired target in a semiautonomous fashion. The initial therapies considered for HeartLander generally require precise navigation to multiple surface targets for treatment. To balance speed and precision, we decompose any general target acquisition into navigation to the target region followed by fine positioning to each target. In closed-chest, beating-heart animal studies, we demonstrated navigation to targets located around the circumference of the heart, as well as acquisition of target patterns on the anterior and posterior surfaces with an average error of 1.7 mm. The average drift encountered during station-keeping was 0.7 mm. These preclinical results demonstrate the feasibility of precise semiautonomous delivery of therapy to the surface of the beating heart using HeartLander. PMID:20179783

  18. Development of an advanced mobile base for personal mobility and manipulation appliance generation II robotic wheelchair

    PubMed Central

    Wang, Hongwu; Candiotti, Jorge; Shino, Motoki; Chung, Cheng-Shiu; Grindle, Garrett G.; Ding, Dan; Cooper, Rory A.

    2013-01-01

    Background This paper describes the development of a mobile base for the Personal Mobility and Manipulation Appliance Generation II (PerMMA Gen II robotic wheelchair), an obstacle-climbing wheelchair able to move in structured and unstructured environments, and to climb over curbs as high as 8 inches. The mechanical, electrical, and software systems of the mobile base are presented in detail, and similar devices such as the iBOT mobility system, TopChair, and 6X6 Explorer are described. Findings The mobile base of PerMMA Gen II has two operating modes: “advanced driving mode” on flat and uneven terrain, and “automatic climbing mode” during stair climbing. The different operating modes are triggered either by local and dynamic conditions or by external commands from users. A step-climbing sequence, up to 0.2 m, is under development and to be evaluated via simulation. The mathematical model of the mobile base is introduced. A feedback and a feed-forward controller have been developed to maintain the posture of the passenger when driving over uneven surfaces or slopes. The effectiveness of the controller has been evaluated by simulation using the open dynamics engine tool. Conclusion Future work for PerMMA Gen II mobile base is implementation of the simulation and control on a real system and evaluation of the system via further experimental tests. PMID:23820149

  19. System design of a hand-held mobile robot for craniotomy.

    PubMed

    Kane, Gavin; Eggers, Georg; Boesecke, Robert; Raczkowsky, Jörg; Wörn, Heinz; Marmulla, Rüdiger; Mühling, Joachim

    2009-01-01

    This contribution reports the development and initial testing of a mobile robot system for surgical craniotomy, the Craniostar. A kinematic system based on a unicycle robot is analysed to provide local positioning through two spiked wheels gripping directly onto the patient's skull. A shared control scheme between the surgeon and the robot is employed in a hand-held design that is tested initially on plastic phantom and swine skulls. Results indicate that the system carries substantially lower risk than present robotically assisted craniotomies, and that despite being a hand-held mobile robot, the Craniostar is still capable of sub-millimetre accuracy in tracking along a trajectory, thus achieving an accurate transfer of the pre-surgical plan to the operating room procedure without the large impact of current medical robots based on modified industrial robots.

  20. Robotics

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.

    2007-01-01

    Lunar robotic functions include: 1. Transport of crew and payloads on the surface of the moon; 2. Offloading payloads from a lunar lander; 3. Handling the deployment of surface systems; with 4. Human commanding of these functions from inside a lunar vehicle, habitat, or extravehicular (space walk), with Earth-based supervision. The systems that will perform these functions may not look like robots from science fiction. In fact, robotic functions may be automated trucks, cranes and winches. Use of this equipment prior to the crew's arrival or in the potentially long periods without crews on the surface will require that these systems be computer controlled machines. The public release of NASA's Exploration plans at the 2nd Space Exploration Conference (Houston, December 2006) included a lunar outpost with as many as four unique mobility chassis designs. The sequence of lander offloading tasks involved as many as ten payloads, each with a unique set of geometry, mass and interface requirements. This plan was refined during a second phase study concluded in August 2007. Among the many improvements to the exploration plan were a reduction in the number of unique mobility chassis designs and a reduction in unique payload specifications. As the lunar surface system payloads have matured, so have the mobility and offloading functional requirements. While the architecture work continues, the community can expect to see functional requirements in the areas of surface mobility, surface handling, and human-systems interaction as follows: Surface Mobility 1. Transport crew on the lunar surface, accelerating construction tasks, expanding the crew's sphere of influence for scientific exploration, and providing a rapid return to an ascent module in an emergency. The crew transport can be with an un-pressurized rover, a small pressurized rover, or a larger mobile habitat. 2. Transport Extra-Vehicular Activity (EVA) equipment and construction payloads. 3. Transport habitats and

  1. Beyond adaptive-critic creative learning for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Liao, Xiaoqun; Cao, Ming; Hall, Ernest L.

    2001-10-01

    Intelligent industrial and mobile robots may be considered proven technology in structured environments. Teach programming and supervised learning methods permit solutions to a variety of applications. However, we believe that extending the operation of these machines to more unstructured environments requires a new learning method. Both unsupervised learning and reinforcement learning are potential candidates for these new tasks. The adaptive critic method has been shown to provide useful approximations or even optimal control policies to non-linear systems. The purpose of this paper is to explore the use of new learning methods that go beyond the adaptive critic method for unstructured environments. The adaptive critic is a form of reinforcement learning. A critic element provides only high-level grading corrections to a cognition module that controls the action module. In the proposed system the critic's grades are modeled and forecasted, so that an anticipated set of sub-grades is available to the cognition model. The forecasted grades are interpolated and are available on the time scale needed by the action model. The success of the system is highly dependent on the accuracy of the forecasted grades and the adaptability of the action module. Examples from the guidance of a mobile robot are provided to illustrate the method for simple line following and for the more complex navigation and control in an unstructured environment. The theory presented here, which goes beyond the adaptive critic, may be called creative theory. Creative theory is a form of learning that models the highest level of human learning - imagination. Creative theory appears applicable not only to mobile robots but also to many other forms of human endeavor, such as educational learning and business forecasting. Reinforcement learning such as the adaptive critic may be applied to known problems to aid in the discovery of their solutions. The significance of creative theory is that it

  2. Mechanical deployment system on aries an autonomous mobile robot

    SciTech Connect

    Rocheleau, D.N.

    1995-12-01

    ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system, referred to as the camera positioning system (CPS), used in the project. The CPS is used for positioning four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of five interlocking rail elements, which starts from a retracted position and extends upward to simultaneously position three separate camera packages to inspect the top three drums of a column of four drums. The second is a special-case Grashof four-bar mechanism (a parallelogram) used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper, where the lift mechanism is discussed in detail.

  3. Steerable vertical to horizontal energy transducer for mobile robots

    DOEpatents

    Spletzer, Barry L.; Fischer, Gary J.; Feddema, John T.

    2001-01-01

    The present invention provides a steerable vertical to horizontal energy transducer for mobile robots that is less complex and requires less power than two-degree-of-freedom tilt mechanisms. The present invention comprises an end effector that, when mounted with a hopping actuator, translates along-axis (typically vertical) actuation into combined vertical and horizontal motion. The end effector, or foot, mounts with an end of the actuator that moves toward the support surface (typically a floor or the earth). The foot is shaped so that the first contact with the support surface is off the axis of the actuator. Off-axis contact with the support surface generates an on-axis force (typically resulting in vertical motion) and a moment orthogonal to the axis. The moment initiates a horizontal tumbling motion and tilts the actuator so that its axis is oriented with a horizontal component, and continued actuation generates both vertical and horizontal force.

  4. HAZBOT - A hazardous materials emergency response mobile robot

    NASA Technical Reports Server (NTRS)

    Stone, H. W.; Edmonds, G.

    1992-01-01

    The authors describe the progress that has been made towards the development of a mobile robot that can be used by hazardous materials emergency response teams to perform a variety of tasks including incident localization and characterization, hazardous material identification/classification, site surveillance and monitoring, and ultimately incident mitigation. In September of 1991, the HAZBOT II vehicle performed its first end-to-end demonstration involving a scenario in which the vehicle: navigated to the incident location from a distant (150-200 ft.) deployment site; entered a building through a door with thumb latch style handle and door closer; located and navigated to the suspected incident location (a chemical storeroom); unlocked and opened the storeroom's door; climbed over the storeroom's 12 in. high threshold to enter the storeroom; and located and identified a broken container of benzene.

  5. Discrete neural dynamic programming in wheeled mobile robot control

    NASA Astrophysics Data System (ADS)

    Hendzel, Zenon; Szuster, Marcin

    2011-05-01

    In this paper we propose a discrete algorithm for tracking control of a two-wheeled mobile robot (WMR), using an advanced Adaptive Critic Design (ACD). We used the Dual-Heuristic Programming (DHP) algorithm, which consists of two parametric structures implemented as Neural Networks (NNs): an actor and a critic, both realized in the form of Random Vector Functional Link (RVFL) NNs. In the proposed algorithm the control system consists of the DHP adaptive critic, a PD controller, and a supervisory term derived from the Lyapunov stability theorem. The supervisory term guarantees stable realization of the tracking movement during the learning phase of the adaptive critic structure, as well as robustness in the face of disturbances. The discrete tracking control algorithm works online, uses the WMR model for state prediction, and does not require preliminary learning. The performance of the proposed control algorithm was verified by a series of experiments on the Pioneer 2-DX WMR.

  6. Global path planning of mobile robots using a memetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Zexuan; Wang, Fangxiao; He, Shan; Sun, Yiwen

    2015-08-01

    In this paper, a memetic algorithm for global path planning (MAGPP) of mobile robots is proposed. MAGPP is a synergy of genetic algorithm (GA) based global path planning and a local path refinement. Particularly, candidate path solutions are represented as GA individuals and evolved with evolutionary operators. In each GA generation, the local path refinement is applied to the GA individuals to rectify and improve the paths encoded. MAGPP is characterised by a flexible path encoding scheme, which is introduced to encode the obstacles bypassed by a path. Both path length and smoothness are considered as fitness evaluation criteria. MAGPP is tested on simulated maps and compared with other counterpart algorithms. The experimental results demonstrate the efficiency of MAGPP and it is shown to obtain better solutions than the other compared algorithms.
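
    The record above outlines the memetic structure (GA-based global search plus a per-generation local refinement) without implementation detail. The following is a minimal Python sketch of that general structure under stated assumptions: paths are fixed-length waypoint lists on an open 2-D map, fitness combines length and a smoothness penalty, the local refinement simply pulls each waypoint toward the midpoint of its neighbours, and obstacle handling is omitted. The encoding and operators are illustrative, not the paper's obstacle-bypass encoding.

```python
# Minimal sketch of a memetic path planner: GA global search plus a simple
# local refinement applied every generation. Waypoint-list encoding;
# fitness = path length + smoothness penalty. All names are illustrative.
import math
import random

START, GOAL = (0.0, 0.0), (10.0, 10.0)
N_WAYPOINTS, POP_SIZE, GENERATIONS = 6, 30, 100

def random_path():
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_WAYPOINTS)]

def full_path(p):
    return [START] + p + [GOAL]

def fitness(p):
    pts = full_path(p)
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    # Smoothness term: penalize sharp heading changes between segments.
    turn = 0.0
    for a, b, c in zip(pts, pts[1:], pts[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        turn += (h2 - h1) ** 2
    return length + 0.5 * turn          # lower is better

def crossover(p1, p2):
    cut = random.randrange(1, N_WAYPOINTS)
    return p1[:cut] + p2[cut:]

def mutate(p, sigma=0.5):
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in p]

def refine(p):
    # Local refinement (the "memetic" step): pull each waypoint toward the
    # midpoint of its neighbours, shortening and smoothing the path.
    pts = full_path(p)
    new = []
    for i in range(1, len(pts) - 1):
        mx = 0.5 * (pts[i - 1][0] + pts[i + 1][0])
        my = 0.5 * (pts[i - 1][1] + pts[i + 1][1])
        new.append((0.7 * pts[i][0] + 0.3 * mx, 0.7 * pts[i][1] + 0.3 * my))
    return new

pop = [random_path() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness)
    elite = pop[:POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(elite, 2))) for _ in range(POP_SIZE - len(elite))]
    pop = [refine(p) for p in elite + children]

best = min(pop, key=fitness)
print("best fitness:", round(fitness(best), 2))
```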

  7. Calibration and control for range imaging in mobile robot navigation

    SciTech Connect

    Dorum, O.H.; Hoover, A.; Jones, J.P.

    1994-06-01

    This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.

  8. Speeding up the learning of robot kinematics through function decomposition.

    PubMed

    Ruiz de Angulo, Vicente; Torras, Carme

    2005-11-01

    The main drawback of using neural networks or other example-based learning procedures to approximate the inverse kinematics (IK) of robot arms is the high number of training samples (i.e., robot movements) required to attain an acceptable precision. We propose here a trick, valid for most industrial robots, that greatly reduces the number of movements needed to learn or relearn the IK to a given accuracy. This trick consists in expressing the IK as a composition of learnable functions, each having half the dimensionality of the original mapping. Off-line and on-line training schemes to learn these component functions are also proposed. Experimental results obtained by using nearest neighbors and parameterized self-organizing map, with and without the decomposition, show that the time savings granted by the proposed scheme grow polynomially with the precision required.
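
    The key idea in the record above is that an IK mapping can be learned from far fewer samples if it is expressed as a composition of lower-dimensional learnable functions. The toy sketch below illustrates the principle on a planar two-link arm, where the 2-D inverse kinematics reduces to two 1-D functions of the radius r learned by nearest-neighbour lookup; the arm, the specific decomposition, and all parameters are illustrative assumptions, not the authors' construction for industrial robots.

```python
# Toy illustration of learning IK through function decomposition.
# For a planar 2-link arm, the 2-D IK map (x, y) -> (theta1, theta2) can be
# written as a composition in which the learned parts depend only on the
# 1-D radius r: theta2 = g(r) and theta1 = atan2(y, x) - k(r). Each 1-D
# function needs far fewer samples than the full 2-D map. (Illustrative
# decomposition, not the paper's.)
import math
import random

L1, L2 = 1.0, 1.0

def fk(t1, t2):
    # Forward kinematics of a planar 2-link arm.
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

# Training: sample random elbow-up configurations and record only the 1-D
# relationships r -> theta2 and r -> alpha, where alpha = phi - theta1.
table = []
for _ in range(200):
    t1 = random.uniform(-math.pi, math.pi)
    t2 = random.uniform(0.1, math.pi - 0.1)
    x, y = fk(t1, t2)
    table.append((math.hypot(x, y), t2, math.atan2(y, x) - t1))

def ik(x, y):
    # Composition: phi is computed analytically; the two 1-D components are
    # recovered with a nearest-neighbour lookup on r.
    r, phi = math.hypot(x, y), math.atan2(y, x)
    _, t2, alpha = min(table, key=lambda row: abs(row[0] - r))
    return phi - alpha, t2

xt, yt = fk(0.4, 1.1)
t1, t2 = ik(xt, yt)
print("target:", (round(xt, 3), round(yt, 3)),
      "reconstructed:", [round(v, 3) for v in fk(t1, t2)])
```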

  9. The Embudito Mission: A Case Study of the Systematics of Autonomous Ground Mobile Robots

    SciTech Connect

    EICKER,PATRICK J.

    2001-02-01

    Ground mobile robots are much in the mind of defense planners at this time, being considered for a wide variety of missions ranging from logistics supply to reconnaissance and surveillance. While a very large amount of basic research devoted to mobile robots and their supporting component technologies has been funded in the last quarter century, little of this science base has been fully developed and deployed--notable exceptions being NASA's Mars rover and several terrestrial derivatives. The material in this paper was developed as a first exemplary step in the development of a more systematic approach to the R and D of ground mobile robots.

  10. Distributed consensus-based formation control for multiple nonholonomic mobile robots with a specified reference trajectory

    NASA Astrophysics Data System (ADS)

    Peng, Zhaoxia; Wen, Guoguang; Rahmani, Ahmed; Yu, Yongguang

    2015-06-01

    In this paper, the distributed formation control problem for multiple nonholonomic mobile robots using a consensus-based approach is considered. A transformation is given to convert the formation control problem for multiple nonholonomic mobile robots into a state consensus problem. Distributed control laws are developed for achieving the formation control objectives: a group of nonholonomic mobile robots converges at least exponentially to a desired geometric pattern, with its centroid moving along the specified reference trajectory. Rigorous proofs are provided by using graph, matrix, and Lyapunov theories. Simulations are also given to verify the effectiveness of the theoretical results.

  11. A cognitive approach to vision for a mobile robot

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both

  12. The NavBelt--a computerized travel aid for the blind based on mobile robotics technology.

    PubMed

    Shoval, S; Borenstein, J; Koren, Y

    1998-11-01

    This paper presents a new concept for a travel aid for the blind. A prototype device, called the NavBelt, was developed to test this concept. The device can be used as a primary or secondary aid, and consists of a portable computer, ultrasonic sensors, and stereophonic headphones. The computer applies navigation and obstacle avoidance technologies that were developed originally for mobile robots. The computer then uses a stereophonic imaging technique to process the signals from the ultrasonic sensors and relays their information to the user via stereophonic headphones. The user can interpret the information as an acoustic "picture" of the surroundings, or, depending on the operational mode, as the recommended travel direction. The acoustic signals are transmitted as discrete beeps or continuous sounds. Experimental results with the NavBelt simulator and a portable prototype show that users can travel safely in an unfamiliar and cluttered environment at speeds of up to 0.8 m/s. PMID:9805836

  14. Beamforming performance for a reconfigurable sparse array smart antenna system via multiple mobile robotic systems

    NASA Astrophysics Data System (ADS)

    Okamoto, Garret; Chen, Chih-Wei; Kitts, Christopher

    2010-04-01

    This paper describes and evaluates the beamforming performance for a flexible sparse array smart antenna system that can be reconfigured through the use of multiple mobile robots. Current robotic systems are limited because they cannot utilize beamforming due to their limited number of antennas and the high computational requirement of beamformers. The beamforming techniques used in this paper are unique because unlike current beamformers, the antennas in the sparse array are not connected together; instead, each robot has a single antenna. This work is made possible through breakthroughs by the authors on ultra-low computational complexity beamforming and multi-mobile robot cluster control. This new beamforming paradigm provides spatial reconfigurability of the array to control its location, size, inter-antenna spacing and geometry via multi-robot collaborative communications. Simulation results evaluate the effectiveness of smart antenna beamforming techniques when 1, 2, 3, 4, and 8 robots are utilized with and without interference signals present.

  15. Participatory design and validation of mobility enhancement robotic wheelchair.

    PubMed

    Daveler, Brandon; Salatin, Benjamin; Grindle, Garrett G; Candiotti, Jorge; Wang, Hongwu; Cooper, Rory A

    2015-01-01

    The design of the mobility enhancement robotic wheelchair (MEBot) was based on input from electric powered wheelchair (EPW) users regarding the conditions they encounter when driving in both indoor and outdoor environments that may affect their safety and result in them becoming immobilized, tipping over, or falling out of their wheelchair. Phase I involved conducting a participatory design study to understand the conditions and barriers EPW users found to be difficult to drive in/over. Phase II consisted of creating a computer-aided design (CAD) prototype EPW to provide indoor and outdoor mobility that addressed these conditions with advanced applications. Phase III involved demonstrating the advanced applications and gathering feedback from end users about the likelihood they would use the advanced applications. The CAD prototype incorporated advanced applications, including self-leveling, curb climbing, and traction control, that addressed the challenging conditions and barriers discussed with EPW users (n = 31) during the participatory design study. Feedback of the CAD design and applications in phase III from end users (n = 12) showed a majority would use self-leveling (83%), traction control (83%), and curb climbing (75%). The overall design of MEBot received positive feedback from EPW users. However, these opinions will need to be reevaluated through user trials as the design advances. PMID:26562492

  16. Object Detection Applied to Indoor Environments for Mobile Robot Navigation.

    PubMed

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-07-28

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests.

  19. The effect of waist twisting on walking speed of an amphibious salamander like robot

    NASA Astrophysics Data System (ADS)

    Yin, Xin-Yan; Jia, Li-Chao; Wang, Chen; Xie, Guang-Ming

    2016-06-01

    Amphibious salamanders often swing their waists to coordinate quadruped walking in order to improve their crawling speed. A robot with a swinging waist joint, like an amphibious salamander, is used to mimic this locomotion. A control method is designed to keep the rotational speed of the robot's legs continuous and to avoid impact between its legs and the ground. An analytical expression is established between the amplitude of the waist joint and the step length, and an optimal amplitude corresponding to the maximum stride is obtained. The simulation results based on automatic dynamic analysis of mechanical systems (ADAMS) and physical experiments verify the rationality and validity of this expression.

  20. From Sci-Fi to Reality--Mobile Robots Get the Job Done

    ERIC Educational Resources Information Center

    Roman, Harry T.

    2006-01-01

    Robots are simply computers that can interact with their environment. Some are fixed in place in industrial assembly plants for cars, appliances, micro electronic circuitry, and pharmaceuticals. Another important category of robots is the mobiles, machines that can be driven to the workplace, often designed for hazardous duty operation or…

  1. Tool for Experimenting with Concepts of Mobile Robotics as Applied to Children's Education

    ERIC Educational Resources Information Center

    Jimenez Jojoa, E. M.; Bravo, E. C.; Bacca Cortes, E. B.

    2010-01-01

    This paper describes the design and implementation of a tool for experimenting with mobile robotics concepts, primarily for use by children and teenagers, or by the general public, without previous experience in robotics. This tool helps children learn about science in an approachable and interactive way, using scientific research principles in…

  2. Robust finite-time tracking control of nonholonomic mobile robots without velocity measurements

    NASA Astrophysics Data System (ADS)

    Shi, Shang; Yu, Xin; Khoo, Suiyang

    2016-02-01

    The problem of robust finite-time trajectory tracking of nonholonomic mobile robots with unmeasurable velocities is studied. The contributions of the paper are twofold: first, in the case that the angular velocity of the mobile robot is unmeasurable, a composite controller, including observer-based partial state feedback control and disturbance feed-forward compensation, is designed, which guarantees that the tracking errors converge to zero in finite time. Second, if the linear velocity as well as the angular velocity of the mobile robot is unmeasurable, the finite-time trajectory tracking control of the nonholonomic mobile robot is also addressed, under a stronger constraint. Finally, the effectiveness of the proposed control laws is demonstrated by simulation.

  3. Mobile Robot Self-Localization by Matching Range Maps Using a Hausdorff Measure

    NASA Technical Reports Server (NTRS)

    Olson, C. F.

    1997-01-01

    This paper examines techniques for a mobile robot to perform self-localization in natural terrain by comparing a dense range map computed from stereo imagery to a range map in a known frame of reference.
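
    As a concrete illustration of the matching step described above, the sketch below scores candidate robot poses by a partial (rank-based) Hausdorff measure between a sensed 2-D point set and a reference map, and picks the best pose from a coarse grid search. The brute-force search, the planar point sets, and the 0.8 quantile are illustrative assumptions, not the paper's method for natural-terrain range maps.

```python
# Minimal sketch of pose scoring with a partial Hausdorff measure between a
# sensed range map and a reference map, both treated as 2-D point sets.
# Brute force, for illustration only.
import math

def transform(points, x, y, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * px - s * py + x, s * px + c * py + y) for px, py in points]

def partial_hausdorff(a, b, frac=0.8):
    # Distance from each point of a to its nearest point of b, then take the
    # frac-quantile instead of the max to tolerate outliers.
    dists = sorted(min(math.dist(p, q) for q in b) for p in a)
    return dists[int(frac * (len(dists) - 1))]

reference = [(i * 0.5, 0.0) for i in range(20)] + [(10.0, i * 0.5) for i in range(20)]
sensed = transform(reference, -1.0, 0.5, 0.1)   # what the robot "sees"

# Self-localization: coarse grid search over candidate poses; the winner
# should approximate the inverse of the transform applied to build `sensed`.
best = min(((x * 0.25, y * 0.25, t * 0.05)
            for x in range(-6, 7) for y in range(-6, 7) for t in range(-4, 5)),
           key=lambda pose: partial_hausdorff(transform(sensed, *pose), reference))
print("estimated pose:", [round(v, 2) for v in best])
```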

  4. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms featured in the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm is applied to assess the corners of the field lines using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color models of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
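
    The unwrapping step described above (sampling the omnidirectional image along radial scan-lines around the mirror centre to obtain a panoramic image) can be sketched as follows; the image size, centre, and radius limits are illustrative assumptions rather than the authors' calibration.

```python
# Minimal sketch of unwrapping an omnidirectional camera image into a
# panoramic image by sampling along radial scan-lines around the mirror
# centre. Centre/radius values are illustrative assumptions.
import numpy as np

def unwrap(omni, cx, cy, r_min, r_max, n_angles=360, n_radii=80):
    h, w = omni.shape[:2]
    pano = np.zeros((n_radii, n_angles) + omni.shape[2:], dtype=omni.dtype)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_min, r_max, n_radii)
    for j, a in enumerate(angles):           # one radial scan-line per column
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, w - 1)
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, h - 1)
        pano[:, j] = omni[ys, xs]
    return pano

# Synthetic 480x640 grayscale omni image, mirror centred at (320, 240).
omni = (np.random.rand(480, 640) * 255).astype(np.uint8)
panorama = unwrap(omni, cx=320, cy=240, r_min=60, r_max=220)
print(panorama.shape)                        # (80, 360): rows = radius, columns = bearing
```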

  5. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhaus, Maarten; Alena, Richard L.; Berrios, Daniel; Dowding, John; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail

    2005-01-01

    We have developed and tested an advanced EVA communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. This system, called Mobile Agents (MA), is voice controlled and provides information verbally to the astronauts through programs called personal agents. The system partly automates the role of CapCom in Apollo-including monitoring and managing EVA navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. EVA data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in the context of use, including six years of ethnographic observation of field geology. Our approach is to develop automation that supports the human work practices, allowing people to do what they do well, and to work in ways they are most familiar. Field experiments in Utah have enabled empirically discovering requirements and testing alternative technologies and protocols. This paper reports on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to move and wait in various locations to serve as a relay on the wireless network. The MA system is applicable to many space work situations that involve creating and navigating from maps (including configuring equipment for local topology), interacting with piloted and unpiloted rovers, adapting to environmental conditions, and remote team collaboration involving people and robots.

  6. Integrated High-Speed Torque Control System for a Robotic Joint

    NASA Technical Reports Server (NTRS)

    Davis, Donald R. (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Valvo, Michael C. (Inventor); Askew, R. Scott (Inventor)

    2013-01-01

    A control system for achieving high-speed torque for a joint of a robot includes a printed circuit board assembly (PCBA) having a collocated joint processor and high-speed communication bus. The PCBA may also include a power inverter module (PIM) and local sensor conditioning electronics (SCE) for processing sensor data from one or more motor position sensors. Torque control of a motor of the joint is provided via the PCBA as a high-speed torque loop. Each joint processor may be embedded within or collocated with the robotic joint being controlled. Collocation of the joint processor, PIM, and high-speed bus may increase noise immunity of the control system, and the localized processing of sensor data from the joint motor at the joint level may minimize bus cabling to and from each control node. The joint processor may include a field programmable gate array (FPGA).

  7. Mobile robots II; Proceedings of the Meeting, Cambridge, MA, Nov. 5, 6, 1987

    SciTech Connect

    Wolfe, W.J.; Chun, W.H.

    1988-01-01

    Topics discussed are autonomous vehicle guidance, three-dimensional systems, the Mars rover, motion analysis, and planning and navigation. Particular papers are presented on a real-time system architecture for a mobile robot, distributed scene analysis for autonomous road vehicle guidance, the vision system for a Mars rover, the recovery of motion parameters using optical flow, and Prolog-based world models for mobile robot navigation.

  8. Exhaustive geographic search with mobile robots along space-filling curves

    SciTech Connect

    Spires, S.V.; Goldsmith, S.Y.

    1998-03-01

    Swarms of mobile robots can be tasked with searching a geographic region for targets of interest, such as buried land mines. The authors assume that the individual robots are equipped with sensors tuned to the targets of interest, that these sensors have limited range, and that the robots can communicate with one another to enable cooperation. How can a swarm of cooperating sensate robots efficiently search a given geographic region for targets in the absence of a priori information about the targets' locations? Many of the obvious approaches are inefficient or lack robustness. One efficient approach is to have the robots traverse a space-filling curve. For many geographic search applications, this method is energy-frugal, highly robust, and provides guaranteed coverage in a finite time that decreases as the reciprocal of the number of robots sharing the search task. Furthermore, it minimizes the amount of robot-to-robot communication needed for the robots to organize their movements. This report presents some preliminary results from applying the Hilbert space-filling curve to geographic search by mobile robots.
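
    A minimal sketch of the approach described above: convert Hilbert-curve indices to grid cells with the standard index-to-coordinate recurrence and split the curve into equal contiguous segments, one per robot. The grid order, robot count, and even split are illustrative assumptions.

```python
# Minimal sketch of exhaustive search along a Hilbert space-filling curve,
# with the curve split evenly among a swarm of robots.
def d2xy(order, d):
    # Standard Hilbert-curve conversion: index d -> (x, y) cell on a
    # 2**order x 2**order grid.
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def robot_waypoints(order, n_robots):
    # Split the full curve into equal contiguous segments, one per robot,
    # so coverage time scales roughly as 1 / n_robots.
    total = (1 << order) ** 2
    seg = total // n_robots
    return [[d2xy(order, d) for d in range(i * seg, (i + 1) * seg)]
            for i in range(n_robots)]

routes = robot_waypoints(order=4, n_robots=4)   # 16x16 grid, 4 robots
for i, route in enumerate(routes):
    print(f"robot {i}: {len(route)} cells, starts at {route[0]}, ends at {route[-1]}")
```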

  9. Mobile robot worksystem (Rosie). Innovative technology summary report

    SciTech Connect

    1999-05-01

    The US Department of Energy (DOE) and the Federal Energy Technology Center (FETC) have developed a Large Scale Demonstration Project (LSDP) at the Chicago Pile-5 Research Reactor (CP-5) at Argonne National Laboratory-East (ANL). The objective of the LSDP is to demonstrate potentially beneficial Deactivation and Decommissioning (D and D) technologies in comparison with current baseline technologies. Rosie is a mobile robot worksystem developed for nuclear facility D and D. Rosie performs mechanical dismantlement of radiologically contaminated structures by remotely deploying other tools or systems. At the CP-5 reactor site, Rosie is a mobile platform used to support reactor assembly demolition through its long reach and heavy lift capability and its deployment and positioning of a Kraft Predator dexterous manipulator arm. Rosie is a tethered robotic system with a 50 m (165 ft) tether, controlled via teleoperation from a control console located outside of the radiological containment area. The operator uses Rosie to move, lift or offload radioactive materials using its integral lifting hook, or to position the Kraft Predator arm in locations where the arm can be used to dismantle parts of the CP-5 reactor. The specific operating areas were concentrated in two high radiation areas: one at the top of the reactor structure, atop and within the reactor tank assembly, and the second at a large opening on the west side of the reactor's biological shield called the west thermal column. In the first of these areas, low-level radioactive waste previously segmented or dismantled by the Dual Arm Work Platform (DAWP) and placed into a steel drum or transfer can was moved to a staging area for manual packaging. In the latter area, the manipulator arm removed and transferred shielding blocks from the west thermal column area of the reactor into waste containers. Rosie can also deploy up to twelve remotely controlled television cameras, some with microphones, which can be used

  10. a Novel Obstacle Avoidance Approach for Mobile Robot System Including Target Capturing

    NASA Astrophysics Data System (ADS)

    El Kamel, M. A.; Beji, L.; Abichou, A.; Mammar, S.

    2009-03-01

    In this work we focused on the study of how a mobile robot with a kinematic model can reach a target in a hostile environment containing one obstacle. We developed a new approach which breaks the control law into the sum of a repulsive part u_r and an attractive part u_a to make the mobile robot converge to the target while avoiding the obstacle. Our approach is based on the Lyapunov technique and a transformation to polar coordinates in order to build a control law without analytic switching among the different cases of the robot's navigation. Simulations are carried out for two navigation scenarios of target capturing.
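
    The control idea above (a law written as the sum of an attractive part u_a toward the target and a repulsive part u_r active near the obstacle) can be sketched for a point kinematic robot as follows. This is a plain potential-field illustration with made-up gains and ranges, not the paper's Lyapunov-derived polar-coordinate law, and like all such fields it can in principle suffer from local minima.

```python
# Minimal sketch of target reaching with obstacle avoidance for a point
# kinematic robot, using u = u_a + u_r: an attractive term toward the target
# plus a repulsive term active only inside the obstacle's influence zone.
import math

target = (8.0, 8.0)
obstacle, influence = (4.0, 4.2), 2.0
pos = [0.0, 0.0]
dt, k_a, k_r = 0.05, 0.8, 1.5

for step in range(1000):
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist_t = math.hypot(dx, dy)
    if dist_t < 0.05:
        break
    # Attractive part u_a: unit vector toward the target.
    ua = (k_a * dx / dist_t, k_a * dy / dist_t)
    # Repulsive part u_r: pushes away from the obstacle when close to it.
    ox, oy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    dist_o = math.hypot(ox, oy)
    if dist_o < influence:
        gain = k_r * (1.0 / dist_o - 1.0 / influence)
        ur = (gain * ox / dist_o, gain * oy / dist_o)
    else:
        ur = (0.0, 0.0)
    u = (ua[0] + ur[0], ua[1] + ur[1])
    pos[0] += u[0] * dt
    pos[1] += u[1] * dt

print("final position:", [round(p, 2) for p in pos], "after", step + 1, "steps")
```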

  11. Homophily and the speed of social mobilization: the effect of acquired and ascribed traits.

    PubMed

    Alstott, Jeff; Madnick, Stuart; Velu, Chander

    2014-01-01

    Large-scale mobilization of individuals across social networks is becoming increasingly prevalent in society. However, little is known about what affects the speed of social mobilization. Here we use a framed field experiment to identify and measure properties of individuals and their relationships that predict mobilization speed. We ran a global social mobilization contest and recorded personal traits of the participants and those they recruited. We studied the effects of ascribed traits (gender, age) and acquired traits (geography and information source) on the speed of mobilization. We found that homophily, a preference for interacting with other individuals with similar traits, had a mixed role in social mobilization. Homophily was present for acquired traits, in which mobilization speed was faster when the recruiter and recruit had the same trait compared to different traits. In contrast, we did not find support for homophily for the ascribed traits. Instead, those traits had other, non-homophily effects: Females mobilized other females faster than males mobilized other males. Younger recruiters mobilized others faster, and older recruits mobilized more slowly. Recruits also mobilized faster when they first heard about the contest directly from the contest organization, and more slowly when hearing from less personal source types (e.g. family vs. media). These findings show that social mobilization includes dynamics that are unlike other, more passive forms of social activity propagation. These findings suggest relevant factors for engineering social mobilization tasks for increased speed.

  13. An Autonomous Mobile Robot Guided by a Chaotic True Random Bits Generator

    NASA Astrophysics Data System (ADS)

    Volos, Ch. K.; Kyprianidis, I. M.; Stouboulos, I. N.; Stavrinides, S. G.; Anagnostopoulos, A. N.

    In this work a robot controller that ensures chaotic motion of an autonomous mobile robot is presented. This new strategy, which is very useful in many robotic missions, generates an unpredictable trajectory by using a chaotic path-planning generator. The proposed generator produces a trajectory that is the result of a sequence of planned target locations. In contrast with other similar works, this one is based on a new chaotic true random bits generator, which has as a basic feature the coexistence of two different synchronization phenomena between mutually coupled identical nonlinear circuits. Simulation tests confirm that the robot's whole workspace is covered in an unpredictable way within a very satisfactory time.

  14. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and the video stream are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
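
    The "simple analytic geometry" mentioned above typically amounts to back-projecting the clicked pixel into a ray and intersecting it with the floor plane of a calibrated camera. The sketch below shows that computation; the intrinsics, camera height, and tilt are illustrative assumptions, not the paper's calibration or coordinate conventions.

```python
# Minimal sketch of estimating the 3-D position of a point on the ground from
# a single calibrated camera: back-project the clicked pixel into a ray and
# intersect it with the floor plane.
import numpy as np

fx = fy = 600.0          # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0    # principal point (assumed)
cam_height = 0.25        # camera height above the floor [m] (assumed)
tilt = np.radians(20.0)  # downward tilt of the optical axis (assumed)

def pixel_to_ground(u, v):
    # Ray in camera coordinates (x right, y down, z forward).
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Rotate into world coordinates (x forward, y left, z up) for a camera
    # pitched down by 'tilt'.
    c, s = np.cos(tilt), np.sin(tilt)
    r_world = np.array([
        ray_cam[2] * c - ray_cam[1] * s,      # forward
        -ray_cam[0],                          # left
        -(ray_cam[1] * c + ray_cam[2] * s),   # up (negative = pointing down)
    ])
    if r_world[2] >= 0:
        return None                           # ray never reaches the floor
    t = cam_height / -r_world[2]              # scale so the ray drops to z = 0
    return np.array([0.0, 0.0, cam_height]) + t * r_world

print(pixel_to_ground(400, 300))              # a pixel below the image centre
```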

  15. Methods in the analysis of mobile robots behavior in unstructured environment

    NASA Astrophysics Data System (ADS)

    Mondoc, Alina; Dolga, Valer; Gorie, Nina

    2012-11-01

    A mobile robot can be described as a mechatronic system that must execute an application in a working environment. Starting from the mechatronic concept, the authors highlight the structure of the mechatronic system based on its secondary functions. The mobile robot moves either in a known, structured environment, which can be described by an appropriate mathematical model, or in an unfamiliar, unstructured environment, in which random aspects prevail. Starting from a START point, the robot must reach a STOP point under the functional constraints imposed on the one hand by the application and on the other hand by the working environment. The authors focus their presentation on the unstructured environment. In this case the evolution of the mobile robot is based on obtaining information from the work environment, processing it, and integrating the results into an action strategy. The number of sensory elements used is a parameter subject to optimization. Starting from a known mobile robot structure, the authors analyze possible variants of a mathematical model of the wheel-ground contact. Various soil types are analyzed, along with the possibility of obtaining a "signature" for each based on sensory information. Theoretical aspects of the problem are compared with experimental results obtained during the robot's evolution. The mathematical model of the robot system allowed simulation of the robot and its environment, and the simulated evolution is compared with the experimental results.

  16. An intelligent hybrid behavior coordination system for an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Krishnan, Mohan; Paulik, Mark; Fallouh, Samer

    2013-12-01

    In this paper, the development of a low-cost PID controller with an intelligent behavior coordination system is described for an autonomous mobile robot equipped with IR sensors, ultrasonic sensors, a regulator, and RC filters, built on an HCS12 microcontroller platform with embedded systems. A novel hybrid PID controller and behavior coordination system is developed for wall-following navigation and obstacle avoidance of an autonomous mobile robot. The adaptive control used in this robot is a hybrid PID algorithm associated with template and behavior coordination models. Software development covers motor control, the behavior coordination intelligent system, and sensor fusion. In addition, a module-based programming technique is adopted to improve the efficiency of integrating the hybrid PID, template, and behavior coordination model algorithms. The hybrid model is developed to synthesize the PID control algorithms, template, and behavior coordination technique for wall-following navigation with obstacle avoidance. The motor control, obstacle avoidance, and wall-following navigation algorithms are developed to propel and steer the autonomous mobile robot. Experiments validate how this PID controller and behavior coordination system direct an autonomous mobile robot to perform wall-following navigation with obstacle avoidance. The hardware configuration and module-based technique are described in this paper. Experimental results demonstrate that the robot is successfully guided by the hybrid PID controller and behavior coordination system for wall-following navigation with obstacle avoidance.
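
    A minimal sketch of the PID part of such a wall-following scheme: the controller steers to hold a desired side distance to the wall, and a front-range check overrides it for obstacle avoidance. The gains, setpoints, and coordination rule are illustrative assumptions, not the HCS12 implementation described above.

```python
# Minimal sketch of PID wall-following with a simple front-obstacle override.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

DESIRED_WALL_DIST = 0.40   # [m] setpoint for the side IR/ultrasonic reading
FRONT_STOP_DIST = 0.30     # [m] obstacle-avoidance override threshold
pid = PID(kp=2.0, ki=0.1, kd=0.4, dt=0.05)

def control(side_dist, front_dist, cruise_speed=0.3):
    # Behavior coordination: obstacle avoidance overrides wall-following.
    if front_dist < FRONT_STOP_DIST:
        return 0.0, 1.0                       # stop and turn away
    steering = pid.step(DESIRED_WALL_DIST - side_dist)
    return cruise_speed, max(-1.0, min(1.0, steering))

# Example: robot is too close to the side wall, nothing ahead.
print(control(side_dist=0.30, front_dist=1.2))
```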

  19. A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2014-12-01

    Planning of the shortest/optimal route is essential for the efficient operation of an autonomous mobile robot or vehicle. In this paper Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First we frame an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.
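
    For readers unfamiliar with IWO, the sketch below shows its basic loop applied to waypoint path planning: each weed produces seeds in proportion to its relative fitness, seeds are dispersed with a standard deviation that shrinks over iterations, and the colony is truncated to a maximum size (competitive exclusion). The objective (path length plus a clearance penalty around a single obstacle) and all parameters are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of Invasive Weed Optimization (IWO) for waypoint path planning.
import math
import random

START, GOAL = (0.0, 0.0), (9.0, 9.0)
OBSTACLE, SAFE_R = (4.5, 4.5), 1.5
N_WP, MAX_POP, ITERS = 4, 25, 120
SEED_MIN, SEED_MAX = 1, 5
SIGMA_INIT, SIGMA_FINAL = 2.0, 0.05

def cost(path):
    pts = [START] + path + [GOAL]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    # Penalize waypoints that violate the obstacle clearance.
    penalty = sum(max(0.0, SAFE_R - math.dist(p, OBSTACLE)) for p in path)
    return length + 20.0 * penalty

colony = [[(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(N_WP)]
          for _ in range(8)]

for it in range(ITERS):
    costs = [cost(w) for w in colony]
    best_c, worst_c = min(costs), max(costs)
    # Dispersal sigma shrinks nonlinearly as iterations proceed.
    sigma = SIGMA_FINAL + (SIGMA_INIT - SIGMA_FINAL) * (1 - it / ITERS) ** 3
    offspring = []
    for weed, c in zip(colony, costs):
        # Fitter weeds (lower cost) produce more seeds.
        ratio = 1.0 if worst_c == best_c else (worst_c - c) / (worst_c - best_c)
        n_seeds = SEED_MIN + int(round(ratio * (SEED_MAX - SEED_MIN)))
        for _ in range(n_seeds):
            offspring.append([(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
                              for x, y in weed])
    # Competitive exclusion: keep only the best MAX_POP weeds.
    colony = sorted(colony + offspring, key=cost)[:MAX_POP]

best = min(colony, key=cost)
print("best cost:", round(cost(best), 2),
      "waypoints:", [(round(x, 1), round(y, 1)) for x, y in best])
```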

  20. Space-time modeling using environmental constraints in a mobile robot system

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1990-01-01

    Grid-based models of a robot's local environment have been used by many researchers building mobile robot control systems. The attraction of grid-based models is their clear parallel between the internal model and the external world. However, the discrete nature of such representations does not match well with the continuous nature of actions and usually serves to limit the abilities of the robot. This work describes a spatial modeling system that extracts information from a grid-based representation to form a symbolic representation of the robot's local environment. The approach makes a separation between the representation provided by the sensing system and the representation used by the action system. Separation allows asynchronous operation between sensing and action in a mobile robot, as well as the generation of a more continuous representation upon which to base actions.

  1. Mobile robot trajectory tracking using noisy RSS measurements: an RFID approach.

    PubMed

    Miah, M Suruz; Gueaieb, Wail

    2014-03-01

    Most RF beacon-based mobile robot navigation techniques rely on approximating line-of-sight (LOS) distances between the beacons and the robot. This is mostly performed using the robot's received signal strength (RSS) measurements from the beacons. However, accurate mapping between the RSS measurements and the LOS distance is almost impossible to achieve in reverberant environments. This paper presents a partially-observed feedback controller for a wheeled mobile robot where the feedback signal is in the form of noisy RSS measurements emitted from radio frequency identification (RFID) tags. The proposed controller requires neither an accurate mapping between the LOS distance and the RSS measurements, nor the linearization of the robot model. The controller performance is demonstrated through numerical simulations and real-time experiments. PMID:24268746

  3. Maps managing interface design for a mobile robot navigation governed by a BCI

    NASA Astrophysics Data System (ADS)

    Auat Cheeín, Fernando A.; Carelli, Ricardo; Cardoso Celeste, Wanderley; Freire Bastos, Teodiano; di Sciascio, Fernando

    2007-11-01

    In this paper, a maps managing interface is proposed. This interface is governed by a Brain Computer Interface (BCI), which also governs a mobile robot's movements. If the robot is inside a known environment, the user can load a map from the maps managing interface in order to navigate it. Otherwise, if the robot is in an unknown environment, a Simultaneous Localization and Mapping (SLAM) algorithm is run in order to obtain a probabilistic grid map of that environment. That map is then loaded into the map database for future navigations. While the SLAM algorithm is running, the user has direct control of the robot's movements via the BCI. The complete system is applied to a mobile robot and can also be applied to an autonomous wheelchair, which has the same kinematics. Experimental results are also shown.

  4. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Richard L.; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail; Shum, Simon J. Buckingham; Shadbolt, Nigel; Rupert, Shannon M.

    2007-01-01

    Mobile Agents (MA) is an advanced Extra-Vehicular Activity (EVA) communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. MA is voice controlled and provides information verbally to the astronauts through programs called "personal agents." The system partly automates the role of CapCom in Apollo-including monitoring and managing navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. Data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in authentic work contexts, including six years of ethnographic observation of field geology. Analog field experiments in Utah enabled empirically discovering requirements and testing alternative technologies and protocols. We report on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to serve as a relay on the wireless network.

  5. Improvement in Visual Target Tracking for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Madison, Richard

    2006-01-01

    In an improvement of the visual-target-tracking software used aboard a mobile robot (rover) of the type used to explore the Martian surface, an affine-matching algorithm has been replaced by a combination of a normalized- cross-correlation (NCC) algorithm and a template-image-magnification algorithm. Although neither NCC nor template-image magnification is new, the use of both of them to increase the degree of reliability with which features can be matched is new. In operation, a template image of a target is obtained from a previous rover position, then the magnification of the template image is based on the estimated change in the target distance from the previous rover position to the current rover position (see figure). For this purpose, the target distance at the previous rover position is determined by stereoscopy, while the target distance at the current rover position is calculated from an estimate of the current pose of the rover. The template image is then magnified by an amount corresponding to the estimated target distance to obtain a best template image to match with the image acquired at the current rover position.

  6. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  7. An integrated collision prediction and avoidance scheme for mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1993-01-01

    A formulation that makes possible the integration of collision prediction and avoidance stages for mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the distance between the robot and a moving obstacle is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. A feedback control is developed and local asymptotic stability is proved if the velocity of the moving obstacle is bounded. Furthermore, a solution to the problem of inverse dynamics for the mobile robot is given. Simulation results verify the value of the proposed strategy.
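
    The central idea of scaling the velocity along the nominal trajectory can be illustrated with a simple distance-based rule; the ramp below is a hypothetical stand-in for the paper's feedback law, not the authors' controller:

    ```python
    def scaled_speed(v_nominal, obstacle_distance, d_stop=0.5, d_slow=2.0):
        """Scale the nominal path speed so the robot stops before reaching d_stop (meters).

        Beyond d_slow the nominal speed is kept; between d_slow and d_stop the speed
        ramps down linearly; inside d_stop the robot is commanded to stop.
        """
        if obstacle_distance <= d_stop:
            return 0.0
        if obstacle_distance >= d_slow:
            return v_nominal
        return v_nominal * (obstacle_distance - d_stop) / (d_slow - d_stop)
    ```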

  8. On terrain acquisition by a finite-sized mobile robot in plane

    SciTech Connect

    Rao, N.S.V.; Iyengar, S.S.; Jorgensen, C.C.; Weisbin, C.R.

    1987-01-01

    The terrain acquisition problem deals with the acquisition of the complete obstacle terrain model by a mobile robot placed in an unexplored terrain. This is a precursory problem to many well-known find-path and related problems which assume the availability of the complete terrain model. In this paper, we present a method for terrain acquisition by a finite-sized robot operating in a plane populated by an unknown (but finite) number of polygonal obstacles; each obstacle is arbitrarily located and has an unknown (but finite) number of vertices. The robot progressively explores newer vertices of the obstacles using sensor equipment. We show that the complete terrain model will be built by the robot in a finite time. We also show that at any point in time the partially acquired terrain suffices for the navigation of the robot during the exploration. Hence we conclude that navigation techniques for known terrains can be applied for robot navigation during exploration.

  10. Distributed flow sensing for closed-loop speed control of a flexible fish robot.

    PubMed

    Zhang, Feitian; Lagor, Francis D; Yeo, Derrick; Washington, Patrick; Paley, Derek A

    2015-12-01

    Flexibility plays an important role in fish behavior by enabling high maneuverability for predator avoidance and swimming in turbulent flow. This paper presents a novel flexible fish robot equipped with distributed pressure sensors for flow sensing. The body of the robot is molded from soft, hyperelastic material, which provides flexibility. Its Joukowski-foil shape is conducive to modeling the fluid analytically. A quasi-steady potential-flow model is adopted for real-time flow estimation, whereas a discrete-time vortex-shedding flow model is used for higher-fidelity simulation. The dynamics for the flexible fish robot yield a reduced model for one-dimensional swimming. A recursive Bayesian filter assimilates pressure measurements to estimate flow speed, angle of attack, and foil camber. The closed-loop speed-control strategy combines an inverse-mapping feedforward controller based on an average model derived for periodic actuation of angle-of-attack and a proportional-integral feedback controller utilizing the estimated flow information. Simulation and experimental results are presented to show the effectiveness of the estimation and control strategy. The paper provides a systematic approach to distributed flow sensing for closed-loop speed control of a flexible fish robot by regulating the flapping amplitude. PMID:26495855
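
    The control structure described above, an inverse-map feedforward term plus proportional-integral feedback on the estimated flow speed, can be sketched roughly as follows; inverse_map and the gains are placeholders, not the paper's averaged model:

    ```python
    class SpeedController:
        """Feedforward + PI feedback on flapping amplitude (illustrative only)."""

        def __init__(self, kp, ki, inverse_map):
            self.kp, self.ki = kp, ki
            self.inverse_map = inverse_map  # maps desired speed -> feedforward flapping amplitude
            self.integral = 0.0

        def update(self, v_desired, v_estimated, dt):
            error = v_desired - v_estimated  # v_estimated comes from the Bayesian flow estimator
            self.integral += error * dt
            amplitude = self.inverse_map(v_desired) + self.kp * error + self.ki * self.integral
            return max(0.0, amplitude)       # flapping amplitude cannot be negative
    ```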

  11. High-speed 3-dimensional imaging in robot-assisted thoracic surgical procedures.

    PubMed

    Kajiwara, Naohiro; Akata, Soichi; Hagiwara, Masaru; Yoshida, Koichi; Kato, Yasufumi; Kakihana, Masatoshi; Ohira, Tatsuo; Kawate, Norihiko; Ikeda, Norihiko

    2014-06-01

    We used a high-speed 3-dimensional (3D) image analysis system (SYNAPSE VINCENT, Fujifilm Corp, Tokyo, Japan) to determine the best positioning of robotic arms and instruments preoperatively. The da Vinci S (Intuitive Surgical Inc, Sunnyvale, CA) was easily set up accurately and rapidly for this operation. Preoperative simulation and intraoperative navigation using the SYNAPSE VINCENT for robot-assisted thoracic operations enabled efficient planning of the operation settings. The SYNAPSE VINCENT can detect the tumor location and depict surrounding tissues quickly, accurately, and safely. This system is also excellent for navigational and educational use. PMID:24882302

  13. Studying the dynamics of high-speed elastic kinematically controlled robot-manipulator

    NASA Astrophysics Data System (ADS)

    Zavrashina, T. V.; Zavrashina, N. M.

    The authors formulate the problem of kinematically controlling the spatial motions of a flexible multi-link space robot-manipulator under conditions of high-speed maneuvering. The constructed mathematical model of the system dynamics takes into account the distributed elasticity and inertia of the manipulator links, which undergo compound motion. A numerical investigation of the dynamic characteristics of a two-link robot carrying a load is given as an example.

  14. Passive mapping and intermittent exploration for mobile robots

    NASA Technical Reports Server (NTRS)

    Engleson, Sean P.

    1994-01-01

    An adaptive state space architecture is combined with a diktiometric representation to provide the framework for designing a robot mapping system with flexible navigation planning tasks. This involves indexing waypoints described as expectations, geometric indexing, and perceptual indexing. Matching and updating the robot's projected position and sensory inputs with indexing waypoints involves matchers, dynamic priorities, transients, and waypoint restructuring. The robot's map learning can be organized around the principles of passive mapping.

  15. A mobile robot system for ground servicing operations on the space shuttle

    NASA Technical Reports Server (NTRS)

    Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.

    1992-01-01

    A mobile system for space shuttle servicing, the Tessellator, has been configured, designed and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, covering mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.

  17. A Mobile Robot Localization via Indoor Fixed Remote Surveillance Cameras.

    PubMed

    Shim, Jae Hong; Cho, Young Im

    2016-01-01

    Localization, which is a technique required by service robots to operate indoors, has been studied in various ways. Most localization techniques have the robot measure environmental information to obtain location information; however, this is a high-cost option because it uses extensive equipment and complicates robot development. If an external device is used to determine a robot's location and transmit this information to the robot, the cost of internal equipment required for location recognition can be reduced. This will simplify robot development. Thus, this study presents an effective method to control robots by obtaining their location information using a map constructed from visual information from surveillance cameras installed indoors. With only a single image of an object, it is difficult to gauge its size due to occlusion. Therefore, we propose a localization method using several neighboring surveillance cameras. A two-dimensional map containing robot and object position information is constructed using images from the cameras. The concept of this technique is based on modeling the four edges of the projected image of the field of coverage of the camera and an image processing algorithm that finds the object's center in order to enhance the location estimation of objects of interest. We experimentally demonstrate the effectiveness of the proposed method by analyzing the resulting movement of a robot in response to the location information obtained from the two-dimensional map. The accuracy of the multi-camera setup was measured in advance. PMID:26861325
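
    One common way to realize the image-to-floor-map projection implied above is a planar homography per camera; the sketch below assumes pre-calibrated 3x3 matrices mapping pixels to floor coordinates and is not the authors' implementation:

    ```python
    import numpy as np

    def pixel_to_floor(H, u, v):
        """Project an image point (u, v) to floor-plane coordinates using homography H."""
        p = H @ np.array([u, v, 1.0])
        return p[0] / p[2], p[1] / p[2]

    def fuse_floor_estimates(points):
        """Average the floor positions of the same object seen by neighbouring cameras."""
        return np.asarray(points, dtype=float).mean(axis=0)

    # Hypothetical usage with two calibrated cameras H1 and H2:
    # x1 = pixel_to_floor(H1, 320, 400)
    # x2 = pixel_to_floor(H2, 180, 350)
    # object_on_map = fuse_floor_estimates([x1, x2])
    ```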

  18. Introduction to autonomous mobile robotics using Lego Mindstorms NXT

    NASA Astrophysics Data System (ADS)

    Akın, H. Levent; Meriçli, Çetin; Meriçli, Tekin

    2013-12-01

    Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the Lego Mindstorms NXT kits are used as the robot platform. The aims, scope and contents of the course are presented, and the design of the laboratory sessions as well as the term projects, which address several core problems of robotics and artificial intelligence simultaneously, are explained in detail.

  19. A satellite orbital testbed for SATCOM using mobile robots

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Lu, Wenjie; Wang, Zhonghai; Jia, Bin; Wang, Gang; Wang, Tao; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2016-05-01

    This paper develops and evaluates a satellite orbital testbed (SOT) for satellite communications (SATCOM). SOT can emulate a 3D satellite orbit using omni-wheeled robots and a robotic arm. The 3D motion of the satellite is partitioned into movement in the equatorial plane and up-down motion in the vertical plane. The former is emulated by omni-wheeled robots while the up-down motion is performed by a stepper-motor-controlled ball along a rod (a robotic arm) attached to the robot. The emulated satellite positions are fed to the measurement model, whose results are used to perform multiple space object tracking. The tracking results then feed maneuver detection and collision alerting. Satellite maneuver commands are translated into robot commands and robotic arm commands. In SATCOM, the effects of jamming depend on the range and angles of the positions of the satellite transponder relative to the jamming satellite. We extend the SOT to include USRP transceivers. In the extended SOT, the relative ranges and angles are implemented using omni-wheeled robots and robotic arms.
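
    The partition of the emulated satellite position into an in-plane robot command and a vertical arm command reduces to splitting coordinates and scaling them to the lab; a minimal sketch with an assumed scale factor:

    ```python
    def satellite_to_testbed(x, y, z, scale=1.0e-6):
        """Map an emulated satellite position (meters) to testbed commands.

        The (x, y) components drive the omni-wheeled robot on the floor, while the
        z component sets the ball height on the robotic arm, both scaled to lab size.
        """
        robot_target = (x * scale, y * scale)  # planar target for the omni-wheeled base
        arm_height = z * scale                 # up-down position of the stepper-driven ball
        return robot_target, arm_height
    ```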

  20. Controlling Flexible Robot Arms Using High Speed Dynamics Process

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan (Inventor)

    1996-01-01

    A robot manipulator controller for a flexible manipulator arm having plural bodies connected at respective movable hinges and flexible in plural deformation modes corresponding to respective modal spatial influence vectors relating deformations of plural spaced nodes of respective bodies to the plural deformation modes, operates by computing articulated body quantities for each of the bodies from respective modal spatial influence vectors, obtaining specified body forces for each of the bodies, and computing modal deformation accelerations of the nodes and hinge accelerations of the hinges from the specified body forces, from the articulated body quantities and from the modal spatial influence vectors. In one embodiment of the invention, the controller further operates by comparing the accelerations thus computed to the desired manipulator motion to determine a motion discrepancy, and correcting the specified body forces so as to reduce the motion discrepancy. The manipulator bodies and hinges are characterized by respective vectors of deformation and hinge configuration variables, and computing modal deformation accelerations and hinge accelerations is carried out for each one of the bodies, beginning with the outermost body, by computing a residual body force from the residual body force of the previous body and from the vector of deformation and hinge configuration variables, computing a resultant hinge acceleration from the body force, the residual body force and the articulated hinge inertia, and revising the residual body force and the modal body acceleration.

  1. Controlling flexible robot arms using a high speed dynamics process

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan (Inventor); Rodriguez, Guillermo (Inventor)

    1992-01-01

    Described here is a robot controller for a flexible manipulator arm having plural bodies connected at respective movable hinges, and flexible in plural deformation modes. It is operated by computing articulated body quantities for each of the bodies from the respective modal spatial influence vectors, obtaining specified body forces for each of the bodies, and computing modal deformation accelerations of the nodes and hinge accelerations of the hinges from the specified body forces, from the articulated body quantities and from the modal spatial influence vectors. In one embodiment of the invention, the controller further operates by comparing the accelerations thus computed to the desired manipulator motion to determine a motion discrepancy, and correcting the specified body forces so as to reduce the motion discrepancy. The manipulator bodies and hinges are characterized by respective vectors of deformation and hinge configuration variables. Computing modal deformation accelerations and hinge accelerations is carried out for each of the bodies: beginning with the outermost body, a residual body force is computed from the residual body force of the previous body, and a resultant hinge acceleration is computed from the body force; then, for each of the bodies beginning with the innermost body, a modal body acceleration is computed from the modal body acceleration of the previous body, and the modal deformation acceleration and hinge acceleration are computed from the resulting hinge acceleration and the modal body acceleration.

  2. Remote control of mobile robots through human eye gaze: the design and evaluation of an interface

    NASA Astrophysics Data System (ADS)

    Latif, Hemin Omer; Sherkat, Nasser; Lotfi, Ahmad

    2008-10-01

    Controlling mobile robots remotely requires the operator to monitor the status of the robot through some sort of feedback. Assuming a vision-based feedback system is used, the operator is required to closely monitor the images while navigating the robot in real time. This engages both the eyes and the hands of the operator. Since the eyes are engaged in the monitoring task anyway, their gaze can be used to navigate the robot in order to free the hands of the operator. However, the challenge lies in developing an interaction interface that enables an intuitive distinction to be made between monitoring and commanding. This paper presents a novel means of constructing a user interface to meet this challenge. A range of solutions is constructed by augmenting the visual feedback with command regions to investigate the extent to which a user can intuitively control the robot. An experimental platform comprising a mobile robot together with cameras and an eye-gaze system is constructed. The design of the system allows control of the robot, control of onboard cameras and control of the interface through eye gaze. A number of tasks are designed to evaluate the proposed solutions. This paper presents the design considerations and the results of the evaluation. Overall it is found that the proposed solutions provide effective means of successfully navigating the robot for a range of tasks.
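
    One way to realize the command-region augmentation described above is to classify the gaze point against screen regions surrounding the video feed; the layout below is a hypothetical example, not the evaluated interface:

    ```python
    def gaze_to_command(gx, gy, margin=0.15):
        """Map a normalized gaze point (0..1, 0..1) on the display to a robot command.

        Gaze inside the central video area means monitoring only; gaze in the border
        strips issues steering commands.
        """
        if gx < margin:
            return "turn_left"
        if gx > 1.0 - margin:
            return "turn_right"
        if gy < margin:
            return "forward"
        if gy > 1.0 - margin:
            return "stop"
        return "monitor"  # eyes on the video feed: no motion command issued
    ```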

  3. ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots

    SciTech Connect

    Parker, L.E.

    1995-02-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
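
    ALLIANCE drives action selection with a motivation level per behavior set that grows through impatience and resets through acquiescence; a highly simplified sketch of that update (rates and threshold are illustrative, not the published formulation):

    ```python
    class Motivation:
        """Simplified ALLIANCE-style motivational behavior (illustrative parameters)."""

        def __init__(self, impatience_rate=1.0, threshold=10.0):
            self.level = 0.0
            self.impatience_rate = impatience_rate
            self.threshold = threshold

        def update(self, dt, task_done, teammate_active, task_applicable):
            if task_done or not task_applicable:
                self.level = 0.0                 # acquiesce / task no longer relevant
                return False
            rate = self.impatience_rate
            if teammate_active:
                rate *= 0.1                      # stay patient while a teammate works on the task
            self.level += rate * dt
            return self.level >= self.threshold  # True -> activate this behavior set
    ```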

  4. Remote wave measurements using autonomous mobile robotic systems

    NASA Astrophysics Data System (ADS)

    Kurkin, Andrey; Zeziulin, Denis; Makarov, Vladimir; Belyakov, Vladimir; Tyugin, Dmitry; Pelinovsky, Efim

    2016-04-01

    The project covers the development of a technology for monitoring and forecasting the state of the coastal zone environment using radar equipment transported by autonomous mobile robotic systems (AMRS). Sought-after areas of application are the eastern and northern coasts of Russia, where continuous collection of information on topographic changes of the coastal zone and hydrodynamic measurements in environments inaccessible to humans are needed. The intensity of the wave reflections received by radar surveillance is directly related to the height of the waves. Mathematical models and algorithms for processing experimental data (signal selection, spectral analysis, wavelet analysis), recalculation of landwash from data on wave heights far from the shore, and determination of the threshold values of wave heights far from the shore have been developed. A program complex for operating the experimental prototype AMRS has been developed, comprising the following modules: data loading module, reporting module, georeferencing module, data analysis module, monitoring module, hardware control module, and graphical user interface. Further work will involve testing the manufactured experimental prototype along selected routes on the coastline of Sakhalin Island. Field tests will reveal shortcomings of the development and identify ways of optimizing the structure and operating algorithms of the AMRS, as well as the operation of the measuring equipment. The presented results have been obtained at Nizhny Novgorod State Technical University n.a. R. Alekseev in the framework of the Federal Target Program «Research and development on priority directions of scientific-technological complex of Russia for 2014 - 2020 years» (agreement № 14.574.21.0089 (unique identifier of agreement - RFMEFI57414X0089)).

  5. Introduction to Autonomous Mobile Robotics Using "Lego Mindstorms" NXT

    ERIC Educational Resources Information Center

    Akin, H. Levent; Meriçli, Çetin; Meriçli, Tekin

    2013-01-01

    Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the…

  6. Evolving mobile robots able to display collective behaviors.

    PubMed

    Baldassarre, Gianluca; Nolfi, Stefano; Parisi, Domenico

    2003-01-01

    We present a set of experiments in which simulated robots are evolved for the ability to aggregate and move together toward a light target. By developing and using quantitative indexes that capture the structural properties of the emerged formations, we show that evolved individuals display interesting behavioral patterns in which groups of robots act as a single unit. Moreover, evolved groups of robots with identical controllers display primitive forms of situated specialization and play different behavioral functions within the group according to the circumstances. Overall, the results presented in the article demonstrate that evolutionary techniques, by exploiting the self-organizing behavioral properties that emerge from the interactions between the robots and between the robots and the environment, are a powerful method for synthesizing collective behavior. PMID:14556687

  7. Mobile robot based electrostatic spray system for controlling pests on cotton plants in Iraq

    NASA Astrophysics Data System (ADS)

    Al-Mamury, M.; Manivannan, N.; Al-Raweshidy, H.; Balachandran, W.

    2015-10-01

    A mobile robot based electrostatic spray system was developed to combat pest infestation on cotton plants in Iraq. The system consists of a charged spray nozzle, a CCD camera, a mobile robot (vehicle and arm) and an Arduino microcontroller. The Arduino microcontroller is used to control the spray nozzle and the robot. Matlab is used to process the image from the CCD camera and to generate the appropriate control signals for the robot and the spray nozzle. COMSOL multi-physics FEM software was used to design the induction electrodes to achieve maximum charge transfer onto the fan spray liquid film, thereby achieving the desired charge/mass ratio of the spray. The charged spray nozzle was operated in a short-duration pulsed spray mode. Image analysis was employed to investigate the spray deposition on improvised insect targets on an artificial plant.

  8. A maintenance scheme of communication link in mobile robot ad hoc networks based on potential field

    NASA Astrophysics Data System (ADS)

    Jiang, Hong; Jin, WenPing; Yang, GyoYing; Li, LeiMin

    2007-12-01

    Maintaining a communication link between task robots and a control center in mobile robot networks is very important in urgent applications such as remote danger detection. To offer a reliable multi-hop communication link, a link maintaining scheme based on an artificial potential field is presented. The scheme is realized by a task robot and communication relay robots. The task robot performs predefined tasks, and the relays are simple robots that form a communication relay chain. When the robots move toward the destination in formation, an attractive force derived from communication quality is added to the traditional potential field, and the relay robots follow the task robot and automatically stop at adequate locations to form a relay chain from the control station to the task robot. To increase relay usage efficiency, when some relays are replaced by short-cut relays, the redundant relays can be reused by initiating another move toward a specified location. Simulation results show that the scheme can provide a reliable multi-hop communication link, and that the communication connection can be obtained through a minimal number of relays.
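
    A rough sketch of the force composition described above, adding a communication-quality attraction toward the last relay to the usual goal attraction and obstacle repulsion; all gains and thresholds are hypothetical:

    ```python
    import numpy as np

    def total_force(pos, goal, obstacles, link_quality, last_relay,
                    k_goal=1.0, k_obs=2.0, k_comm=3.0, q_min=0.3):
        """Sum goal attraction, obstacle repulsion and a link-keeping attraction."""
        pos, goal, last_relay = map(np.asarray, (pos, goal, last_relay))
        force = k_goal * (goal - pos)              # attraction toward the destination
        for obs in obstacles:                      # repulsion from each obstacle
            d = pos - np.asarray(obs)
            dist = np.linalg.norm(d) + 1e-9
            force += k_obs * d / dist**3
        if link_quality < q_min:                   # degraded link: pull back toward the relay chain
            force += k_comm * (q_min - link_quality) * (last_relay - pos)
        return force
    ```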

  9. Motion generation of peristaltic mobile robot with particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Homma, Takahiro; Kamamichi, Norihiro

    2015-03-01

    In robot development, bio-mimetics, the design of structures and functions inspired by biological systems, is attracting attention. There are many examples of bio-mimetics in robotics, such as legged robots, flapping robots, insect-type robots, and fish-type robots. In this study, we focus on the motion of the earthworm and aim to develop a peristaltic mobile robot. The earthworm is a slender animal moving in soil. It has a segmented body, and each segment can be shortened and lengthened by muscular actions. It moves forward by propagating expansion motions of its segments backward. By mimicking the structure and motion of the earthworm, we can construct a robot with high locomotive performance on irregular ground or in narrow spaces. In this paper, to investigate the motion analytically, a dynamical model is introduced, which consists of series-connected masses. Simple periodic patterns which mimic the motions of earthworms are applied in an open-loop fashion, and the moving patterns are verified through numerical simulations. Furthermore, to generate efficient motion of the robot, a particle swarm optimization algorithm, a meta-heuristic optimization method, is applied. The optimized results are investigated by comparison with the simple periodic patterns.
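
    The traveling-wave gait can be written as a phase-shifted periodic length command for each segment; the waveform below is one simple open-loop pattern of the kind fed to a series-connected multi-mass model, whose parameters a particle swarm optimizer could then tune (values illustrative):

    ```python
    import math

    def segment_lengths(t, n_segments=6, period=1.0, amplitude=0.3, rest_length=1.0):
        """Phase-shifted sinusoidal length commands producing a backward-traveling wave."""
        lengths = []
        for i in range(n_segments):
            phase = 2.0 * math.pi * (t / period - i / n_segments)  # delay grows toward the tail
            lengths.append(rest_length * (1.0 + amplitude * math.sin(phase)))
        return lengths
    ```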

  10. The Longitudinal Impact of Cognitive Speed of Processing Training on Driving Mobility

    ERIC Educational Resources Information Center

    Edwards, Jerri D.; Myers, Charlsie; Ross, Lesley A.; Roenker, Daniel L.; Cissell, Gayla M.; McLaughlin, Alexis M.; Ball, Karlene K.

    2009-01-01

    Purpose: To examine how cognitive speed of processing training affects driving mobility across a 3-year period among older drivers. Design and Methods: Older drivers with poor Useful Field of View (UFOV) test performance (indicating greater risk for subsequent at-fault crashes and mobility declines) were randomly assigned to either a speed of…

  11. Evaluation of unmanned airborne vehicles and mobile robotic telesurgery in an extreme environment.

    PubMed

    Harnett, Brett M; Doarn, Charles R; Rosen, Jacob; Hannaford, Blake; Broderick, Timothy J

    2008-08-01

    As unmanned extraction vehicles become a reality in the military theater, opportunities to augment medical operations with telesurgical robotics become more plausible. This project demonstrated an experimental surgical robot using an unmanned airborne vehicle (UAV) as a network topology. Because battlefield operations are dynamic and geographically challenging, the installation of wireless networks is not a feasible option at this point. However, to utilize telesurgical robotics to assist in the urgent medical care of wounded soldiers, a robust, high bandwidth, low latency network is requisite. For the first time, a mobile surgical robotic system was deployed to an austere environment and surgeons were able to remotely operate the systems wirelessly using a UAV. Two University of Cincinnati surgeons were able to remotely drive the University of Washington's RAVEN robot's end effectors. The network topology demonstrated a highly portable, quickly deployable, bandwidth-sufficient and low latency wireless network required for battlefield use.

  12. Event-Based Control Strategy for Mobile Robots in Wireless Environments.

    PubMed

    Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto

    2015-01-01

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to interchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked over classical navigation algorithms, like wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution has a higher efficiency in communication resource usage than the classical discrete-time strategy with the same accuracy. PMID:26633412
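
    The essence of the event-based strategy is to transmit over the RF link only when the measured state has changed enough since the last transmission; a minimal send-on-delta trigger (threshold value hypothetical):

    ```python
    class EventTrigger:
        """Send a sample to the centralized controller only when it has changed enough."""

        def __init__(self, delta=0.05):
            self.delta = delta
            self.last_sent = None

        def should_send(self, measurement):
            if self.last_sent is None or abs(measurement - self.last_sent) >= self.delta:
                self.last_sent = measurement
                return True   # event: transmit and let the controller update its output
            return False      # no event: the controller holds its previous output
    ```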

  15. Learning robust plans for mobile robots from a single trial

    SciTech Connect

    Engelson, S.P.

    1996-12-31

    We address the problem of learning robust plans for robot navigation by observing particular robot behaviors. In this paper we present a method which can learn a robust reactive plan from a single example of a desired behavior, translating a sequence of events arising in the system into a plan which represents the relationships among such events. This method allows us to rely on the underlying stability properties of low-level behavior processes in order to produce robust plans. Since the resultant plan reproduces the original behavior of the robot at a high level, it generalizes over small environmental changes and is robust to sensor and effector noise.

  16. A remote lab for experiments with a team of mobile robots.

    PubMed

    Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio

    2014-01-01

    In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board of the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab. PMID:25192316

  19. Ambient Intelligence Application Based on Environmental Measurements Performed with an Assistant Mobile Robot

    PubMed Central

    Martinez, Dani; Teixidó, Mercè; Font, Davinia; Moreno, Javier; Tresanchez, Marcel; Marco, Santiago; Palacín, Jordi

    2014-01-01

    This paper proposes the use of an autonomous assistant mobile robot in order to monitor the environmental conditions of a large indoor area and develop an ambient intelligence application. The mobile robot uses single high performance embedded sensors in order to collect and geo-reference environmental information such as ambient temperature, air velocity and orientation and gas concentration. The data collected with the assistant mobile robot is analyzed in order to detect unusual measurements or discrepancies and develop focused corrective ambient actions. This paper shows an example of the measurements performed in a research facility which have enabled the detection and location of an uncomfortable temperature profile inside an office of the research facility. The ambient intelligent application has been developed by performing some localized ambient measurements that have been analyzed in order to propose some ambient actuations to correct the uncomfortable temperature profile. PMID:24681671

  2. Autonomous navigation of a mobile robot using custom-designed qualitative reasoning VLSI chips and boards

    SciTech Connect

    Pin, F.G.; Pattay, R.S. ); Watanabe, H.; Symon, J. . Dept. of Computer Science)

    1991-01-01

    Two types of computer boards including custom-designed VLSI chips have been developed to add a qualitative reasoning capability to the real-time control of autonomous mobile robots. The design and operation of these boards are first described and an example of their use for the autonomous navigation of a mobile robot is presented. The development of qualitative reasoning schemes emulating human-like navigation in a priori unknown environments is discussed. The efficiency of such schemes, which can consist of as little as a dozen qualitative rules, is illustrated in experiments involving an autonomous mobile robot navigating on the basis of very sparse, inaccurate sensor data. 17 refs., 6 figs.

  3. Using custom-designed VLSI fuzzy inferencing chips for the autonomous navigation of a mobile robot

    SciTech Connect

    Pin, F.G.; Pattay, R.S. ); Watanabe, Hiroyuki; Symon, J. . Dept. of Computer Science)

    1991-01-01

    Two types of computer boards including custom-designed VLSI fuzzy inferencing chips have been developed to add a qualitative reasoning capability to the real-time control of autonomous mobile robots. The design and operation of these boards are first described and an example of their use for the autonomous navigation of a mobile robot is presented. The development of qualitative reasoning schemes emulating human-like navigation in a priori unknown environments is discussed. An approach using superposition of elemental sensor-based behaviors is shown to allow easy development and testing of the inferencing rule base, while providing for progressive addition of behaviors to resolve situations of increasing complexity. The efficiency of such schemes, which can consist of as little as a dozen qualitative rules, is illustrated in experiments involving an autonomous mobile robot navigating on the basis of very sparse and inaccurate sensor data. 17 refs., 6 figs.

  4. Design and development of proprioceptive sensors to be used for mobile robot localization

    NASA Astrophysics Data System (ADS)

    Ferrand, Andre

    Proprioceptive sensors designed for use in the localization functions of mobile robots are described. The robots for which the sensors are designed may be required to climb small obstacles or small inclined planes. The sensor system is designed to provide five localization parameters, namely three attitude angles, yaw, roll and pitch, and two parameters of position on a horizontal plane. For this purpose, an odometer with inertial sensors is used along with a gyroscope and accelerometers. The odometer works independently of the robot locomotion. The gyroscope controls the odometer's measurements during the climbing of an obstacle and, when necessary, rectifies them.

  5. Development and Control of Multi-Degree-of-Freedom Mobile Robot for Acquisition of Road Environmental Modes

    NASA Astrophysics Data System (ADS)

    Murata, Naoya; Katsura, Seiichiro

    Acquisition of information about the environment around a mobile robot is important for purposes such as controlling the robot from a remote location, and in situations such as when the robot is running autonomously. In many studies, audiovisual information is used. However, acquisition of information about force sensation, which is also part of the environmental information, has not been well researched. The mobile-hapto, a remote control system with force information, has been proposed, but the robot used for that system can acquire only the horizontal component of forces. For this reason, in this research, a three-wheeled mobile robot consisting of seven actuators was developed and its control system was constructed. It can obtain information on horizontal and vertical forces without using force sensors. By using this robot, detailed information on the forces in the environment can be acquired, and the operability of the robot and its capability to adjust to the environment are expected to improve.

  6. Multisensor-based human detection and tracking for mobile service robots.

    PubMed

    Bellotto, Nicola; Hu, Huosheng

    2009-02-01

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to also be very discriminative in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' position using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments. PMID:19068442
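
    Leg detection from a laser scan is commonly done by splitting the scan into segments at range discontinuities and keeping segments whose width is leg-like; a rough sketch of that idea (thresholds hypothetical, not the paper's trained patterns):

    ```python
    def detect_leg_candidates(ranges, angle_increment, jump=0.15,
                              min_width=0.05, max_width=0.25):
        """Return (range, bearing) pairs for scan segments whose chord width looks leg-like."""
        segments, current = [], [0]
        for i in range(1, len(ranges)):
            if abs(ranges[i] - ranges[i - 1]) > jump:  # range discontinuity: start a new segment
                segments.append(current)
                current = []
            current.append(i)
        segments.append(current)

        legs = []
        for seg in segments:
            if len(seg) < 2:
                continue
            r = sum(ranges[i] for i in seg) / len(seg)
            width = r * angle_increment * (len(seg) - 1)              # approximate chord length
            if min_width <= width <= max_width:
                bearing = angle_increment * (seg[0] + seg[-1]) / 2.0  # relative to the first beam
                legs.append((r, bearing))
        return legs
    ```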

  8. Soil sampling sensor system on a mobile robot

    NASA Astrophysics Data System (ADS)

    Cao, Peter M.; Hall, Ernest L.; Zhang, Evan

    2003-10-01

    Determining if a segment of property is suitable for use as an aircraft runway is a vitally important task that is currently performed by humans. However, this task can also put people in harm's way from land mines, sniper and artillery attacks. The objective of this research is to build a soil survey manipulator that can be carried by a lightweight, portable, autonomous vehicle, with sensors and controls to navigate in an assault zone. The manipulators permit both surface and subsurface measurements. An original soil sampling tube was constructed with a linear actuator as the manipulator and a standard penetrometer as the sampling sensor. The controls provide local control of the robot as well as the soil sampling mechanism. GPS has been selected to perform robot global navigation. The robot was constructed and tested on the test field. The results verified that the concept of using a soil sampling robot to survey a runway is feasible.

  9. Concept formation and generalization based on experimentation by an autonomous mobile robot

    SciTech Connect

    Spelt, P.F.; deSaussure, G.; Lyness, E.; Oliver, G.; Silliman, M.

    1989-01-01

    The Center for Engineering Systems Advanced Research (CESAR) conducts basic research in the area of intelligent machines. In this paper, we describe our approach to a class of machine learning problems which involves autonomous concept formation using feedback from trial-and-error learning. Our formulation was experimentally validated on an autonomous mobile robot, which learned the task of control panel monitoring and manipulation for effective process control. Conclusions are drawn concerning the applicability of the system to a more general class of learning problems, and implications for the use of autonomous mobile robots in hostile and unknown environments are discussed. 9 refs., 5 figs.

  10. Unknown-terrains navigation of a mobile robot using an array of sonars

    SciTech Connect

    Rao, N.S.V.

    1994-09-01

    A mobile robot equipped with an array of sonars is required to navigate to a destination through a planar terrain populated by polygonal obstacles whose locations and shapes are unknown. A navigation method is proposed based on a trapezoidal decomposition of the terrain for an abstract formulation, where elementary navigational steps consist of following the obstacle edges and turning around the corners. The convergence of an abstract version of the algorithm is first analytically established. Then experimental results on implementing the algorithm on an experimental mobile robot are reported.

  11. Universal adaptive λ tracker for nonholonomic wheeled mobile robots moving on a plane in three-dimensional space

    NASA Astrophysics Data System (ADS)

    Mazur, Alicja

    1999-01-01

    This paper presents a universal adaptive λ-tracking control algorithm for wheeled mobile robots moving on a plane in 3D space. The introduced control algorithm may be considered a dynamic version of the PD controller requiring only knowledge of the robot kinematics. The controller ensures convergence of the position tracking error of the mobile robot to the ball of radius λ > 0, where λ is arbitrary but prespecified. Theoretical considerations are illustrated with simulations.
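
    An adaptive λ-tracker increases its feedback gain only while the tracking error lies outside the ball of radius λ, so the error is driven into that ball without a model of the dynamics; a schematic scalar version of such a law (not the paper's multivariable controller):

    ```python
    def lambda_tracker_step(error, gain, lam, gamma, dt):
        """One integration step of a simple adaptive lambda-tracking law.

        The gain grows at a rate proportional to how far the error is outside the
        lambda-ball and is frozen once the error is inside it; the control itself
        is a high-gain proportional feedback on the error.
        """
        excess = max(0.0, abs(error) - lam)
        gain += gamma * excess * abs(error) * dt  # gain adaptation, active only outside the ball
        u = -gain * error                         # proportional feedback with the adapted gain
        return u, gain
    ```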

  12. Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.

    PubMed

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-01-01

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore, this provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
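
    Covariance intersection fuses two track estimates whose cross-correlation is unknown by a convex combination of their information matrices; a compact sketch of the two-estimate case used when robots exchange tracks (in practice the weight omega is often chosen to minimize the trace of the fused covariance):

    ```python
    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, omega=0.5):
        """Fuse estimates (x1, P1) and (x2, P2) with CI weight omega in [0, 1]."""
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)   # fused covariance
        x = P @ (omega * I1 @ x1 + (1.0 - omega) * I2 @ x2)  # fused state
        return x, P
    ```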

  13. Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots

    PubMed Central

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-01-01

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data are broadcast to multiple robots through intercommunication and are combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and it therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171

  14. Robotic automation for space: planetary surface exploration, terrain-adaptive mobility, and multirobot cooperative tasks

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Huntsberger, Terrance L.; Pirjanian, Paolo; Baumgartner, Eric T.; Aghazarian, Hrand; Trebi-Ollennu, Ashitey; Leger, Patrick C.; Cheng, Yang; Backes, Paul G.; Tunstel, Edward; Dubowsky, Steven; Iagnemma, Karl D.; McKee, Gerard T.

    2001-10-01

    During the last decade, there has been significant progress toward a supervised autonomous robotic capability for remotely controlled scientific exploration of planetary surfaces. While planetary exploration potentially encompasses many elements ranging from orbital remote sensing to subsurface drilling, the surface robotics element is particularly important to advancing in situ science objectives. Surface activities include a direct characterization of geology, mineralogy, atmosphere and other descriptors of current and historical planetary processes and, ultimately, the return of pristine samples to Earth for detailed analysis. Toward these ends, we have conducted a broad program of research on robotic systems for scientific exploration of the Mars surface, with minimal remote intervention. The goal is to enable high-productivity semi-autonomous science operations where available mission time is concentrated on robotic operations, rather than up- and down-link delays. Results of our work include prototypes for landed manipulators, long-ranging science rovers, sampling/sample return mobility systems, and more recently, terrain-adaptive reconfigurable/modular robots and closely cooperating multiple rover systems. The last of these are intended to facilitate deployment of planetary robotic outposts for an eventual human-robot sustained scientific presence. We overview our progress in these related areas of planetary robotics R&D, spanning 1995 to the present.

  15. Symmetric caging formation for convex polygonal object transportation by multiple mobile robots based on fuzzy sliding mode control.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2016-01-01

    In this paper, the problem of object caging and transporting is considered for multiple mobile robots. With the aim of minimizing the number of robots and decreasing the rotation of the object, proper points are calculated and assigned to the multiple mobile robots to allow them to form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object so that the object cannot escape. In order to avoid collision among robots, the robot radius parameter is utilized to design the caging formation, and the A* algorithm is used so that mobile robots can move to the proper points. In order to avoid obstacles, the robots and the object are regarded as a rigid body to apply the artificial potential field method. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, the simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position, avoiding obstacles.
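
    The caging condition described above can be illustrated with a short sketch that checks whether every gap between adjacent robots, after subtracting the robot bodies, is narrower than the minimal width of the polygonal object; the function and parameter names are illustrative assumptions, not the authors' code.

        import numpy as np

        def caging_holds(robot_positions, min_object_width, robot_radius):
            """Check that no gap between adjacent robots lets the object escape.

            robot_positions: (N, 2) array ordered around the object,
            so that consecutive rows are adjacent robots in the formation.
            """
            pts = np.asarray(robot_positions, dtype=float)
            # Distance from each robot to its neighbour, wrapping around the ring.
            gaps = np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1)
            # Free opening between robot bodies: centre distance minus both radii.
            free_gaps = gaps - 2.0 * robot_radius
            return bool(np.all(free_gaps < min_object_width))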

  16. Quantifying Traversability of Terrain for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Seraji, Homayoun; Werger, Barry

    2005-01-01

    A document presents an updated discussion on a method of autonomous navigation for a robotic vehicle navigating across rough terrain. The method involves, among other things, the use of a measure of traversability, denoted the fuzzy traversability index, which embodies the information about the slope and roughness of terrain obtained from analysis of images acquired by cameras mounted on the robot. The improvements presented in the report focus on the use of the fuzzy traversability index to generate a traversability map and a grid map for planning the safest path for the robot. Once grid traversability values have been computed, they are utilized for rejecting unsafe path segments and for computing a traversal-cost function for ranking candidate paths, selected by a search algorithm, from a specified initial position to a specified final position. The output of the algorithm is a set of waypoints designating a path having a minimal traversal cost.
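
    As a rough illustration of how grid traversability values can reject unsafe segments and rank candidate paths, the sketch below sums an inverse-traversability cost along a path and discards paths through cells below a safety threshold; the threshold and the cost form are assumptions, not the report's exact traversal-cost function.

        import numpy as np

        def path_cost(grid, path, safety_threshold=0.3):
            """Traversal cost of a candidate path over a traversability grid.

            grid: 2D array of fuzzy traversability values in [0, 1] (1 = easiest terrain).
            path: sequence of (row, col) cells from start to goal.
            """
            cost = 0.0
            for r, c in path:
                t = grid[r, c]
                if t < safety_threshold:      # reject unsafe path segments
                    return float("inf")
                cost += 1.0 / t               # harder terrain contributes more cost
            return cost

        # Candidate paths from a search algorithm are then ranked by this cost:
        # best = min(candidate_paths, key=lambda p: path_cost(grid, p))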

  17. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.

    PubMed

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-10-21

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential-wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann-steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability are observed during a long walk test in both indoor and outdoor environments.
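
    The event-based fusion idea can be sketched as follows: the IMU/odometry prediction runs every cycle, and a correction from the global sensor is requested only when the estimation error covariance exceeds a predefined limit. The snippet below is an illustrative sketch with assumed matrices and a trace-based trigger, not the paper's implementation.

        import numpy as np

        class EventBasedKF:
            def __init__(self, x0, P0, F, Q, H, R, cov_limit):
                self.x, self.P = x0, P0
                self.F, self.Q, self.H, self.R = F, Q, H, R
                self.cov_limit = cov_limit   # threshold on the predicted uncertainty

            def predict(self):
                # Run every cycle using the IMU/odometry motion model.
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q

            def needs_global_fix(self):
                # Event: request the global sensor only when uncertainty is too large.
                return np.trace(self.P) > self.cov_limit

            def correct(self, z):
                # Standard Kalman correction with the global measurement z.
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ (z - self.H @ self.x)
                self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P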

  18. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.

    PubMed

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential-wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann-steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability are observed during a long walk test in both indoor and outdoor environments. PMID:24152933

  19. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots

    PubMed Central

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential-wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann-steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability are observed during a long walk test in both indoor and outdoor environments. PMID:24152933

  20. Electroencephalography (EEG) Based Control in Assistive Mobile Robots: A Review

    NASA Astrophysics Data System (ADS)

    Krishnan, N. Murali; Mariappan, Muralindran; Muthukaruppan, Karthigayan; Hijazi, Mohd Hanafi Ahmad; Kitt, Wong Wei

    2016-03-01

    Recently, EEG-based control in assistive robot usage has been gradually increasing in the biomedical field, with the aim of giving a quality and stress-free life to disabled and elderly people. This study reviews the deployment of EEG-based control in assistive robots, especially for those who are in need and neurologically disabled. The main objective of this paper is to describe the methods used for (i) EEG data acquisition and signal preprocessing, (ii) feature extraction and (iii) signal classification. Besides that, this study presents the specific research challenges in designing these control systems and future research directions.

  1. Speed-dependent reference joint trajectory generation for robotic gait support.

    PubMed

    Koopman, B; van Asseldonk, E H F; van der Kooij, H

    2014-04-11

    For the control of actuated orthoses, or gait rehabilitation robotics, kinematic reference trajectories are often required. These trajectories, consisting of joint angles, angular velocities and accelerations, are highly dependent on walking speed. We present and evaluate a novel method to reconstruct body-height- and speed-dependent joint trajectories. First, we collected gait kinematics in fifteen healthy middle-aged subjects (47-68 years), at a wide range of walking speeds (0.5-5 kph). For each joint trajectory multiple key-events were selected (among them its extremes). Second, we derived regression models that predict the timing, angle, angular velocity and acceleration for each key-event, based on walking speed and the subject's body height. Finally, quintic splines were fitted between the predicted key-events to reconstruct a full gait cycle. Regression models were obtained for hip ab-/adduction, hip flexion/extension, knee flexion/extension and ankle plantar-/dorsiflexion. Results showed that the majority of the key-events were dependent on walking speed, both in terms of timing and amplitude, whereas body height had less effect. The reconstructed trajectories matched the measured trajectories very well, in terms of angle, angular velocity and acceleration. For the angles the RMSE between the reconstructed and measured trajectories was 2.6°. The mean correlation coefficient between the reconstructed and measured angular trajectories was 0.91. The method and the data presented in this paper can be used to generate speed-dependent gait patterns. These patterns can be used for the control of several robotic gait applications. Alternatively, they can assist the assessment of pathological gait, where they can serve as a reference for "normal" gait.
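
    Fitting a quintic segment between two key-events so that angle, angular velocity and angular acceleration all match at both ends reduces to a 6x6 linear system for the polynomial coefficients. The sketch below is a minimal illustration of that step under these assumptions, not the authors' code.

        import numpy as np

        def quintic_segment(t0, t1, q0, q1):
            """Quintic polynomial matching angle, velocity and acceleration at two key-events.

            q0, q1: (angle, angular velocity, angular acceleration) at t0 and t1.
            Returns coefficients c of q(t) = sum(c[k] * (t - t0)**k).
            """
            T = t1 - t0
            A = np.array([
                [1, 0, 0,    0,       0,        0],        # angle at t0
                [0, 1, 0,    0,       0,        0],        # velocity at t0
                [0, 0, 2,    0,       0,        0],        # acceleration at t0
                [1, T, T**2, T**3,    T**4,     T**5],     # angle at t1
                [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],   # velocity at t1
                [0, 0, 2,    6*T,     12*T**2,  20*T**3],  # acceleration at t1
            ], dtype=float)
            b = np.array([q0[0], q0[1], q0[2], q1[0], q1[1], q1[2]], dtype=float)
            return np.linalg.solve(A, b)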

  2. High Speed Mobility Through On-Demand Aviation

    NASA Technical Reports Server (NTRS)

    Moore, Mark D.; Goodrich, Ken; Viken, Jeff; Smith, Jeremy; Fredericks, Bill; Trani, Toni; Barraclough, Jonathan; German, Brian; Patterson, Michael

    2013-01-01

    automobiles. Community Noise: Hub and smaller GA airports are facing increasing noise restrictions, and while commercial airliners have dramatically decreased their community noise footprint over the past 30 years, GA aircraft noise has essentially remained the same and, moreover, is located in closer proximity to neighborhoods and businesses. Operating Costs: GA operating costs have risen dramatically due to average fuel costs of over $6 per gallon, which has constrained the market over the past decade and resulted in more than 50% lower sales and 35% fewer yearly operations. Infusion of autonomy and electric propulsion technologies can accomplish not only a transformation of the GA market, but also provide a technology enablement bridge for both larger aircraft and the emerging civil Unmanned Aerial Systems (UAS) markets. The NASA Advanced General Aviation Transport Experiments (AGATE) project successfully used a similar approach to enable the introduction of primary composite structures and flat panel displays in the 1990s, establishing both the technology and certification standardization to permit quick adoption through partnerships with industry, academia, and the Federal Aviation Administration (FAA). Regional and airliner markets are experiencing constant pressure to achieve decreasing levels of community emissions and noise, while lowering operating costs and improving safety. But to what degree can these new technology frontiers impact aircraft safety, the environment, operations, cost, and performance? Are the benefits transformational enough to fundamentally alter aircraft competitiveness and productivity to permit much greater aviation use for high speed and On-Demand Mobility (ODM)? These questions were asked in a Zip aviation system study named after the Zip Car, an emerging car-sharing business model. Zip Aviation investigates the potential to enable new emergent markets for aviation that offer "more flexibility than the existing transportation solutions

  3. Insect-inspired high-speed motion vision system for robot control.

    PubMed

    Wu, Haiyan; Zou, Ke; Zhang, Tianguang; Borst, Alexander; Kühnlenz, Kolja

    2012-10-01

    The mechanism for motion detection in a fly's vision system, known as the Reichardt correlator, suffers from a main shortcoming as a velocity estimator: low accuracy. To enable accurate velocity estimation, responses of the Reichardt correlator to image sequences are analyzed in this paper. An elaborated model with additional preprocessing modules is proposed. The relative error of velocity estimation is significantly reduced by establishing a real-time response-velocity lookup table based on the power spectrum analysis of the input signal. By exploiting the improved velocity estimation accuracy and the simple structure of the Reichardt correlator, a high-speed vision system of 1 kHz is designed and applied for robot yaw-angle control in real-time experiments. The experimental results demonstrate the potential and feasibility of applying insect-inspired motion detection to robot control.
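
    For orientation, an elementary Reichardt correlator can be sketched as two photoreceptor signals, each delayed by a low-pass filter and multiplied with the undelayed neighbour, with the two products then subtracted; the paper's contribution lies in the preprocessing modules and the response-velocity lookup table built on top of such a detector. The snippet below is a generic textbook-style sketch with an assumed first-order low-pass delay, not the elaborated model of the paper.

        import numpy as np

        def reichardt_response(s_left, s_right, dt, tau=0.02):
            """Response of an elementary Reichardt motion detector to two signals."""
            alpha = dt / (tau + dt)            # first-order low-pass acts as the delay
            lp_left = np.zeros_like(s_left, dtype=float)
            lp_right = np.zeros_like(s_right, dtype=float)
            for k in range(1, len(s_left)):
                lp_left[k] = lp_left[k - 1] + alpha * (s_left[k] - lp_left[k - 1])
                lp_right[k] = lp_right[k - 1] + alpha * (s_right[k] - lp_right[k - 1])
            # Opponent subtraction of the two mirror-symmetric multiplications.
            return lp_left * s_right - lp_right * s_left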

  4. Optimal motion planning for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the minimum distance between the robot and the object is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. A perturbation control type of approach is used to update the optimal plan. Simulation results verify the value of the proposed strategy.

  5. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm. PMID:25265622

  6. Autonomous discovery and learning by a mobile robot in unstructured environments

    SciTech Connect

    Pin, F.G.; de Saussure, G.; Spelt, P.F.; Barnett, D.L.; Killough, S.M.; Weisbin, C.R.

    1988-01-01

    This paper presents recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of autonomous discovery and learning of emergency and maintenance tasks in unstructured environments by a mobile robot. The methodologies for learning basic operating principles of control devices, and for using the acquired knowledge to solve new problems with conditions not encountered before are presented. The algorithms necessary for the robot to discover problem-solving sequences of actions, through experimentation with the environment, in the two cases of immediate feedback and delayed feedback are described. The inferencing schemes allowing the robot to classify the information acquired from a reduced set of examples and to generalize its knowledge to a much wider problem-solving domain are also provided. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot is then presented. 8 refs., 2 figs.

  7. Robust formation tracking control of mobile robots via one-to-one time-varying communication

    NASA Astrophysics Data System (ADS)

    Dasdemir, Janset; Loría, Antonio

    2014-09-01

    We solve the formation tracking control problem for mobile robots via linear control, under the assumption that each agent communicates only with one 'leader' robot and with one follower, hence forming a spanning-tree topology. We assume that the communication may be interrupted on intervals of time. As in the classical tracking control problem for non-holonomic systems, the swarm is driven by a fictitious robot which moves about freely and which is a leader to one robot only. Our control approach is decentralised and the control laws are linear with time-varying gains; in particular, this accounts for the case when position measurements may be lost over intervals of time. For both velocity-controlled and force-controlled systems, we establish uniform global exponential stability, hence consensus formation tracking, for the error system under a condition of persistency of excitation on the reference angular velocity of the virtual leader and on the control gains.

  8. Learning from adaptive neural network output feedback control of a unicycle-type mobile robot.

    PubMed

    Zeng, Wei; Wang, Qinghui; Liu, Fenglin; Wang, Ying

    2016-03-01

    This paper studies learning from adaptive neural network (NN) output feedback control of nonholonomic unicycle-type mobile robots. The major difficulties are caused by the unknown robot system dynamics and the unmeasurable states. To overcome these difficulties, a new adaptive control scheme is proposed, which includes a new adaptive NN output feedback controller and two high-gain observers. It is shown that the stability of the closed-loop robot system and the convergence of tracking errors are guaranteed. The unknown robot system dynamics can be approximated by radial basis function NNs. When repeating the same or similar control tasks, the learned knowledge can be recalled and reused to achieve guaranteed stability and better control performance, thereby avoiding the tremendous repeated training process of NNs. PMID:26830003

  9. Learning from adaptive neural network output feedback control of a unicycle-type mobile robot.

    PubMed

    Zeng, Wei; Wang, Qinghui; Liu, Fenglin; Wang, Ying

    2016-03-01

    This paper studies learning from adaptive neural network (NN) output feedback control of nonholonomic unicycle-type mobile robots. The major difficulties are caused by the unknown robot system dynamics and the unmeasurable states. To overcome these difficulties, a new adaptive control scheme is proposed, which includes a new adaptive NN output feedback controller and two high-gain observers. It is shown that the stability of the closed-loop robot system and the convergence of tracking errors are guaranteed. The unknown robot system dynamics can be approximated by radial basis function NNs. When repeating the same or similar control tasks, the learned knowledge can be recalled and reused to achieve guaranteed stability and better control performance, thereby avoiding the tremendous repeated training process of NNs.

  10. A Mobile Robot for Remote Response to Incidents Involving Hazardous Materials

    NASA Technical Reports Server (NTRS)

    Welch, Richard V.

    1994-01-01

    This paper will describe a teleoperated mobile robot system being developed at JPL for use by the JPL Fire Department/HAZMAT Team. The project, which began in October 1990, is focused on prototyping a robotic vehicle which can be quickly deployed and easily operated by HAZMAT Team personnel allowing remote entry and exploration of a hazardous material incident site. The close involvement of JPL Fire Department personnel has been critical in establishing system requirements as well as evaluating the system. The current robot, called HAZBOT III, has been especially designed for operation in environments that may contain combustible gases. Testing of the system with the Fire Department has shown that teleoperated robots can successfully gain access to incident sites allowing hazardous material spills to be remotely located and identified. Work is continuing to enable more complex missions through enhancement of the operator interface and by allowing tetherless operation.

  11. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  12. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  13. Configuration Control of a Mobile Dextrous Robot: Real-Time Implementation and Experimentation

    NASA Technical Reports Server (NTRS)

    Lim, David; Seraji, Homayoun

    1996-01-01

    This paper describes the design and implementation of a real-time control system with multiple modes of operation for a mobile dexterous manipulator. The manipulator under study is a kinematically redundant seven degree-of-freedom arm from Robotics Research Corporation, mounted on a one degree-of-freedom motorized platform.

  14. Global Output-Feedback Control for Simultaneous Tracking and Stabilization of Wheeled Mobile Robots

    NASA Astrophysics Data System (ADS)

    Chang, J.; Zhang, L. J.; Xue, D.

    A time-varying global output-feedback controller is presented that solves both tracking and stabilization for wheeled mobile robots simultaneously at the torque level. The controller synthesis is based on a coordinate transformation, Lyapunov direct method and backstepping technique. The performance of the proposed controller is demonstrated by simulation.

  15. Bioinspired Intelligent Algorithm and Its Applications for Mobile Robot Control: A Survey

    PubMed Central

    Ni, Jianjun; Wu, Liuying; Fan, Xinnan; Yang, Simon X.

    2016-01-01

    Bioinspired intelligent algorithms (BIAs) are a kind of intelligent computing method with a more lifelike biological working mechanism than other types. BIAs have made significant progress both in the understanding of neuroscience and biological systems and in applications to various fields. Mobile robot control is one of the main application fields of BIAs and has attracted more and more attention, because mobile robots can be used widely and general artificial intelligence algorithms meet a development bottleneck in this field, owing to complex computing and the dependence on high-precision sensors. This paper presents a survey of recent research in BIAs, which focuses on the realization of various BIAs based on different working mechanisms and on their applications for mobile robot control, to help in understanding BIAs comprehensively and clearly. The survey has four primary parts: a classification of BIAs from the biomimetic mechanism, a summary of several typical BIAs from different levels, an overview of current applications of BIAs in mobile robot control, and a description of some possible future directions for research. PMID:26819582

  16. Bioinspired Intelligent Algorithm and Its Applications for Mobile Robot Control: A Survey.

    PubMed

    Ni, Jianjun; Wu, Liuying; Fan, Xinnan; Yang, Simon X

    2016-01-01

    Bioinspired intelligent algorithms (BIAs) are a kind of intelligent computing method with a more lifelike biological working mechanism than other types. BIAs have made significant progress both in the understanding of neuroscience and biological systems and in applications to various fields. Mobile robot control is one of the main application fields of BIAs and has attracted more and more attention, because mobile robots can be used widely and general artificial intelligence algorithms meet a development bottleneck in this field, owing to complex computing and the dependence on high-precision sensors. This paper presents a survey of recent research in BIAs, which focuses on the realization of various BIAs based on different working mechanisms and on their applications for mobile robot control, to help in understanding BIAs comprehensively and clearly. The survey has four primary parts: a classification of BIAs from the biomimetic mechanism, a summary of several typical BIAs from different levels, an overview of current applications of BIAs in mobile robot control, and a description of some possible future directions for research. PMID:26819582

  17. Using LEGO NXT Mobile Robots with LabVIEW for Undergraduate Courses on Mechatronics

    ERIC Educational Resources Information Center

    Gomez-de-Gabriel, J. M.; Mandow, A.; Fernandez-Lozano, J.; Garcia-Cerezo, A.

    2011-01-01

    The paper proposes lab work and student competitions based on the LEGO NXT Mindstorms kits and standard LabVIEW. The goal of this combination is to stimulate design and experimentation with real hardware and representative software in courses where mobile robotics is adopted as a motivating platform to introduce mechatronics competencies. Basic…

  18. The magic glove: a gesture-based remote controller for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark

    2012-01-01

    This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful in moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system, consisting of a microcontroller and sensors, is worn by the operator as a glove and is capable of recognizing hand signals. These are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was successfully demonstrated first in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
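
    A minimal sketch of the kind of tilt-to-command mapping such a glove could use is shown below; the thresholds, axis conventions and command names are illustrative assumptions, not the Magic Glove firmware.

        def command_from_tilt(ax, ay, threshold=0.5):
            """Map triple-axis accelerometer tilt (in g) to a drive command.

            With the hand level, gravity sits on the z axis; tilting forward or
            backward shifts it onto x, tilting left or right shifts it onto y.
            """
            if ax > threshold:
                return "FORWARD"
            if ax < -threshold:
                return "REVERSE"
            if ay > threshold:
                return "TURN_LEFT"
            if ay < -threshold:
                return "TURN_RIGHT"
            return "STOP"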

  19. Estimating the absolute position of a mobile robot using position probability grids

    SciTech Connect

    Burgard, W.; Fox, D.; Hennig, D.; Schmidt, T.

    1996-12-31

    In order to re-use existing models of the environment, mobile robots must be able to estimate their position and orientation in such models. Most of the existing methods for position estimation are based on special purpose sensors or aim at tracking the robot's position relative to the known starting point. This paper describes the position probability grid approach to estimating the robot's absolute position and orientation in a metric model of the environment. Our method is designed to work with standard sensors and is independent of any knowledge about the starting point. It is a Bayesian approach based on certainty grids. In each cell of such a grid we store the probability that this cell refers to the current position of the robot. These probabilities are obtained by integrating the likelihoods of sensor readings over time. Results described in this paper show that our technique is able to reliably estimate the position of a robot in complex environments. Our approach has proven to be robust with respect to inaccurate environmental models, noisy sensors, and ambiguous situations.
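
    The core of the position probability grid approach is a Bayesian update in which each cell's probability is multiplied by the likelihood of the current sensor reading and the grid is then renormalised. A minimal sketch follows; it is illustrative only, and the motion update between readings is indicated only in a comment.

        import numpy as np

        def grid_update(belief, likelihood):
            """One Bayesian update of a position probability grid.

            belief:     array over (x, y, theta) cells with the current probabilities.
            likelihood: p(sensor reading | robot in cell), same shape, from a sensor model.
            """
            posterior = belief * likelihood
            total = posterior.sum()
            if total > 0:
                posterior /= total            # renormalise so the grid sums to one
            return posterior

        # Integrating readings over time: belief = grid_update(belief, likelihood_t)
        # at each step, with a motion-model convolution between steps to account
        # for the robot's movement.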

  20. Collaboration among a Group of Self-Autonomous Mobile Robots with Diversified Personalities

    NASA Astrophysics Data System (ADS)

    Tauchi, Makiko; Sagawa, Yuji; Tanaka, Toshimitsu; Sugie, Noboru

    Simulation studies were carried out on a group of self-autonomous mobile robots collaborating in cleaning-up (collection) tasks. The robots are endowed with two kinds of human-like personality traits: positivity and tenderness. Depending on the rank of positivity, a decision is made on which of two nearby robots should avoid a collision and which of the robots heading for the same small piece of baggage should carry it. For large baggage, which can be carried only by two collaborating robots, tenderness plays an essential role. In the first series of simulations, the initial configuration of 4 robots, 4 pieces of small baggage, and 2 pieces of large baggage was fixed. The cleaning-up tasks were carried out for all combinations of personalities, 625 cases in total. In the second series, 8 robots performed the task. Five cases were chosen to carry out 100 simulations for each case, by changing the configuration of the baggage. From the results of the simulation, it was found that the heterogeneous group performs the task more effectively than the homogeneous group; diversity in personality appears to be good for survival. In addition to the performance index of task execution time, a satisfaction index is introduced to evaluate the degree of satisfaction of the group, too.

  1. Thermal tracking in mobile robots for leak inspection activities.

    PubMed

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-10-09

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants such as solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle-filter-based tracking algorithm to keep the target in the field of view of the camera and to cope with the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.

  2. Thermal Tracking in Mobile Robots for Leak Inspection Activities

    PubMed Central

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-01-01

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants such as solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle-filter-based tracking algorithm to keep the target in the field of view of the camera and to cope with the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system. PMID:24113684

  3. On autonomous terrain model acquistion by a mobile robot

    NASA Technical Reports Server (NTRS)

    Rao, N. S. V.; Iyengar, S. S.; Weisbin, C. R.

    1987-01-01

    The following problem is considered: A point robot is placed in a terrain populated by an unknown number of polyhedral obstacles of varied sizes and locations in two/three dimensions. The robot is equipped with a sensor capable of detecting all the obstacle vertices and edges that are visible from the present location of the robot. The robot is required to autonomously navigate and build the complete terrain model using the sensor information. It is established that the necessary number of scanning operations needed for complete terrain model acquisition by any algorithm that is based on the scan-from-vertices strategy is given by $\sum_{i=1}^{n} N(O_i) - n$ and $\sum_{i=1}^{n} N(O_i) - 2n$ in two- and three-dimensional terrains, respectively, where $O = \{O_1, O_2, \ldots, O_n\}$ is the set of obstacles in the terrain and $N(O_i)$ is the number of vertices of obstacle $O_i$.

  4. Thermal tracking in mobile robots for leak inspection activities.

    PubMed

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-01-01

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants such as solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle-filter-based tracking algorithm to keep the target in the field of view of the camera and to cope with the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system. PMID:24113684

  5. Telerobotic control of a mobile coordinated robotic server, executive summary

    NASA Technical Reports Server (NTRS)

    Lee, Gordon

    1993-01-01

    This interim report continues with the research effort on advanced adaptive controls for space robotics systems. In particular, previous results developed by the principal investigator and his research team centered around fuzzy logic control (FLC), in which the lack of knowledge of the robotic system as well as the uncertainties of the environment are compensated for by a rule base structure which interacts with varying degrees of belief of control action using system measurements. An on-line adaptive algorithm was developed using a single parameter tuning scheme. In the effort presented, the methodology is further developed to include on-line scaling factor tuning and self-learning control as well as extended to the multi-input, multi-output (MIMO) case. Classical fuzzy logic control requires tuning input scale factors off-line through trial and error techniques. This is time-consuming and cannot adapt to new changes in the process. The new adaptive FLC includes a self-tuning scheme for choosing the scaling factors on-line. Further, the rule base in classical FLC is usually produced by soliciting knowledge from human operators as to what is good control action for given circumstances. This usually requires full knowledge and experience of the process and operating conditions, which limits applicability. A self-learning scheme is developed which adaptively forms the rule base with very limited knowledge of the process. Finally, a MIMO method is presented employing optimization techniques. This is required for application to space robotics in which several degrees-of-freedom links are commonly used. Simulation examples are presented for terminal control - typical of robotic problems in which a desired terminal point is to be reached for each link. Future activities will be to implement the MIMO adaptive FLC on an INTEL microcontroller-based circuit and to test the algorithm on a robotic system at the Mars Mission Research Center at North Carolina State University.

  6. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers and, also, novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
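
    One simple way to fold normalised quality and speed metrics into a single benchmarking score is a weighted sum, as sketched below; this is an illustrative assumption only, since the paper defines its own combination of metrics.

        def combined_score(metrics, weights):
            """Combine normalised quality and speed metrics into one score.

            metrics: dict of metric name -> value normalised to [0, 1], 1 being best
                     (speed metrics inverted beforehand, e.g. a scaled 1/latency).
            weights: dict of metric name -> relative importance, summing to 1.
            """
            return sum(weights[name] * metrics[name] for name in weights)

        # Illustrative numbers only:
        # score = combined_score({"sharpness": 0.8, "noise": 0.7, "shot_to_shot": 0.6},
        #                        {"sharpness": 0.4, "noise": 0.3, "shot_to_shot": 0.3})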

  7. Application of a zero-speed fin stabilizer for roll reduction of a marine robot near the surface

    NASA Astrophysics Data System (ADS)

    Gao, Yannan; Jin, Hongzhang; Zhou, Shengbin

    2012-06-01

    A zero-speed fin stabilizer system was developed for roll control of a marine robot. When the robot steers near the sea surface at low speed, it experiences rolling motion due to disturbances from waves. Based on the working principle of a zero-speed fin stabilizer and the marine robot's dynamic properties, a roll damping controller was designed with a master-slave structure. It was composed of a sliding mode controller and an output tracking controller that calculates the desired righting moment and drives the zero-speed fin stabilizer. The methods of input-output linearization and model reference were used to realize the tracking control. Simulations were presented to demonstrate the validity of the proposed control law.

  8. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    NASA Astrophysics Data System (ADS)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. Once the

  9. An active view planning method for mobile robots using a trinocular visual sensor

    NASA Astrophysics Data System (ADS)

    Kim, Min Y.; Cho, Hyungsuck

    2003-10-01

    The ability of mobile robots to perceive and recognize environments is essential for autonomous navigation. To improve the performance of autonomous environment perception for mobile robots, it is important to effectively plan the next pose (position and orientation) of the sensor system at the current navigation state. In this paper, we propose a next-view-planning method for autonomous map construction needed for mobile robots with visual range sensor systems. The proposed view-planning method mimics the decision-making method of human beings and uses the occlusion information derived from the geometric relationship between the sensor view and objects as an important clue for planning the next sensor view. The proposed view-planning algorithms are developed in the following steps: 1) Given a prior map and range measurements sensed at the current location of the mobile robot, it is determined which parts of the map are of interest for resolving the map uncertainty. 2) Based on the selected potential regions, some candidate poses of the sensor system for the next environment sensing are carefully generated. 3) The created candidates are evaluated by using a specially designed evaluation parameter, and the best of them is selected as the next sensor pose based on a fuzzy decision-making method. In this work, the principle of the view planning method is described in detail, and a series of experimental tests is performed to show the feasibility of the method for autonomous map building. For sensing the environment, an active trinocular vision sensor using laser structured light, composed of a laser stripe projector and two cameras, is mounted on the pan-tilt mechanism of the mobile robot.

  10. Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots

    NASA Astrophysics Data System (ADS)

    Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-09-01

    Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs, each comprising a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to environments. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, this method does not ensure the existence of joint angles that achieve the reference wheel positions calculated by the MPC. In this study, we propose a model predictive controller that considers the reachable ranges of the wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. Thus, we formulate the MPC as a quadratic programming problem with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. Through the MPC optimization, the reference wheel positions are calculated, while each joint angle is determined by inverse kinematics. By considering the reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
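
    The convex-trapezoid approximation of a wheel's reachable range can be written as four linear half-space constraints that slot directly into a quadratic program. The sketch below shows that construction under the assumption of counter-clockwise-ordered corner points; it is an illustration, not the authors' controller.

        import numpy as np

        def trapezoid_halfspaces(vertices):
            """Linear constraints A @ p <= b describing a convex trapezoid.

            vertices: (4, 2) array of corner points in counter-clockwise order.
            Each edge contributes one half-space; their intersection is the
            reachable range of the wheel position.
            """
            V = np.asarray(vertices, dtype=float)
            A, b = [], []
            for i in range(len(V)):
                p0, p1 = V[i], V[(i + 1) % len(V)]
                edge = p1 - p0
                n = np.array([edge[1], -edge[0]])   # outward normal for a CCW polygon
                A.append(n)
                b.append(n @ p0)
            return np.array(A), np.array(b)

        # Stacking these rows for every wheel gives the linear inequality
        # constraints of the quadratic program solved by the model predictive
        # controller for the reference wheel positions.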

  11. Performance analysis for stable mobile robot navigation solutions

    NASA Astrophysics Data System (ADS)

    Scrapper, Chris, Jr.; Madhavan, Raj; Balakirsky, Stephen

    2008-04-01

    Robot navigation in complex, dynamic and unstructured environments demands robust mapping and localization solutions. One of the most popular methods in recent years has been the use of scan-matching schemes where temporally correlated sensor data sets are registered for obtaining a Simultaneous Localization and Mapping (SLAM) navigation solution. The primary bottleneck of such scan-matching schemes is correspondence determination, i.e. associating a feature (structure) in one dataset to its counterpart in the other. Outliers, occlusions, and sensor noise complicate the determination of reliable correspondences. This paper describes testing scenarios being developed at NIST to analyze the performance of scan-matching algorithms. This analysis is critical for the development of practical SLAM algorithms in various application domains where sensor payload, wheel slippage, and power constraints impose severe restrictions. We will present results using a high-fidelity simulation testbed, the Unified System for Automation and Robot Simulation (USARSim).

  12. Mobile Robot for Exploring Cold Liquid/Solid Environments

    NASA Technical Reports Server (NTRS)

    Bergh, Charles; Zimmerman, Wayne

    2006-01-01

    The Planetary Autonomous Amphibious Robotic Vehicle (PAARV), now at the prototype stage of development, was originally intended for use in acquiring and analyzing samples of solid, liquid, and gaseous materials in cold environments on the shores and surfaces, and at shallow depths below the surfaces, of lakes and oceans on remote planets. The PAARV also could be adapted for use on Earth in similar exploration of cold environments in and near Arctic and Antarctic oceans and glacial and sub-glacial lakes.

  13. Non-destructive inspection in industrial equipment using robotic mobile manipulation

    NASA Astrophysics Data System (ADS)

    Maurtua, Iñaki; Susperregi, Loreto; Ansuategui, Ander; Fernández, Ane; Ibarguren, Aitor; Molina, Jorge; Tubio, Carlos; Villasante, Cristobal; Felsch, Torsten; Pérez, Carmen; Rodriguez, Jorge R.; Ghrissi, Meftah

    2016-05-01

    The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (using climbing robots). The industrial objective has been to provide a means of measuring several physical parameters at multiple points with autonomous robots that are able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extent (e.g., a 50 MW solar thermal plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes, and a 140 m tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper, two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, localization, and planning algorithms to manage navigation over large areas and (2) non-destructive inspection operations: thermography-based detection algorithms that provide automatic inspection abilities to the robots.

  14. A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II

    NASA Astrophysics Data System (ADS)

    Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki

    The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because its length must be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on the “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized to less than half the size of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower motion error estimation algorithm based on the internal force applied to each follower, by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resultant system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.

  15. Effect of Media Usage Selection on Social Mobilization Speed: Facebook vs E-Mail.

    PubMed

    Wang, Jing; Madnick, Stuart; Li, Xitong; Alstott, Jeff; Velu, Chander

    2015-01-01

    Social mobilization is a process that enlists a large number of people to achieve a goal within a limited time, especially through the use of social media. There is increasing interest in understanding the factors that affect the speed of social mobilization. Based on the Langley Knights competition data set, we analyzed the differences in mobilization speed between users of Facebook and e-mail. We include other factors that may influence mobilization speed (gender, age, timing, and homophily of information source) in our model as control variables in order to isolate the effect of such factors. We show that, in this experiment, although more people used e-mail to recruit, the mobilization speed of Facebook users was faster than that of those that used e-mail. We were also able to measure and show that the mobilization speed for Facebook users was on average seven times faster compared to e-mail before controlling for other factors. After controlling for other factors, we show that Facebook users were 1.84 times more likely to register compared to e-mail users in the next period if they have not done so at any point in time. This finding could provide useful insights for future social mobilization efforts.

  16. Optical 3D laser measurement system for navigation of autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Básaca-Preciado, Luis C.; Sergiyenko, Oleg Yu.; Rodríguez-Quinonez, Julio C.; García, Xochitl; Tyrsa, Vera V.; Rivas-Lopez, Moises; Hernandez-Balbuena, Daniel; Mercorelli, Paolo; Podrygalo, Mikhail; Gurko, Alexander; Tabakova, Irina; Starostenko, Oleg

    2014-03-01

    In our current research, we are developing a practical autonomous mobile robot navigation system capable of performing obstacle-avoidance tasks in an unknown environment. Therefore, in this paper, we propose a robot navigation system that works using a high-accuracy localization scheme by dynamic triangulation. Our two main ideas are (1) the integration of two principal systems, a 3D laser scanning technical vision system (TVS) and the mobile robot (MR) navigation system, and (2) a novel MR navigation scheme that benefits from all the advantages of precise triangulation-based localization of obstacles over known camera-oriented vision systems. For practical use, mobile robots are required to continue their tasks safely and with high accuracy under temporary occlusion conditions. Prototype II of the TVS presented in this work is significantly improved over prototype I of our previous publications in terms of laser ray alignment, reduced parasitic torque, and reduced friction of moving parts. The kinematic model of the MR used in this work is designed for optimal data acquisition from the TVS, with the main goal of obtaining, in real time, the values needed by the kinematic model of the MR during the calculation of obstacles from the TVS data.

  17. Situationally driven local navigation for mobile robots. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Slack, Marc Glenn

    1990-01-01

    For mobile robots to autonomously accommodate dynamically changing navigation tasks in a goal-directed fashion, they must employ navigation plans. Any such plan must provide for the robot's immediate and continuous need for guidance while remaining highly flexible in order to avoid costly computation each time the robot's perception of the world changes. Due to the world's uncertainties, creation and maintenance of navigation plans cannot involve arbitrarily complex processes, as the robot's perception of the world will be in constant flux, requiring modifications to be made quickly if they are to be of any use. This work introduces navigation templates (NaT's) which are building blocks for the construction and maintenance of rough navigation plans which capture the relationship that objects in the world have to the current navigation task. By encoding only the critical relationship between the objects in the world and the navigation task, a NaT-based navigation plan is highly flexible; allowing new constraints to be quickly incorporated into the plan and existing constraints to be updated or deleted from the plan. To satisfy the robot's need for immediate local guidance, the NaT's forming the current navigation plan are passed to a transformation function. The transformation function analyzes the plan with respect to the robot's current location to quickly determine (a few times a second) the locally preferred direction of travel. This dissertation presents NaT's and the transformation function as well as the needed support systems to demonstrate the usefulness of the technique for controlling the actions of a mobile robot operating in an uncertain world.

  18. Mobile phone camera benchmarking: combination of camera speed and image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. This work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics that include both quality and speed parameters.
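
    The abstract does not specify how the combined quality/speed metric is formed, so the sketch below only illustrates one plausible construction: normalize each metric to [0, 1] and take a weighted average. The metric names, reference values, and equal weighting are assumptions for illustration, not the paper's proposal.

```python
# Hedged sketch of one possible combined camera benchmark score. All metric
# names, reference ranges, and weights below are illustrative assumptions.
def normalize(value, worst, best):
    """Map a raw metric onto [0, 1], where 1 is best (works in either direction)."""
    frac = (value - worst) / (best - worst)
    return max(0.0, min(1.0, frac))

def camera_score(quality_metrics, speed_metrics, quality_weight=0.5):
    q = sum(quality_metrics.values()) / len(quality_metrics)
    s = sum(speed_metrics.values()) / len(speed_metrics)
    return quality_weight * q + (1.0 - quality_weight) * s

quality = {
    "sharpness": normalize(1800, worst=500, best=2500),   # e.g. MTF50 in LW/PH
    "snr":       normalize(38, worst=20, best=45),        # dB
}
speed = {
    "shot_to_shot": normalize(0.8, worst=3.0, best=0.2),  # seconds, lower is better
    "autofocus":    normalize(0.4, worst=1.5, best=0.1),  # seconds, lower is better
}
print(f"Combined benchmark score: {camera_score(quality, speed):.2f}")
```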

  19. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
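
    The search phase described above (particle swarm optimization with the estimated odor-source probability map as the fitness) can be sketched as follows. The probability map here is a synthetic Gaussian blob standing in for the Bayesian/fuzzy estimate, and the PSO parameters are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch of the multi-robot search phase: PSO in which the fitness of a
# robot position is the estimated odor-source probability at that grid cell.
import numpy as np

rng = np.random.default_rng(1)
H, W = 50, 50
yy, xx = np.mgrid[0:H, 0:W]
# Stand-in for the fused probability map produced by the estimation phase.
prob_map = np.exp(-((xx - 35) ** 2 + (yy - 12) ** 2) / (2 * 6.0 ** 2))

def fitness(p):
    x, y = int(np.clip(p[0], 0, W - 1)), int(np.clip(p[1], 0, H - 1))
    return prob_map[y, x]

n_robots, iters = 4, 60
pos = rng.uniform([0, 0], [W, H], size=(n_robots, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
for _ in range(iters):
    r1, r2 = rng.random((n_robots, 1)), rng.random((n_robots, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("Estimated source location (x, y):", np.round(gbest, 1))
```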

  20. Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory

    SciTech Connect

    Manges, W.W.; Hamel, W.R.; Weisbin, C.R.; Einstein, R.; Burks, B.L.; Thompson, D.H.; Feezell, R.R.; Killough, S.M.

    1988-01-01

    The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform that includes a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 Vax 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.

  1. Automatic generation of modules of object categorization for autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Gorbenko, Anna

    2013-10-01

    Many robotic tasks require advanced systems of visual sensing. Robotic systems of visual sensing must be able to solve a number of different complex problems of visual data analysis. Object categorization is one of such problems. In this paper, we propose an approach to automatic generation of computationally effective modules of object categorization for autonomous mobile robots. This approach is based on the consideration of the stack cover problem. In particular, it is assumed that the robot is able to perform an initial inspection of the environment. After such inspection, the robot needs to solve the stack cover problem by using a supercomputer. A solution of the stack cover problem allows the robot to obtain a template for computationally effective scheduling of object categorization. Also, we consider an efficient approach to solve the stack cover problem. In particular, we consider an explicit reduction from the decision version of the stack cover problem to the satisfiability problem. For different satisfiability algorithms, the results of computational experiments are presented.

  2. 3D vision based on PMD-technology for mobile robots

    NASA Astrophysics Data System (ADS)

    Roth, Hubert J.; Schwarte, Rudolf; Ruangpayoongsak, Niramon; Kuhle, Joerg; Albrecht, Martin; Grothof, Markus; Hess, Holger

    2003-09-01

    A series of micro-robots (MERLIN: Mobile Experimental Robots for Locomotion and Intelligent Navigation) has been designed and implemented for a broad spectrum of indoor and outdoor tasks on the basis of standardized functional modules such as sensors, actuators, and radio-link communication. The sensors onboard the MERLIN robot can be divided into two categories: internal sensors for low-level control and for measuring the state of the robot, and external sensors for obstacle detection, modeling of the environment, and position estimation and navigation of the robot in a global coordinate system. The special emphasis of this paper is to describe the capabilities of MERLIN for obstacle detection, target detection, and distance measurement. Besides ultrasonic sensors, a new camera based on PMD technology is used. This Photonic Mixer Device (PMD) represents a new electro-optic device that provides a smart interface between the world of incoherent optical signals and the world of their electronic signal processing. PMD technology directly enables 3D imaging by means of the time-of-flight (TOF) principle. It offers an extremely high potential for new solutions in the robotics application field. PMD technology opens up amazing new perspectives for obstacle detection systems, target acquisition, as well as mapping of unknown environments.

  3. Terrain coverage of an unknown room by an autonomous mobile robot

    SciTech Connect

    VanderHeide, J.R.

    1995-12-05

    Terrain coverage problems are nearly as old as mankind: they were necessary early in our history for basic activities such as finding food and other necessities. As our societies and their associated machineries have grown more complex, we have not outgrown the need for this primitive skill. It is still used on a small scale for cleaning tasks and on a large scale for "search and report" missions of various kinds. The motivation for automating this process may not lie in the novelty of anything we might gain as an end product, but in freedom from something which we as humans find tedious, time-consuming and sometimes dangerous. Here we consider autonomous coverage of a terrain, typically indoor rooms, by a mobile robot that has no a priori model of the terrain. In evaluating its surroundings, the robot employs only inexpensive and commercially available ultrasonic and infrared sensors. The proposed solution is a basic step - a proof of principle - that can contribute to robots capable of autonomously performing tasks such as vacuum cleaning, mopping, radiation scanning, etc. The area of automatic terrain coverage and the closely related problem of terrain model acquisition have been studied both analytically and experimentally. Compared to the existing works, the following are three major distinguishing aspects of our study: (1) the theory is actually applied to an existing robot, (2) the robot has no a priori knowledge of the terrain, and (3) the robot can be realized relatively inexpensively.

  4. An optimal control strategy for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots in environments containing moving obstacles is presented. Collision avoidance is guaranteed if the minimum distance between the robot and the objects is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Furthermore, time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. Simulation results verify the value of the proposed strategy.

  5. Design considerations for an intelligent mobile robot for mixed-waste inspection

    SciTech Connect

    Sias, F.R.; Dawson, D.M.; Schalkoff, R.J.; Byrd, J.S.; Pettus, R.O.

    1993-06-01

    Large quantities of low-level radioactive waste are stored in steel drums at various Department of Energy (DOE) sites in the United States. Much of the stored waste qualifies as mixed waste and falls under Environmental Protection Agency (EPA) regulations that require periodic inspection. A semi-autonomous mobile robot is being developed during Phase 1 of a DOE contract to perform the inspection task and consequently reduce the radiation exposure of inspection personnel to ALARA (as low as reasonably achievable). The nature of the inspection process, the resulting robot design requirements, and the current status of the project are the subjects of this paper.

  6. Optimization of the computational load of a hypercube supercomputer onboard a mobile robot

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Toomarian, N.; Protopopescu, V.

    1987-01-01

    A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
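
    As a concrete illustration of the fast-simulated-annealing variant of this methodology, the hedged sketch below anneals an assignment of robot-control tasks onto hypercube processors, with an energy term for load imbalance plus a penalty that grows with the Hamming distance between processors hosting dependent tasks. The task costs, penalty weights, and cooling schedule are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: simulated annealing of a task-to-processor assignment on a
# hypercube. Energy = load imbalance + 0.5 * sum of Hamming distances between
# processors that host precedence-related tasks (all values are illustrative).
import math
import random

random.seed(0)
n_tasks, n_procs = 24, 8                                  # order-3 hypercube
cost = [random.uniform(1.0, 5.0) for _ in range(n_tasks)]
precedence = [(i, i + 1) for i in range(0, n_tasks - 1, 4)]  # toy constraints

def energy(assign):
    loads = [0.0] * n_procs
    for t, p in enumerate(assign):
        loads[p] += cost[t]
    imbalance = max(loads) - min(loads)
    comm = sum(bin(assign[a] ^ assign[b]).count("1") for a, b in precedence)
    return imbalance + 0.5 * comm

assign = [random.randrange(n_procs) for _ in range(n_tasks)]
e, temp = energy(assign), 10.0
while temp > 1e-3:
    t = random.randrange(n_tasks)
    old = assign[t]
    assign[t] = random.randrange(n_procs)
    e_new = energy(assign)
    if e_new > e and random.random() > math.exp((e - e_new) / temp):
        assign[t] = old                                   # reject the uphill move
    else:
        e = e_new                                         # accept the move
    temp *= 0.999                                         # geometric cooling

print("Final configuration energy:", round(e, 2))
```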

  7. Optimization of the computational load of a hypercube supercomputer onboard a mobile robot.

    PubMed

    Barhen, J; Toomarian, N; Protopopescu, V

    1987-12-01

    A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.

  8. Trial Development of a Mobile Feeding Assistive Robotic Arm for People with Physical Disabilities of the Extremities

    NASA Astrophysics Data System (ADS)

    Uehara, Hideyuki; Higa, Hiroki; Soken, Takashi; Namihira, Yoshinori

    A mobile feeding assistive robotic arm for people with physical disabilities of the extremities has been developed in this paper. The system is composed of a robotic arm, a microcontroller, and its interface. The main unit of the robotic arm can be contained in a laptop computer's briefcase. Its weight is 5 kg, including two 12-V lead acid rechargeable batteries. The robotic arm can also be mounted on a wheelchair. To verify the performance of the mobile robotic arm system, a tea-drinking task was experimentally performed by two able-bodied subjects as well as three persons suffering from muscular dystrophy. From the experimental results, it was clear that they could smoothly carry out the drinking task, and that the robotic arm could firmly grasp a commercially available 500-ml plastic bottle. An eating task was also performed by the two able-bodied subjects. The experimental results showed that they could eat porridge by using a spoon without any difficulty.

  9. Distributed Planning and Control for Teams of Cooperating Mobile Robots

    SciTech Connect

    Parker, L.E.

    2004-06-15

    This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of the control approaches for distributed planning and cooperation in multi-robot teams.

  10. HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.

    PubMed

    Lin, Huei-Yung; Wang, Min-Liang

    2014-01-01

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach. PMID:25192317

  11. HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.

    PubMed

    Lin, Huei-Yung; Wang, Min-Liang

    2014-09-04

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.

  12. HOPIS: Hybrid Omnidirectional and Perspective Imaging System for Mobile Robots

    PubMed Central

    Lin, Huei-Yung.; Wang, Min-Liang.

    2014-01-01

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach. PMID:25192317

  13. Distributing Planning and Control for Teams of Cooperating Mobile Robots

    SciTech Connect

    Parker, L.E.

    2004-07-19

    This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of our control approaches for distributed planning and cooperation in multi-robot teams. The primary objectives of this research project were to: (1) Develop autonomous control technologies to enable multiple vehicles to work together cooperatively, (2) Provide the foundational capabilities for a human operator to exercise oversight and guidance during the multi-vehicle task execution, and (3) Integrate these capabilities to the ALLIANCE-based autonomous control approach for multi-robot teams. These objectives have been successfully met with the results implemented and demonstrated in a near real-time multi-vehicle simulation of up to four vehicles performing mission-relevant tasks.

  14. Idiotypic immune networks in mobile-robot control.

    PubMed

    Whitbrook, Amanda M; Aickelin, Uwe; Garibaldi, Jonathan M

    2007-12-01

    Jerne's idiotypic-network theory postulates that the immune response involves interantibody stimulation and suppression, as well as matching to antigens. The theory has proved the most popular artificial immune system (AIS) model for incorporation into behavior-based robotics, but guidelines for implementing idiotypic selection are scarce. Furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with nonidiotypic systems. This paper aims to address these issues. A method for integrating an idiotypic AIS network with a reinforcement-learning (RL)-based control system is described, and the mechanisms underlying antibody stimulation and suppression are explained in detail. Some hypotheses that account for the network advantage are put forward and tested using three systems with increasing idiotypic complexity. The basic RL, a simplified hybrid AIS-RL that implements idiotypic selection independently of derived concentration levels, and a full hybrid AIS-RL scheme are examined. The test bed takes the form of a simulated Pioneer robot that is required to navigate through maze worlds detecting and tracking door markers.

  15. Dynamics and control for Constrained Multibody Systems modeled with Maggi's equation: Application to Differential Mobile Robots Part II

    NASA Astrophysics Data System (ADS)

    Amengonu, Yawo H.; Kakad, Yogendra P.

    2014-07-01

    Quasivelocity techniques were applied to derive the dynamics of a Differential Wheeled Mobile Robot (DWMR) in the companion paper. The present paper formulates a control system design for trajectory tracking of this class of robots. The method develops a feedback linearization technique for the nonlinear system using a dynamic extension algorithm. The effectiveness of the nonlinear controller is illustrated with a simulation example.

  16. Mechatronic demonstrator for testing sensors to be used in mobile robotics functioning on the inverted pendulum concept

    NASA Astrophysics Data System (ADS)

    Sandru, L.; Dolga, V.; Moldovan, C.; Savu, D.

    2016-08-01

    As the educational system evolves, many mechatronic demonstrators are used in schools and universities to demonstrate technical and theoretical principles, to analyze new concepts, and to apply the studied information in building practical hardware. The idea of using mobile robots for different applications is very common today. To choose the best hardware and software configuration for a mobile robot, it is necessary to make a documented analysis of the environment in which the mobile robot will operate. In our demonstrator, we collect information from an optical sensor that can be used to maintain the equilibrium of a mobile robot by reading the light reflected from a surface. After building the hardware, we perform a study of how the optical sensor responds under different ambient light conditions and surfaces. To provide reference points, we also collect data from gyroscope, accelerometer, and rotation sensors.

  17. A 2D chaotic path planning for mobile robots accomplishing boundary surveillance missions in adversarial conditions

    NASA Astrophysics Data System (ADS)

    Curiac, Daniel-Ioan; Volosencu, Constantin

    2014-10-01

    The path-planning algorithm represents a crucial issue for every autonomous mobile robot. In normal circumstances a patrol robot will compute an optimal path to ensure its task accomplishment, but in adversarial conditions the problem becomes more complicated. Here, the robot’s trajectory needs to be altered into a misleading and unpredictable path to cope with potential opponents. Chaotic systems provide the needed framework for obtaining unpredictable motion in all three basic robot surveillance missions: area, points of interest, and boundary monitoring. Proficient approaches have been provided for the first two surveillance tasks, but for boundary patrol missions no method has been reported yet. This paper addresses the mentioned research gap by proposing an efficient method, based on the chaotic dynamics of the Hénon system, to ensure unpredictable boundary patrol on any shape of chosen closed contour.
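
    A minimal sketch of the idea, not the authors' algorithm: iterate the Hénon map (a = 1.4, b = 0.3) and map the chaotic x-iterates to arc length along a closed contour, so successive patrol waypoints jump unpredictably around the boundary. The square contour and the rescaling rule are illustrative assumptions.

```python
# Hedged sketch: chaotic boundary-patrol waypoints from Hénon-map iterates.
import numpy as np

def henon_sequence(n, a=1.4, b=0.3, x=0.1, y=0.1):
    xs = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs.append(x)
    return np.array(xs)

# Closed boundary: a unit-square contour given by its vertices (illustrative).
verts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
seg = np.diff(verts, axis=0)
seg_len = np.linalg.norm(seg, axis=1)
cum = np.concatenate([[0.0], np.cumsum(seg_len)])
perimeter = cum[-1]

def point_at(s):
    """Point on the contour at arc length s, 0 <= s < perimeter."""
    i = np.searchsorted(cum, s, side="right") - 1
    frac = (s - cum[i]) / seg_len[i]
    return verts[i] + frac * seg[i]

xs = henon_sequence(20)
# Rescale the chaotic iterates onto [0, perimeter) to obtain arc lengths.
s_vals = (xs - xs.min()) / (xs.max() - xs.min()) * perimeter * 0.999
waypoints = np.array([point_at(s) for s in s_vals])
print(np.round(waypoints[:5], 3))
```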

  18. Building representations for the environment of a mobile robot from image data

    NASA Astrophysics Data System (ADS)

    Taylor, Camillo J.

    1992-02-01

    This paper presents an alternative approach to building representations for the environment of a mobile robot. This approach is based on recording the geometrical relationships between the observed features rather than their absolute position with respect to an arbitrary coordinate frame of reference. The resulting representation takes the form of a graph where the nodes represent the observed features and the edges represent the relationships between the features. This representation is particularly well suited for recognition tasks which can be reformulated as graph matching problems. Unlike Cartesian maps, relational maps can be built and maintained without any estimates for the position of the robot which means that the errors in this representation will be independent of any errors in the estimates for the robot position.

  19. Automatic detection and classification of obstacles with applications in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.

    2016-04-01

    A hardware implementation of automatic detection and classification of objects that can represent an obstacle for an autonomous mobile robot using stereo vision algorithms is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. This method is divided into two parts: the first is the object detection step, based on the distance from the objects to the camera and a BLOB analysis; the second is the classification step, based on visual primitives and an SVM classifier. The proposed method is performed on a GPU in order to reduce processing time. This is done using hardware based on multi-core processors and a GPU platform, with an NVIDIA GeForce GT640 graphics card and Matlab on a PC running Windows 10.
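
    The two-stage pipeline (distance-based detection with BLOB analysis, then SVM classification on simple visual features) can be sketched as below. The depth threshold, the features, and the toy training set are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch: (1) threshold a depth image on distance and label connected
# components (BLOBs), (2) classify each BLOB with an SVM on toy features.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

rng = np.random.default_rng(6)
depth = np.full((120, 160), 10.0)                 # metres; 10 m = far background
depth[40:70, 50:80] = 2.0                         # a nearby object
depth += rng.normal(0, 0.05, depth.shape)

# Stage 1: distance threshold + connected-component (BLOB) analysis.
mask = depth < 4.0
labels, _ = ndimage.label(mask)
blobs = ndimage.find_objects(labels)

# Stage 2: classify each blob with an SVM trained on toy
# (bounding-box area, mean depth) examples; 1 = obstacle, 0 = ignorable.
X_train = np.array([[900, 2.0], [200, 3.5], [1200, 1.5], [150, 3.0]])
y_train = np.array([1, 0, 1, 0])
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

for sl in blobs:
    region = depth[sl]
    features = np.array([[region.size, region.mean()]])
    verdict = "obstacle" if clf.predict(features)[0] else "ignore"
    print("blob at", sl, "->", verdict)
```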

  20. On the asynchronously continuous control of mobile robot movement by motor cortical spiking activity.

    PubMed

    Xu, Zhiming; So, Rosa Q; Toe, Kyaw Kyar; Ang, Kai Keng; Guan, Cuntai

    2014-01-01

    This paper presents an asynchronous intracortical brain-computer interface (BCI) that allows the subject to continuously drive a mobile robot. This system has great implications for enabling disabled patients to move around. By carefully designing a multiclass support vector machine (SVM), the subject's self-paced instantaneous movement intents are continuously decoded to control the mobile robot. In particular, we studied the stability of the neural representation of the movement directions. Experimental results on the nonhuman primate showed that the overt movement directions were stably represented in the ensemble of recorded units, and our SVM classifier could successfully decode such movements continuously along the desired movement path. However, the neural representation of the stop state for the self-paced control was not stable and could drift. PMID:25570634

  1. Control of mechanical systems with rolling constraints: Application to dynamic control of mobile robots

    NASA Technical Reports Server (NTRS)

    Sarkar, Nilanjan; Yun, Xiaoping; Kumar, Vijay

    1994-01-01

    There are many examples of mechanical systems that require rolling contacts between two or more rigid bodies. Rolling contacts engender nonholonomic constraints in an otherwise holonomic system. In this article, we develop a unified approach to the control of mechanical systems subject to both holonomic and nonholonomic constraints. We first present a state space realization of a constrained system. We then discuss the input-output linearization and zero dynamics of the system. This approach is applied to the dynamic control of mobile robots. Two types of control algorithms for mobile robots are investigated: trajectory tracking and path following. In each case, a smooth nonlinear feedback is obtained to achieve asymptotic input-output stability and Lagrange stability of the overall system. Simulation results are presented to demonstrate the effectiveness of the control algorithms and to compare the performance of trajectory-tracking and path-following algorithms.

  2. Flexible Virtual Structure Consideration in Dynamic Modeling of Mobile Robots Formation

    NASA Astrophysics Data System (ADS)

    El Kamel, A. Essghaier; Beji, L.; Lerbet, J.; Abichou, A.

    2009-03-01

    In cooperative mobile robotics, we look for formation keeping and maintenance of a geometric configuration during movement. As a solution to these problems, the concept of a virtual structure is considered. Based on this idea, we have developed an efficient flexible virtual structure, describing the dynamic model of n vehicles in formation where the whole formation is kept dependent. Note that, for 2D and 3D space navigation, only a rigid virtual structure has been proposed in the literature, and the problem was limited to the kinematic behavior of the structure. Hence, the flexible virtual structure in dynamic modeling of mobile robot formations presented in this paper gives the formation more capability to avoid obstacles in hostile environments while keeping formation and avoiding inter-agent collisions.

  3. Integrating grid-based and topological maps for mobile robot navigation

    SciTech Connect

    Thrun, S.; Buecken, A.

    1996-12-31

    Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are considerably more difficult to learn in large-scale environments. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and Bayesian integration. Topological maps are generated on top of the grid-based maps by partitioning the latter into coherent regions. By combining both paradigms, grid-based and topological, the approach presented here gains the best of both worlds: accuracy/consistency and efficiency. The paper gives results for autonomously operating a mobile robot equipped with sonar sensors in populated multi-room environments.
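
    A hedged sketch of the grid-to-topology direction only: group free cells of an occupancy grid into coarse blocks, keep the mostly-free blocks as region nodes, and connect neighbouring regions whose shared boundary is passable. The block size and passability test are crude stand-ins for the paper's partitioning into coherent regions.

```python
# Hedged sketch: derive a coarse topological graph from a boolean occupancy grid.
import numpy as np

occ = np.zeros((40, 40), dtype=bool)          # False = free, True = occupied
occ[:, 19:21] = True                          # a vertical wall...
occ[18:22, 19:21] = False                     # ...with a doorway in the middle

block = 10                                    # region size in cells (an assumption)

def connected(b1, b2):
    """True if at least one free cell pair straddles the shared block boundary."""
    (i1, j1), (i2, j2) = b1, b2
    if i2 == i1 + 1:                          # vertically adjacent blocks
        r = i2 * block
        cols = slice(j1 * block, (j1 + 1) * block)
        return bool(np.any(~occ[r - 1, cols] & ~occ[r, cols]))
    c = j2 * block                            # horizontally adjacent blocks
    rows = slice(i1 * block, (i1 + 1) * block)
    return bool(np.any(~occ[rows, c - 1] & ~occ[rows, c]))

nodes, edges = {}, set()
for bi in range(occ.shape[0] // block):
    for bj in range(occ.shape[1] // block):
        cells = occ[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
        if (~cells).mean() > 0.5:             # mostly free -> keep as a region node
            nodes[(bi, bj)] = ((bi + 0.5) * block, (bj + 0.5) * block)

for (bi, bj) in nodes:
    for nb in [(bi + 1, bj), (bi, bj + 1)]:
        if nb in nodes and connected((bi, bj), nb):
            edges.add(((bi, bj), nb))

print(f"{len(nodes)} region nodes, {len(edges)} passable edges")
```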

  4. Ultra wide-band localization and SLAM: a comparative study for mobile robot navigation.

    PubMed

    Segura, Marcelo J; Auat Cheein, Fernando A; Toibero, Juan M; Mut, Vicente; Carelli, Ricardo

    2011-01-01

    In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel a UWB localization based system and a SLAM algorithm on a mobile robot navigating within an environment. Real time results as well as error analysis are also shown in this work.
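
    TOA-based UWB localization ultimately reduces to multilateration from the measured ranges. The sketch below shows the standard linearized least-squares solution for a 2D position from four fixed anchors; the anchor layout and noise level are illustrative assumptions, not the experimental setup of the paper.

```python
# Hedged sketch: 2D multilateration from UWB TOA ranges by linearized least squares.
import numpy as np

rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])
true_pos = np.array([3.2, 4.1])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

# Subtract the first anchor's range equation to remove the quadratic unknowns:
#   2*(a_i - a_0) . p = |a_i|^2 - |a_0|^2 - r_i^2 + r_0^2   for i > 0
A = 2.0 * (anchors[1:] - anchors[0])
b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
     - ranges[1:] ** 2 + ranges[0] ** 2)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("Estimated position:", np.round(est, 2), "true:", true_pos)
```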

  5. Ultra wide-band localization and SLAM: a comparative study for mobile robot navigation.

    PubMed

    Segura, Marcelo J; Auat Cheein, Fernando A; Toibero, Juan M; Mut, Vicente; Carelli, Ricardo

    2011-01-01

    In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel a UWB localization based system and a SLAM algorithm on a mobile robot navigating within an environment. Real time results as well as error analysis are also shown in this work. PMID:22319397

  6. Ultra Wide-Band Localization and SLAM: A Comparative Study for Mobile Robot Navigation

    PubMed Central

    Segura, Marcelo J.; Auat Cheein, Fernando A.; Toibero, Juan M.; Mut, Vicente; Carelli, Ricardo

    2011-01-01

    In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel a UWB localization based system and a SLAM algorithm on a mobile robot navigating within an environment. Real time results as well as error analysis are also shown in this work. PMID:22319397

  7. On the asynchronously continuous control of mobile robot movement by motor cortical spiking activity.

    PubMed

    Xu, Zhiming; So, Rosa Q; Toe, Kyaw Kyar; Ang, Kai Keng; Guan, Cuntai

    2014-01-01

    This paper presents an asynchronous intracortical brain-computer interface (BCI) that allows the subject to continuously drive a mobile robot. This system has great implications for enabling disabled patients to move around. By carefully designing a multiclass support vector machine (SVM), the subject's self-paced instantaneous movement intents are continuously decoded to control the mobile robot. In particular, we studied the stability of the neural representation of the movement directions. Experimental results on the nonhuman primate showed that the overt movement directions were stably represented in the ensemble of recorded units, and our SVM classifier could successfully decode such movements continuously along the desired movement path. However, the neural representation of the stop state for the self-paced control was not stable and could drift.

  8. A Petri-net coordination model for an intelligent mobile robot

    NASA Technical Reports Server (NTRS)

    Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.

    1990-01-01

    The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.

  9. Evolution of Implicit and Explicit Communication in Mobile Robots

    NASA Astrophysics Data System (ADS)

    de Greeff, Joachim; Nolfi, Stefano

    This work investigates the conditions in which a population of embodied agents, evolved for the ability to display coordinated/cooperative skills, can develop an ability to communicate, whether and to what extent the evolved communication system can become more complex during the course of the evolutionary process, and how the characteristics of such a communication system vary over evolution. The analysis of the obtained results indicates that evolving robots develop a capacity to access/generate information which has a communicative value, an ability to produce different signals encoding useful regularities, and an ability to react appropriately to explicit and implicit signals. The analysis of the obtained results also allows us to formulate detailed hypotheses on the evolution of communication concerning aspects such as: (i) how communication can emerge from a population of initially non-communicating agents, (ii) how communication systems can become more complex, and (iii) how signals/meanings can originate and how they can be grounded in the agents' sensory-motor states.

  10. A Computer Vision For Navigation Of Mobile Robots

    NASA Astrophysics Data System (ADS)

    Miaoliang, Zhu; Zhijun, He

    1987-01-01

    A computer system that can find the pathway in the robot's sensor field of view with only one camera has been developed. The low-level extraction of linear segments is performed by an edge-following algorithm with the local gray level and gradient as heuristics. Symbolic lists that describe the optical and geometric properties of the linear segments are generated as the output of the low level and submitted to the high level. In the high-level stage, a reasoning mechanism, whose knowledge base consists of rules modeling three basic types of pathway segments (non-branch, branch, and end), works on the symbolic lists to judge which type occurs in the sensor area. Several parameters, such as the exact position of the pathway, can be calculated as signals for the navigation control system. The experiments showed that the system behaved especially well in the presence of noise, shadows, and environmental irregularities.

  11. Vision-aided inertial navigation system for robotic mobile mapping

    NASA Astrophysics Data System (ADS)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo pair. Due to its autonomous nature, the SLAM performance is further affected by the quality of the IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  12. Formation tracker design of multiple mobile robots with wheel perturbations: adaptive output-feedback approach

    NASA Astrophysics Data System (ADS)

    Yoo, Sung Jin

    2016-11-01

    This paper presents a theoretical design approach for output-feedback formation tracking of multiple mobile robots under wheel perturbations. It is assumed that these perturbations are unknown and the linear and angular velocities of the robots are unmeasurable. First, adaptive state observers for estimating unmeasurable velocities of the robots are developed under the robots' kinematics and dynamics including wheel perturbation effects. Then, we derive a virtual-structure-based formation tracker scheme according to the observer dynamic surface design procedure. The main difficulty of the output-feedback control design is to manage the coupling problems between unmeasurable velocities and unknown wheel perturbation effects. These problems are avoided by using the adaptive technique and the function approximation property based on fuzzy logic systems. From the Lyapunov stability analysis, it is shown that point tracking errors of each robot and synchronisation errors for the desired formation converge to an adjustable neighbourhood of the origin, while all signals in the controlled closed-loop system are semiglobally uniformly ultimately bounded.

  13. (abstract) A Mobile Robot for Remote Response to Incidents Involving Hazardous Materials

    NASA Technical Reports Server (NTRS)

    Welch, Richard V.

    1994-01-01

    This paper will report the status of the Emergency Response Robotics project, a teleoperated mobile robot system being developed at JPL for use by the JPL Fire Department/HAZMAT Team. The project, which began in 1991, has been focused on developing a robotic vehicle which can be quickly deployed by HAZMAT Team personnel for first entry into an incident site. The primary goals of the system are to gain access to the site, locate and identify the hazard, and aid in its mitigation. The involvement of JPL Fire Department/HAZMAT Team personnel has been critical in guiding the design and evaluation of the system. A unique feature of the current robot, called HAZBOT III, is its special design for operation in combustible environments. This includes the use of all solid state electronics, brushless motors, and internal pressurization. Demonstration and testing of the system with HAZMAT Team personnel has shown that teleoperated robots, such as HAZBOT III, can successfully gain access to incident sites locating and identifying hazardous material spills. Work is continuing to enable more complex missions through the addition of appropriate sensor technology and enhancement of the operator interface.

  14. Low-cost mobile robot. Final report, April-October 1987

    SciTech Connect

    Evans, J.M.

    1987-10-07

    In the 1986 SBIR solicitation, DARPA identified a need for a low-cost mobile robot for laboratory research use. This report covers the work by Transitions Research Corporation in addressing this opportunity under a Phase I SBIR contract. TRC has developed technical requirements and selected a vehicle design, a trajectory control system, and a communications system to meet those requirements. Use of a common research vehicle base would reduce the time and cost to carry out experiments in navigation of autonomous mobile vehicles, would expedite sharing and comparing of research results, and would advance the understanding of the evolution of artificial intelligence.

  15. Two-dimensional radial laser scanning for circular marker detection and external mobile robot tracking.

    PubMed

    Teixidó, Mercè; Pallejà, Tomàs; Font, Davinia; Tresanchez, Marcel; Moreno, Javier; Palacín, Jordi

    2012-01-01

    This paper presents the use of an external fixed two-dimensional laser scanner to detect cylindrical targets attached to moving devices, such as a mobile robot. This proposal is based on the detection of circular markers in the raw data provided by the laser scanner by applying an algorithm for outlier avoidance and a least-squares circular fitting. Some experiments have been developed to empirically validate the proposal with different cylindrical targets in order to estimate the location and tracking errors achieved, which are generally less than 20 mm in the area covered by the laser sensor. As a result of the validation experiments, several error maps have been obtained in order to give an estimate of the uncertainty of any location computed. This proposal has been validated with a medium-sized mobile robot with an attached cylindrical target (diameter 200 mm). The trajectory of the mobile robot was estimated with an average location error of less than 15 mm, and the real location error in each individual circular fitting was similar to the error estimated with the obtained error maps. The radial area covered in this validation experiment was up to 10 m, a value that depends on the radius of the cylindrical target and the radial density of the distance range points provided by the laser scanner but this area can be increased by combining the information of additional external laser scanners. PMID:23443390
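
    The core geometric step, a least-squares circular fit to the laser points of a candidate marker, can be sketched with the algebraic (Kasa) fit below. The synthetic half-circle scan, noise level, and 200 mm target diameter echo the abstract's setup, but the code is illustrative and omits the outlier-avoidance step.

```python
# Hedged sketch: algebraic (Kasa) least-squares circle fit to 2D laser points.
import numpy as np

rng = np.random.default_rng(3)
true_c, true_r = np.array([2.0, 1.5]), 0.10          # 200 mm diameter target
theta = rng.uniform(0, np.pi, 40)                    # the scanner sees ~half the circle
pts = true_c + true_r * np.column_stack([np.cos(theta), np.sin(theta)])
pts += rng.normal(0, 0.003, pts.shape)               # ~3 mm range noise

# Fit x^2 + y^2 + D*x + E*y + F = 0 by linear least squares.
x, y = pts[:, 0], pts[:, 1]
A = np.column_stack([x, y, np.ones_like(x)])
b = -(x ** 2 + y ** 2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
center = np.array([-D / 2.0, -E / 2.0])
radius = float(np.sqrt(center @ center - F))
print("Center:", np.round(center, 3), "radius:", round(radius, 3))
```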

  16. Two-Dimensional Radial Laser Scanning for Circular Marker Detection and External Mobile Robot Tracking

    PubMed Central

    Teixidó, Mercè; Pallejà, Tomàs; Font, Davinia; Tresanchez, Marcel; Moreno, Javier; Palacín, Jordi

    2012-01-01

    This paper presents the use of an external fixed two-dimensional laser scanner to detect cylindrical targets attached to moving devices, such as a mobile robot. This proposal is based on the detection of circular markers in the raw data provided by the laser scanner by applying an algorithm for outlier avoidance and a least-squares circular fitting. Some experiments have been developed to empirically validate the proposal with different cylindrical targets in order to estimate the location and tracking errors achieved, which are generally less than 20 mm in the area covered by the laser sensor. As a result of the validation experiments, several error maps have been obtained in order to give an estimate of the uncertainty of any location computed. This proposal has been validated with a medium-sized mobile robot with an attached cylindrical target (diameter 200 mm). The trajectory of the mobile robot was estimated with an average location error of less than 15 mm, and the real location error in each individual circular fitting was similar to the error estimated with the obtained error maps. The radial area covered in this validation experiment was up to 10 m, a value that depends on the radius of the cylindrical target and the radial density of the distance range points provided by the laser scanner but this area can be increased by combining the information of additional external laser scanners. PMID:23443390

  17. Two-dimensional radial laser scanning for circular marker detection and external mobile robot tracking.

    PubMed

    Teixidó, Mercè; Pallejà, Tomàs; Font, Davinia; Tresanchez, Marcel; Moreno, Javier; Palacín, Jordi

    2012-11-28

    This paper presents the use of an external fixed two-dimensional laser scanner to detect cylindrical targets attached to moving devices, such as a mobile robot. This proposal is based on the detection of circular markers in the raw data provided by the laser scanner by applying an algorithm for outlier avoidance and a least-squares circular fitting. Some experiments have been developed to empirically validate the proposal with different cylindrical targets in order to estimate the location and tracking errors achieved, which are generally less than 20 mm in the area covered by the laser sensor. As a result of the validation experiments, several error maps have been obtained in order to give an estimate of the uncertainty of any location computed. This proposal has been validated with a medium-sized mobile robot with an attached cylindrical target (diameter 200 mm). The trajectory of the mobile robot was estimated with an average location error of less than 15 mm, and the real location error in each individual circular fitting was similar to the error estimated with the obtained error maps. The radial area covered in this validation experiment was up to 10 m, a value that depends on the radius of the cylindrical target and the radial density of the distance range points provided by the laser scanner but this area can be increased by combining the information of additional external laser scanners.

  18. Neuromodulated Neural Hardware and Its Implementation on an Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tokura, Seiji; Ishiguro, Akio; Okuma, Shigeru

    In order to construct truly autonomous mobile robots, the concept of packaging is indispensable: all parts such as controllers, power systems, and batteries should be embedded inside the body. Therefore, implementing the controller in hardware is one of the most promising approaches, since this contributes to low power consumption, miniaturization, and so on. Another crucial requirement in the field of autonomous mobile robots is robustness; that is, autonomous mobile robots have to cope with their unpredictably changing environments in real time. In this study, to meet these requirements the concept of a Dynamically Rearrangeable Electrical Circuit (DREC) is proposed. In addition, we implement the DREC on FPGAs as physical electronic circuits by using the diffusion-reaction mechanism of neuromodulation, which is widely observed in biological nervous systems. We developed the DREC for a peg-pushing task as a practical example and confirmed that the physical DREC can successfully regulate the behavior according to the situation by changing its properties in real time.

  19. Application of a model of instrumental conditioning to mobile robot control

    NASA Astrophysics Data System (ADS)

    Saksida, Lisa M.; Touretzky, D. S.

    1997-09-01

    Instrumental conditioning is a psychological process whereby an animal learns to associate its actions with their consequences. This type of learning is exploited in animal training techniques such as 'shaping by successive approximations,' which enables trainers to gradually adjust the animal's behavior by giving strategically timed reinforcements. While this is similar in principle to reinforcement learning, the real phenomenon includes many subtle effects not considered in the machine learning literature. In addition, a good deal of domain information is utilized by an animal learning a new task; it does not start from scratch every time it learns a new behavior. For these reasons, it is not surprising that mobile robot learning algorithms have yet to approach the sophistication and robustness of animal learning. A serious attempt to model instrumental learning could prove fruitful for improving machine learning techniques. In the present paper, we develop a computational theory of shaping at a level appropriate for controlling mobile robots. The theory is based on a series of mechanisms for 'behavior editing,' in which pre-existing behaviors, either innate or previously learned, can be dramatically changed in magnitude, shifted in direction, or otherwise manipulated so as to produce new behavioral routines. We have implemented our theory on Amelia, an RWI B21 mobile robot equipped with a gripper and color video camera. We provide results from training Amelia on several tasks, all of which were constructed as variations of one innate behavior, object-pursuit.

  20. Relation between repeatability and speed of robot-based systems for composite aircraft production through multilateration sensor system

    NASA Astrophysics Data System (ADS)

    Bock, M.; Perner, M.; Krombholz, C.; Beykirch, B.

    2015-03-01

    Fiber composites are becoming increasingly important in different fields of lightweight application. To meet the estimated demand for components made of carbon fiber reinforced plastics, the use of industrial robots in production is suggested. A high layup velocity is addressed in order to significantly increase the production rate. Today, the layup of the fiber material is performed by gantry systems. They are heavy, slow, and the variety of possible part shapes is limited. Articulated robots offer a huge operational area in relation to their construction size. Moreover, they are flexible enough to lay up fiber material into differently shaped molds. However, standard articulated robots are less accurate and more susceptible to vibration than gantry systems. Therefore, this paper illustrates an approach for classifying volumetric errors in order to obtain a relation between the achievable speed in production and precision; the result is a prediction of precision at a defined speed. Based on the measurement results, the repeatability of the robotic unit within the workspace is calculated and presented. At the minimum speed applicable in production, the repeatability is less than 30 mm. Subsequently, an online strategy for path error compensation is presented. The approach uses a multilateration system that consists of four laser tracer units and measures the current absolute position of a reflector mounted at the end-effector of the robot. By calculating the deviation between the planned and actual positions, a compensating motion is applied. The paper concludes with a discussion of further investigations.

  1. Gait speed using powered robotic exoskeletons after spinal cord injury: a systematic review and correlational study.

    PubMed

    Louie, Dennis R; Eng, Janice J; Lam, Tania

    2015-01-01

    Powered robotic exoskeletons are an emerging technology of wearable orthoses that can be used as an assistive device to enable non-ambulatory individuals with spinal cord injury (SCI) to walk, or as a rehabilitation tool to improve walking ability in ambulatory individuals with SCI. No studies to date have systematically reviewed the literature on the efficacy of powered exoskeletons on restoring walking function. Our objective was to systematically review the literature to determine the gait speed attained by individuals with SCI when using a powered exoskeleton to walk, factors influencing this speed, and characteristics of studies involving a powered exoskeleton (e.g. inclusion criteria, screening, and training processes). A systematic search in computerized databases was conducted to identify articles that reported on walking outcomes when using a powered exoskeleton. Individual gait speed data from each study was extracted. Pearson correlations were performed between gait speed and 1) age, 2) years post-injury, 3) injury level, and 4) number of training sessions. Fifteen articles met inclusion criteria, 14 of which investigated the powered exoskeleton as an assistive device for non-ambulatory individuals and one which used it as a training intervention for ambulatory individuals with SCI. The mean gait speed attained by non-ambulatory participants (n = 84) while wearing a powered exoskeleton was 0.26 m/s, with the majority having a thoracic-level motor-complete injury. Twelve articles reported individual data for the non-ambulatory participants, from which a positive correlation was found between gait speed and 1) age (r = 0.27, 95 % CI 0.02-0.48, p = 0.03, 63 participants), 2) injury level (r = 0.27, 95 % CI 0.02-0.48, p = 0.03, 63 participants), and 3) training sessions (r = 0.41, 95 % CI 0.16-0.61, p = 0.002, 55 participants). In conclusion, powered exoskeletons can provide non-ambulatory individuals with thoracic-level motor
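
    The correlational analysis reported above (Pearson r with 95% confidence intervals) can be reproduced in a few lines; the sketch below uses the Fisher z-transform for the interval and made-up gait-speed data, so the numbers are illustrative rather than the study's.

```python
# Hedged sketch: Pearson correlation with a 95% CI via the Fisher z-transform,
# applied to synthetic gait-speed vs. training-session data (n = 63 as in the review).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 63
training_sessions = rng.integers(5, 60, n)
gait_speed = 0.15 + 0.002 * training_sessions + rng.normal(0, 0.05, n)  # m/s

r, p = stats.pearsonr(training_sessions, gait_speed)
z = np.arctanh(r)                      # Fisher z-transform of r
se = 1.0 / np.sqrt(n - 3)              # standard error of z
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p:.3f}")
```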

  2. Applying High-Speed Vision Sensing to an Industrial Robot for High-Performance Position Regulation under Uncertainties.

    PubMed

    Huang, Shouren; Bergström, Niklas; Yamakawa, Yuji; Senoo, Taku; Ishikawa, Masatoshi

    2016-07-29

    It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot's end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed.
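    To make the coarse-to-fine idea above concrete, the following sketch shows a minimal compensation loop in which a fast 1-DoF add-on actuator cancels the residual error measured by a high-rate vision sensor while the main robot handles the coarse motion. All names (FineActuator, read_marker_position) and parameter values are hypothetical placeholders, not the authors' implementation.

        import time

        class FineActuator:
            """Stand-in for a 1-DoF high-speed compensation axis."""
            def __init__(self):
                self.offset = 0.0
            def move_to(self, offset_mm):
                self.offset = offset_mm  # real hardware would be commanded here

        def read_marker_position():
            """Placeholder for the 1000 Hz vision measurement of the peg position (mm)."""
            return 0.05  # pretend the sensed residual error is 0.05 mm

        def compensate(target_mm, actuator, rate_hz=1000, gain=0.8, duration_s=0.01):
            """Simple proportional correction running at the vision frame rate."""
            dt = 1.0 / rate_hz
            for _ in range(int(duration_s * rate_hz)):
                error = target_mm - read_marker_position()
                actuator.move_to(actuator.offset + gain * error)
                time.sleep(dt)

        compensate(target_mm=0.0, actuator=FineActuator())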

  3. Applying High-Speed Vision Sensing to an Industrial Robot for High-Performance Position Regulation under Uncertainties.

    PubMed

    Huang, Shouren; Bergström, Niklas; Yamakawa, Yuji; Senoo, Taku; Ishikawa, Masatoshi

    2016-01-01

    It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot's end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed. PMID:27483274

  4. Sensors Fusion based Online Mapping and Features Extraction of Mobile Robot in the Road Following and Roundabout

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Mailah, Musa; Yussof, Wan Azhar B.; Hamedon, Zamzuri B.; Yussof, Zulkifli B.; Majeed, Anwar P. P.

    2016-02-01

    A road-feature-extraction-based mapping system using a sensor fusion technique for mobile robot navigation in road environments is presented in this paper. The online mapping of the mobile robot is performed continuously in the road environment to find the road properties that enable the robot to move from a given start position to a pre-determined goal while discovering and detecting roundabouts. The sensor fusion, involving a laser range finder, a camera, and odometry installed on a new platform, is used to find the path of the robot and localize it within its environment. Local maps are developed using the camera and the laser range finder to recognize road border parameters such as road width, curbs, and roundabouts. Results show the capability of the robot, with the proposed algorithms, to effectively identify the road environment and build a local map for road following and roundabouts.

  5. Simulation of cooperating robot manipulators on a mobile platform

    NASA Technical Reports Server (NTRS)

    Murphy, Steve H.; Wen, John T.; Saridis, George N.

    1990-01-01

    The dynamic equations of motion for two manipulators holding a common object on a freely moving mobile platform are developed. The full dynamic interactions from arms to platform and arm-tip to arm-tip are included in the formulation. The development of the closed chain dynamics allows for the use of any solution for the open topological tree of base and manipulator links. In particular, because the system has 18 degrees of freedom, recursive solutions for the dynamic simulation become more promising for efficient calculations of the motion. Simulation of the system is accomplished through a MATLAB program, and the response is visualized graphically using the SILMA Cimstation.

  6. Recognition and Classification of Road Condition on the Basis of Friction Force by Using a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Watanabe, Tatsuhito; Katsura, Seiichiro

    A person operating a mobile robot in a remote environment receives realistic visual feedback about the condition of the road on which the robot is moving. Categorization of the road condition is necessary to evaluate the conditions for safe and comfortable driving. For this purpose, the mobile robot should be capable of recognizing and classifying the condition of road surfaces. This paper proposes a method for recognizing the type of road surface on the basis of the friction between the mobile robot and the road surface. The friction is estimated by a disturbance observer, and a support vector machine is used to classify the surfaces. The support vector machine identifies the type of road surface using a feature vector determined from the arithmetic mean and variance of the torque values. These feature vectors are mapped onto a higher-dimensional space by using a kernel function. The validity of the proposed method is confirmed by experimental results.
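    As a rough sketch of the classification stage described above (not the authors' code), the mean and variance of an estimated torque signal can form a two-dimensional feature vector that a kernel support vector machine maps to a road-surface class; the training data below are synthetic.

        import numpy as np
        from sklearn.svm import SVC

        def torque_features(torque_window):
            """Arithmetic mean and variance of a window of torque estimates."""
            return [np.mean(torque_window), np.var(torque_window)]

        rng = np.random.default_rng(0)
        # Synthetic training windows: smooth surface (low variance) vs. rough surface (high variance).
        smooth = [torque_features(rng.normal(0.4, 0.02, 100)) for _ in range(30)]
        rough = [torque_features(rng.normal(0.7, 0.15, 100)) for _ in range(30)]
        X = np.array(smooth + rough)
        y = np.array([0] * 30 + [1] * 30)  # 0 = smooth, 1 = rough

        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)  # RBF kernel maps features to a higher-dimensional space
        new_window = rng.normal(0.68, 0.14, 100)
        print("predicted surface class:", clf.predict([torque_features(new_window)])[0])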

  7. Mobile robots traversability awareness based on terrain visual sensory data fusion

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir

    2007-04-01

    In this paper, we present methods that significantly improve the robot's awareness of its terrain traversability conditions. The terrain traversability awareness is achieved by association of terrain image appearances from different poses and fusion of information extracted from multi-modality imaging and range sensor data for localization and clustering of environment landmarks. Initially, we describe methods for extraction of salient features of the terrain for the purpose of landmark registration from two or more images taken from different via points along the trajectory of the robot. Image registration is applied as a means of overlaying two or more views of the same terrain scene taken from different viewpoints; the registration geometrically aligns the salient landmarks of two images (the reference and sensed images). A similarity matching technique is proposed for matching the terrain's salient landmarks. Secondly, we present three terrain classifier models, based on rules, a supervised neural network, and fuzzy logic, for classification of terrain condition under uncertainty and mapping of the robot's terrain perception to apt traversability measures. This paper addresses the technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We describe different methods for detection of salient terrain features based on image texture analysis techniques. We also present three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on

  8. Implementation of obstacle-avoidance control for an autonomous omni-directional mobile robot based on extension theory.

    PubMed

    Pai, Neng-Sheng; Hsieh, Hung-Hui; Lai, Yi-Chung

    2012-01-01

    The paper demonstrates a following robot with omni-directional wheels, which is able to take action to avoid obstacles. The robot design is based on both fuzzy and extension theory. Fuzzy theory was applied to tune the PWM signal of the motor revolution and to correct path-deviation issues encountered while the robot is moving. Extension theory was used to build a robot obstacle-avoidance model. Various mobile models were developed to handle different types of obstacles. The ultrasonic distance sensors mounted on the robot were used to estimate the distance to obstacles. If an obstacle is encountered, the correlation function is evaluated and the robot avoids the obstacle autonomously using the most appropriate mode. The effectiveness of the proposed approach was verified through several tracking experiments, which demonstrate the feasibility of the fuzzy path tracker as well as the extensible collision avoidance system. PMID:23202029

  9. Implementation of obstacle-avoidance control for an autonomous omni-directional mobile robot based on extension theory.

    PubMed

    Pai, Neng-Sheng; Hsieh, Hung-Hui; Lai, Yi-Chung

    2012-10-16

    The paper demonstrates a following robot with omni-directional wheels, which is able to take action to avoid obstacles. The robot design is based on both fuzzy and extension theory. Fuzzy theory was applied to tune the PWM signal of the motor revolution and to correct path-deviation issues encountered while the robot is moving. Extension theory was used to build a robot obstacle-avoidance model. Various mobile models were developed to handle different types of obstacles. The ultrasonic distance sensors mounted on the robot were used to estimate the distance to obstacles. If an obstacle is encountered, the correlation function is evaluated and the robot avoids the obstacle autonomously using the most appropriate mode. The effectiveness of the proposed approach was verified through several tracking experiments, which demonstrate the feasibility of the fuzzy path tracker as well as the extensible collision avoidance system.

  10. Implementation of Obstacle-Avoidance Control for an Autonomous Omni-Directional Mobile Robot Based on Extension Theory

    PubMed Central

    Pai, Neng-Sheng; Hsieh, Hung-Hui; Lai, Yi-Chung

    2012-01-01

    The paper demonstrates a following robot with omni-directional wheels, which is able to take action to avoid obstacles. The robot design is based on both fuzzy and extension theory. Fuzzy theory was applied to tune the PWM signal of the motor revolution and to correct path-deviation issues encountered while the robot is moving. Extension theory was used to build a robot obstacle-avoidance model. Various mobile models were developed to handle different types of obstacles. The ultrasonic distance sensors mounted on the robot were used to estimate the distance to obstacles. If an obstacle is encountered, the correlation function is evaluated and the robot avoids the obstacle autonomously using the most appropriate mode. The effectiveness of the proposed approach was verified through several tracking experiments, which demonstrate the feasibility of the fuzzy path tracker as well as the extensible collision avoidance system. PMID:23202029

  11. Task performance evaluation of asymmetric semiautonomous teleoperation of mobile twin-arm robotic manipulators.

    PubMed

    Malysz, Pawel; Sirouspour, Shahin

    2013-01-01

    A series of human factors experiments involving maneuvering and grasping tasks are carried out to evaluate the effectiveness of a novel asymmetric semiautonomous teleoperation (AST) control design framework for teleoperation of mobile twin-arm robotic manipulators. Simplified configurations are examined first to explore control strategies for different aspects of such teleoperation tasks. These include teleoperation of a nonholonomic mobile base, telemanipulation of a dual-arm robot, and dual-arm/dual-operator teleoperation task scenarios. In two sets of experiments with a planar nonholonomic mobile base, teleoperation via a 3DOF planar haptic interface with position mapping and force reflection of the nonholonomic constraint decreases task-completion-time (TCT) and reduces unwanted collisions. In dual-arm and dual-operator teleoperation maneuverability experiments, the assignment of decoupled and nonconflicting control frames reduces TCT and unwanted contacts. The use of so-called "soft" constraints via passive semiautonomous control reduces TCT and unwanted block drops in telegrasping experiments with a twin-arm manipulator. A final comprehensive experiment encompassing elements of the simplified configurations demonstrates the effectiveness of AST control framework in dual-operator teleoperation of a twin-arm mobile manipulator. PMID:24808400

  12. Time optimal trajectories for mobile robots with two independently driven wheels

    SciTech Connect

    Reister, D.B.; Pin, F.G.

    1992-03-01

    This paper addresses the problem of time-optimal motions for a mobile platform in a planar environment. The platform has two non-steerable independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin's maximum principle (PMP), we formally demonstrate that all time-optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the acceleration on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time-optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.
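    The switch-time parameterization described above lends itself to a simple forward simulation: given the switch times on each wheel, the bang-bang accelerations can be integrated to obtain the final platform configuration, which can then be searched over. The sketch below is a simplified illustration with unit wheel base and unit maximum acceleration, not the authors' planner.

        import numpy as np

        def simulate(switches_left, switches_right, a_max=1.0, base=1.0, T=4.0, dt=1e-3):
            """Integrate differential-drive kinematics under bang-bang wheel accelerations.
            Each wheel starts at +a_max and flips the sign of its acceleration at every
            listed switch time."""
            x = y = theta = vl = vr = 0.0
            for t in np.arange(0.0, T, dt):
                al = a_max * (-1) ** sum(t >= s for s in switches_left)
                ar = a_max * (-1) ** sum(t >= s for s in switches_right)
                vl += al * dt
                vr += ar * dt
                v = 0.5 * (vl + vr)          # forward speed
                w = (vr - vl) / base         # yaw rate
                x += v * np.cos(theta) * dt
                y += v * np.sin(theta) * dt
                theta += w * dt
            return x, y, theta, vl, vr

        # Example: two switches on the left wheel, one on the right.
        print(simulate(switches_left=[1.0, 3.0], switches_right=[2.0]))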

  13. Time optimal trajectories for mobile robots with two independently driven wheels

    SciTech Connect

    Reister, D.B.; Pin, F.G.

    1992-03-01

    This paper addresses the problem of time-optimal motions for a mobile platform in a planar environment. The platform has two non-steerable independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin's maximum principle (PMP), we formally demonstrate that all time-optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the acceleration on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time-optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.

  14. Mag-muBots: Magnetic micro-robots capable of mobility, manipulation, and modularity

    NASA Astrophysics Data System (ADS)

    Pawashe, Chytra Shashikant

    Micro-robots are mobile devices that operate in micro-scale environments, and have future applications, such as being used to manipulate or construct micro-devices, and being used as diagnostic and analysis tools in biological systems. Being sub-millimeter in size, micro-robots require very different approaches to fabricating, powering, and controlling them. As opposed to conventional large-scale robots, it is infeasible to integrate conventional-style motors, actuators, and power sources into micro-scale devices. In this work, the Magnetic Micro-Robot (Mag-muBot) is presented, which is a versatile permanent magnet-based mobile robot under 1 mm in all dimensions. External magnetic fields are employed to successfully deliver power and control to the Mag-muBot, which is mobile and can operate in both gases and liquids, and on unstructured surfaces. Its motion is achieved by oscillating magnetic fields, which induce a stick-slip walking behavior; these dynamics are modeled into a simulation that compares favorably to experiments. The mechanisms for the manipulation of micro-objects are also explored, where the Mag-muBot can directly push micro-objects by contact manipulation, or generate fluid boundary layers to manipulate micro-objects without direct contact. Examples of micro-object manipulation are also provided, where objects from 50 µm to 900 µm are shown to be manipulated. Additionally, control topics are explored such as addressing multiple Mag-muBots on a surface, which is accomplished by utilizing electrostatic forces generated by a specialized surface that selectively immobilizes individual Mag-muBots, allowing for decoupled serial locomotion of multiple Mag-muBots. Furthermore, autonomous control algorithms are developed such that the Mag-muBot can autonomously be positioned in the workspace, plan around obstacles, and efficiently manipulate micro-objects in the environment. Finally, a micro-scale reconfigurable modular robotic system is developed, based

  15. Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty

    NASA Astrophysics Data System (ADS)

    Armah, Stephen Kofi

    Autonomous control of mobile robots has attracted considerable attention of researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is the development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust and/or adaptive navigation control systems. Despite the large body of reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research work develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulation, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for the X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, with PD-feedback control used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output. Secondly, the white-box and black-box approaches are used to obtain a linearized

  16. Real-time multiple human perception with color-depth cameras on a mobile robot.

    PubMed

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object interaction, and human-human interaction. We conclude with the observation that, by incorporating the depth information and using modern techniques in new ways, we are able to create an
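    A minimal sketch of one preprocessing step mentioned above, removing the ground and ceiling planes and clustering the remaining points into detection candidates, might look as follows; the thresholds, the horizontal-plane assumption, and the use of DBSCAN are illustrative choices, not the authors' exact pipeline.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def remove_ground_and_ceiling(points, ground_z=0.0, ceiling_z=2.5, margin=0.1):
            """Keep points whose height lies strictly between the two assumed planes."""
            z = points[:, 2]
            keep = (z > ground_z + margin) & (z < ceiling_z - margin)
            return points[keep]

        def candidate_clusters(points, eps=0.3, min_points=10):
            """Group the remaining points into candidate clusters for a detector cascade."""
            labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
            return [points[labels == k] for k in set(labels) if k != -1]

        cloud = np.random.rand(5000, 3) * [4.0, 4.0, 2.5]  # synthetic room-sized point cloud
        clusters = candidate_clusters(remove_ground_and_ceiling(cloud))
        print(f"{len(clusters)} candidate clusters")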

  17. Real-time multiple human perception with color-depth cameras on a mobile robot.

    PubMed

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object interaction, and human-human interaction. We conclude with the observation that, by incorporating the depth information and using modern techniques in new ways, we are able to create an

  18. Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie

    2003-01-01

    A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.

  19. A telepresence mobile robot controlled with a noninvasive brain-computer interface.

    PubMed

    Escolano, Carlos; Antelis, Javier Mauricio; Minguez, Javier

    2012-06-01

    This paper reports an electroencephalogram-based brain-actuated telepresence system to provide a user with presence in remote environments through a mobile robot, with access to the Internet. This system relies on a P300-based brain-computer interface (BCI) and a mobile robot with autonomous navigation and camera orientation capabilities. The shared-control strategy is built by the BCI decoding of task-related orders (selection of visible target destinations or exploration areas), which can be autonomously executed by the robot. The system was evaluated using five healthy participants in two consecutive steps: 1) screening and training of participants and 2) preestablished navigation and visual exploration telepresence tasks. On the basis of the results, the following evaluation studies are reported: 1) technical evaluation of the device and its main functionalities and 2) the users' behavior study. The overall result was that all participants were able to complete the designed tasks, reporting no failures, which shows the robustness of the system and its feasibility to solve tasks in real settings where joint navigation and visual exploration were needed. Furthermore, the participants showed great adaptation to the telepresence system.

  20. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    PubMed Central

    Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of measurements that are provided, delayed and OOS, by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  1. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of measurements that are provided, delayed and OOS, by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.

  2. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of measurements that are provided, delayed and OOS, by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  3. Application of historical mobility testing to sensor-based robotic performance

    NASA Astrophysics Data System (ADS)

    Willoughby, William E.; Jones, Randolph A.; Mason, George L.; Shoop, Sally A.; Lever, James H.

    2006-05-01

    The USA Engineer Research and Development Center (ERDC) has conducted on-/off-road experimental field testing with full-sized and scale-model military vehicles for more than fifty years. Some 4000 acres of local terrain are available for tailored field evaluations or verification/validation of future robotic designs in a variety of climatic regimes. Field testing and data collection procedures, as well as techniques for quantifying terrain in engineering terms, have been developed and refined into algorithms and models for predicting vehicle-terrain interactions and the resulting forces or speeds of military-sized vehicles. Based on recent experiments with Matilda, Talon, and Pacbot, these predictive capabilities appear to be relevant to most robotic systems currently in development. Utilization of current testing capabilities with sensor-based vehicle drivers, or use of the procedures for terrain quantification from sensor data, would immediately apply some fifty years of historical knowledge to the development, refinement, and implementation of future robotic systems. Additionally, translation of sensor-collected terrain data into engineering terms would allow assessment of robotic performance prior to deployment of the actual system and ensure maximum system performance in the theater of operation.

  4. Applying High-Speed Vision Sensing to an Industrial Robot for High-Performance Position Regulation under Uncertainties

    PubMed Central

    Huang, Shouren; Bergström, Niklas; Yamakawa, Yuji; Senoo, Taku; Ishikawa, Masatoshi

    2016-01-01

    It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot’s end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed. PMID:27483274

  5. Reinforcement function design and bias for efficient learning in mobile robots

    SciTech Connect

    Touzet, C.; Santos, J.M.

    1998-06-01

    The main paradigm in the sub-symbolic learning robot domain is the reinforcement learning method. Various techniques have been developed to deal with the memorization/generalization problem, demonstrating the superior ability of artificial neural network implementations. In this paper, the authors address the issue of designing the reinforcement so as to optimize the exploration part of the learning. They also present and summarize work relating to the use of bias intended to achieve an effective synthesis of the desired behavior. Demonstrative experiments involving a self-organizing map implementation of Q-learning and real mobile robots (Nomad 200 and Khepera) in a task of obstacle-avoidance behavior synthesis are described. 3 figs., 5 tabs.
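    For readers unfamiliar with the underlying learning rule, the tabular one-step Q-learning backup that such systems build on is summarized in the sketch below (the cited work generalizes over states with a self-organizing map; the plain table and the reward convention here are illustrative assumptions).

        import numpy as np

        n_states, n_actions = 64, 4          # e.g., coarse sonar patterns x steering commands
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, epsilon = 0.1, 0.9, 0.1

        def q_update(s, a, r, s_next):
            """One-step Q-learning backup."""
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

        def choose_action(s, rng=np.random.default_rng()):
            """Epsilon-greedy action selection for exploration."""
            if rng.random() < epsilon:
                return int(rng.integers(n_actions))
            return int(Q[s].argmax())

        # Example backup: a collision (reward -1) after taking action 2 in state 5.
        q_update(s=5, a=2, r=-1.0, s_next=9)
        print(choose_action(5))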

  6. Global coverage measurement planning strategies for mobile robots equipped with a remote gas sensor.

    PubMed

    Arain, Muhammad Asif; Trincavelli, Marco; Cirillo, Marcello; Schaffernicht, Erik; Lilienthal, Achim J

    2015-01-01

    The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions. PMID:25803707

  7. Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks.

    PubMed

    Fu, Bo; Xiao, Yang; Liang, Xiannuan; Philip Chen, C L

    2015-01-01

    Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates with higher cognitive function are valuable for modeling the increasingly complex problems in engineering. Understanding primates' social and communication systems, and applying what is learned from them to engineering domains is likely to inspire solutions to a number of problems. This paper presents a novel bio-inspired approach to determine group size by researching and simulating primate society. Group size does matter for both primate society and digital entities. It is difficult to determine how to group mobile sensors/robots that patrol in a large area when many factors are considered such as patrol efficiency, wireless interference, coverage, inter/intragroup communications, etc. This paper presents a simulation-based theoretical study on patrolling strategies for robot groups with the comparison of large and small groups through simulations and theoretical results. PMID:24846688

  8. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  9. Mobile-robot pose estimation and environment mapping using an extended Kalman filter

    NASA Astrophysics Data System (ADS)

    Klančar, Gregor; Teslić, Luka; Škrjanc, Igor

    2014-12-01

    In this paper an extended Kalman filter (EKF) is used in the simultaneous localisation and mapping (SLAM) of a four-wheeled mobile robot in an indoor environment. The robot's pose and environment map are estimated from incremental encoders and from laser-range-finder (LRF) sensor readings. The map of the environment consists of line segments, which are estimated from the LRF's scans. A good state convergence of the EKF is obtained using the proposed methods for the input- and output-noise covariance matrices' estimation. The output-noise covariance matrix, consisting of the observed-line-features' covariances, is estimated from the LRF's measurements using the least-squares method. The experimental results from the localisation and SLAM experiments in the indoor environment show the applicability of the proposed approach. The main paper contribution is the improvement of the SLAM algorithm convergence due to the noise covariance matrices' estimation.
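    One building block of such line-based SLAM, fitting a line to a cluster of laser-range-finder points and obtaining the spread of the orthogonal residuals (from which an output-noise covariance can be estimated), can be sketched as follows; the (alpha, rho) parameterization and the synthetic wall data are assumptions for illustration.

        import numpy as np

        def fit_line(points):
            """Total-least-squares line fit; returns the normal angle alpha, the distance rho,
            and the variance of the orthogonal residuals."""
            centroid = points.mean(axis=0)
            d = points - centroid
            _, vecs = np.linalg.eigh(d.T @ d)      # smallest-eigenvalue eigenvector = line normal
            normal = vecs[:, 0]
            alpha = np.arctan2(normal[1], normal[0])
            rho = centroid @ normal
            residuals = d @ normal
            return alpha, rho, residuals.var()

        # Synthetic scan points along a wall y = 0.5 x + 1 with small measurement noise.
        x = np.linspace(0.0, 2.0, 50)
        pts = np.column_stack([x, 0.5 * x + 1.0 + 0.01 * np.random.randn(50)])
        print(fit_line(pts))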

  10. Path planning for mobile robots based on visibility graphs and A* algorithm

    NASA Astrophysics Data System (ADS)

    Contreras, Juan D.; Martínez S., Fernando; Martínez S., Fredy H.

    2015-07-01

    One of the most studied issues in robotics in recent years has been the development of path planning strategies for mobile robots under static and observable conditions. This is an open problem without pre-defined rules (non-heuristic), which requires measuring the state of the environment, extracting useful information, and using an algorithm to select the best path. This paper proposes a simple and efficient geometric path planning strategy supported by digital image processing. The image of the environment is processed in order to identify obstacles, and thus the free space for navigation. Then, using visibility graphs, the possible navigation paths guided by the vertices of obstacles are produced. Finally, the A* algorithm is used to find the best possible path. The proposed alternative is evaluated by simulation on a large set of test environments, showing in all cases its ability to find a plausible collision-free path.
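    The final planning step described above, A* search with a Euclidean heuristic over a visibility graph whose nodes are obstacle vertices plus the start and goal, is sketched below on a small synthetic graph; building the visibility edges from the processed image is assumed to have been done already.

        import heapq, math

        def astar(nodes, edges, start, goal):
            """nodes: {name: (x, y)}; edges: {name: [(neighbor, cost), ...]}."""
            h = lambda n: math.dist(nodes[n], nodes[goal])   # Euclidean heuristic
            open_set = [(h(start), 0.0, start, [start])]
            best = {}
            while open_set:
                f, g, node, path = heapq.heappop(open_set)
                if node == goal:
                    return path, g
                if best.get(node, float("inf")) <= g:
                    continue
                best[node] = g
                for nb, cost in edges.get(node, []):
                    heapq.heappush(open_set, (g + cost + h(nb), g + cost, nb, path + [nb]))
            return None, float("inf")

        # Tiny synthetic visibility graph: start S, obstacle vertices A and B, goal G.
        nodes = {"S": (0, 0), "A": (1, 2), "B": (2, -1), "G": (3, 1)}
        dist = lambda a, b: math.dist(nodes[a], nodes[b])
        edges = {"S": [("A", dist("S", "A")), ("B", dist("S", "B"))],
                 "A": [("G", dist("A", "G"))],
                 "B": [("G", dist("B", "G"))]}
        print(astar(nodes, edges, "S", "G"))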

  11. Pure-Pursuit Reactive Path Tracking for Nonholonomic Mobile Robots with a 2D Laser Scanner

    NASA Astrophysics Data System (ADS)

    Morales, Jesús; Martínez, Jorge L.; Martínez, María A.; Mandow, Anthony

    2009-12-01

    Due to its simplicity and efficiency, the pure-pursuit path tracking method has been widely employed for planned navigation of nonholonomic ground vehicles. In this paper, we investigate the application of this technique for reactive tracking of paths that are implicitly defined by perceived environmental features. Goal points are obtained through an efficient interpretation of range data from an onboard 2D laser scanner to follow persons, corridors, and walls. Moreover, this formulation allows a robotic mission to be composed of a combination of different types of path segments. These techniques have been successfully tested on the tracked mobile robot Auriga in an indoor environment.
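    For reference, the pure-pursuit law that this reactive formulation builds on reduces to a one-line curvature command once a goal point is expressed in the robot frame: curvature = 2 * y_goal / L^2, with L the distance to the goal point. The sketch below converts that curvature into differential-drive wheel speeds; the goal point, speed, and wheel base are placeholder values (in the paper, goal points come from interpreting the 2D laser scans).

        import math

        def pure_pursuit(goal_x, goal_y, v=0.5, wheel_base=0.5):
            """Return (v_left, v_right) wheel speeds that steer toward the goal point,
            which is given in the robot frame (x forward, y left), in meters."""
            L = math.hypot(goal_x, goal_y)        # lookahead distance to the goal point
            curvature = 2.0 * goal_y / (L ** 2)   # pure-pursuit curvature command
            omega = v * curvature                 # resulting yaw rate at forward speed v
            v_left = v - 0.5 * wheel_base * omega
            v_right = v + 0.5 * wheel_base * omega
            return v_left, v_right

        # Example: goal point 1.0 m ahead and 0.3 m to the left of the robot.
        print(pure_pursuit(1.0, 0.3))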

  12. Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks.

    PubMed

    Fu, Bo; Xiao, Yang; Liang, Xiannuan; Philip Chen, C L

    2015-01-01

    Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates with higher cognitive function are valuable for modeling the increasingly complex problems in engineering. Understanding primates' social and communication systems, and applying what is learned from them to engineering domains is likely to inspire solutions to a number of problems. This paper presents a novel bio-inspired approach to determine group size by researching and simulating primate society. Group size does matter for both primate society and digital entities. It is difficult to determine how to group mobile sensors/robots that patrol in a large area when many factors are considered such as patrol efficiency, wireless interference, coverage, inter/intragroup communications, etc. This paper presents a simulation-based theoretical study on patrolling strategies for robot groups with the comparison of large and small groups through simulations and theoretical results.

  13. Global Coverage Measurement Planning Strategies for Mobile Robots Equipped with a Remote Gas Sensor

    PubMed Central

    Arain, Muhammad Asif; Trincavelli, Marco; Cirillo, Marcello; Schaffernicht, Erik; Lilienthal, Achim J.

    2015-01-01

    The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions. PMID:25803707

  14. Sensor integration using concurrent computing on-board the ORNL mobile robot

    SciTech Connect

    Mann, R.C.; Jones, J.P.; Beckerman, M.; Glover, C.W.; Farkas, L.; Einstein, J.R.

    1989-01-01

    The mobile robot prototypes developed at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) are equipped with sonar sensors, CCD cameras and a laser range camera that are used to support autonomous navigation and inspection tasks in an a priori unknown and unstructured dynamic environment. This paper summarizes work directed at extracting information from data collected with these sensors and integrating it, in order to produce reliable descriptions of the robot's environment. The approach consists in studying different world models and mappings among them, sensor models and parallel algorithms for sensor information processing, and appropriate integration strategies. Specifically, the paper describes the integration of two-dimensional vision and sonar range information, and the integration of laser range and luminance images. 16 refs., 3 figs.

  15. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    PubMed Central

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. PMID:22319297

  16. Lane identification and path planning for autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    McKeon, Robert T.; Paulik, Mark; Krishnan, Mohan

    2006-10-01

    This work has been performed in conjunction with the University of Detroit Mercy's (UDM) ECE Department autonomous vehicle entry in the 2006 Intelligent Ground Vehicle Competition (www.igvc.org). The IGVC challenges engineering students to design autonomous vehicles and compete in a variety of unmanned mobility competitions. The course to be traversed in the competition consists of a lane demarcated by painted lines on grass with the possibility of one of the two lines being deliberately left out over segments of the course. The course also consists of other challenging artifacts such as sandpits, ramps, potholes, and colored tarps that alter the color composition of scenes, and obstacles set up using orange and white construction barrels. This paper describes a composite lane edge detection approach that uses three algorithms to implement noise filters enabling increased removal of noise prior to the application of image thresholding. The first algorithm uses a row-adaptive statistical filter to establish an intensity floor followed by a global threshold based on a reverse cumulative intensity histogram and a priori knowledge about lane thickness and separation. The second method first improves the contrast of the image by implementing an arithmetic combination of the blue plane (RGB format) and a modified saturation plane (HSI format). A global threshold is then applied based on the mean of the intensity image and a user-defined offset. The third method applies the horizontal component of the Sobel mask to a modified gray scale of the image, followed by a thresholding method similar to the one used in the second method. The Hough transform is applied to each of the resulting binary images to select the most probable line candidates. Finally, a heuristics-based confidence interval is determined, and the results sent on to a separate fuzzy polar-based navigation algorithm, which fuses the image data with that produced by a laser scanner (for obstacle detection).
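    As an illustration of the third variant described above (horizontal Sobel component, mean-based thresholding, then a Hough transform), a minimal OpenCV sketch is given below; the parameter values and the synthetic test image are assumptions, not the competition-tuned pipeline.

        import cv2
        import numpy as np

        def detect_lane_lines(image_bgr, offset=30):
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal Sobel component
            response = cv2.convertScaleAbs(sobel_x)
            thresh = float(response.mean()) + offset               # mean-based threshold with offset
            _, binary = cv2.threshold(response, thresh, 255, cv2.THRESH_BINARY)
            lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=50,
                                    minLineLength=40, maxLineGap=10)
            return [] if lines is None else [l[0] for l in lines]

        # Synthetic test image: one bright painted line on a dark background.
        img = np.zeros((240, 320, 3), np.uint8)
        cv2.line(img, (50, 220), (200, 20), (255, 255, 255), 5)
        print(f"{len(detect_lane_lines(img))} line candidates")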

  17. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321
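    The voxel-based flag map described above can be sketched as a simple hash of occupied voxel indices: each incoming point is quantized to a 3D grid cell and kept only if that cell has not been flagged yet, which removes redundant points before database insertion. The voxel size below is an assumed parameter.

        import numpy as np

        class VoxelFlagMap:
            def __init__(self, voxel_size=0.1):
                self.voxel_size = voxel_size
                self.flags = set()               # indices of voxels already occupied

            def insert(self, points):
                """Return only the points that fall into previously empty voxels."""
                kept = []
                for p in points:
                    key = tuple(np.floor(p / self.voxel_size).astype(int))
                    if key not in self.flags:
                        self.flags.add(key)
                        kept.append(p)
                return np.array(kept)

        flag_map = VoxelFlagMap(voxel_size=0.1)
        scan = np.random.rand(10000, 3) * 5.0    # synthetic scan in a 5 m cube
        unique_points = flag_map.insert(scan)
        print(f"kept {len(unique_points)} of {len(scan)} points")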

  18. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  19. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.

  20. Fast online learning of control regime transitions for adaptive robotic mobility

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian

    2012-06-01

    We introduce a new framework, Model Transition Control (MTC), that models robot control problems as sets of linear control regimes linked by nonlinear transitions, and a new learning algorithm, Dynamic Threshold Learning (DTL), that learns the boundaries of these control regimes in real-time. We demonstrate that DTL can learn to prevent understeer and oversteer while controlling a simulated high-speed vehicle. We also show that DTL can enable an iRobot PackBot to avoid rollover in rough terrain and to actively shift its center-of-gravity to maintain balance when climbing obstacles. In all cases, DTL is able to learn control regime boundaries in a few minutes, often with single-digit numbers of learning trials.

  1. Development and training of a learning expert system in an autonomous mobile robot via simulation

    SciTech Connect

    Spelt, P.F.; Lyness, E.; DeSaussure, G. (Center for Engineering Systems Advanced Research)

    1989-11-01

    The Center for Engineering Systems Advanced Research (CESAR) conducts basic research in the area of intelligent machines. Recently at CESAR, a learning expert system was created to operate on board an autonomous robot working at a process control panel. The authors discuss the two-computer simulation system used to create, evaluate, and train this learning system. The simulation system has a graphics display of the current status of the process being simulated, and the same program that does the simulating also drives the actual control panel. Simulation results were validated on the actual robot. The speed and safety benefits of using a computerized simulator to train a learning computer, and future uses of the simulation system, are discussed.

  2. Mobile sailing robot for automatic estimation of fish density and monitoring water quality

    PubMed Central

    2013-01-01

    Introduction The paper presents the methodology and the algorithm developed to analyze sonar images, focused on fish detection in small water bodies and on measurement of their parameters: volume, depth, and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method The measurement method for estimating the number of fish using the automatic robot is based on a sequential count of fish occurrences along a set trajectory. The sonar data analysis concerned automatic recognition of fish using image analysis and processing methods. Results An image analysis algorithm, a mobile robot with its control in the 2.4 GHz band, and fully encrypted communication with the data archiving station were developed as part of this study. For the three model fish ponds where verification of fish catches was carried out (548, 171, and 226 individuals), the measurement error of the described method did not exceed 8%. Summary The robot, together with the developed software, can operate remotely in a variety of harsh weather and environmental conditions; it is fully automated and can be remotely controlled over the Internet. The designed system provides the spatial location of fish (GPS coordinates and depth). The purpose of the robot is non-invasive measurement of the number of fish in water reservoirs and measurement of the quality of drinking water consumed by humans, especially where local sources of pollution could significantly affect the quality of water collected for treatment and where access is difficult. Used systematically and equipped with appropriate sensors, the robot can be part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) that could endanger their health. PMID:23815984

  3. A brain-machine interface to navigate a mobile robot in a planar workspace: enabling humans to fly simulated aircraft with EEG.

    PubMed

    Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy

    2013-03-01

    This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.

  4. A brain-machine interface to navigate a mobile robot in a planar workspace: enabling humans to fly simulated aircraft with EEG.

    PubMed

    Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy

    2013-03-01

    This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals. PMID:23268384

  5. Remotely controlling of mobile robots using gesture captured by the Kinect and recognized by machine learning method

    NASA Astrophysics Data System (ADS)

    Hsu, Roy Chaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting

    2013-01-01

    The main purpose of this paper is to use a machine learning method together with the Kinect and its body-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot to perform certain actions via Bluetooth. The experimental results show that the designed mobile robot remote control system achieves, on average, more than 96% accuracy in identifying 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
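
    A back-propagation network over Kinect skeleton features can be prototyped in a few lines. The sketch below is a stand-in using scikit-learn with made-up feature extraction and synthetic gesture labels (not the authors' code; the torso-relative flattening and network size are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def skeleton_to_features(joints):
    """Flatten (n_joints, 3) skeleton coordinates relative to the torso joint."""
    joints = np.asarray(joints, dtype=float)
    return (joints - joints[0]).ravel()   # joint 0 assumed to be the torso

# Hypothetical training data: 20 joints x 3 coords per sample, 7 gesture classes.
rng = np.random.default_rng(0)
X = np.array([skeleton_to_features(rng.normal(size=(20, 3))) for _ in range(700)])
y = rng.integers(0, 7, size=700)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)                      # back-propagation training
command = clf.predict(X[:1])[0]    # recognized gesture mapped to a robot command
print(command)
```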

  6. Framework for the implementation of vision-based fuzzy logic navigational algorithms for a mobile robot

    NASA Astrophysics Data System (ADS)

    Akec, John A.; Steiner, Simon J.

    1996-10-01

    Fuzzy logic has recently been promoted by many researchers for the design of navigational algorithms for mobile robots. The approach fits in well with a behavior-based autonomous systems framework, where common-sense rules can naturally be formulated to create rule-based navigational algorithms, and conflicts between behaviors may be resolved by assigning weights to different rules in the rule base. The applicability of these techniques has been demonstrated for robots that use sensor devices such as ultrasonic and infrared detectors. However, the implementation issues relating to the development of vision-based, fuzzy-logic navigation algorithms do not appear, as yet, to have been fully explored. The salient features that need to be extracted from an image for recognition or collision avoidance purposes are very much application dependent, yet the needs of an autonomous mobile vehicle cannot be fully known a priori. Similarly, the issues relating to understanding a vision-generated image based on geometric models of the observed objects have an important role to play, but these issues have not yet been addressed or incorporated into the fuzzy-logic-based algorithms proposed for navigational control. This paper attempts to address these issues and to propose a suitable framework to clarify the implementation of navigation algorithms for mobile robots that use vision sensors and fuzzy logic for map building, target location, and collision avoidance. The scope for application of this approach is demonstrated.

  7. The smart IV stand design through human tracking mobile robot system by CDS cell

    NASA Astrophysics Data System (ADS)

    Jo, Seong-Hyeon; Choe, Jong-Hun; Seo, Suk-Hyun; Kim, Won-Hoe; Lee, Hong-Kyu; Park, Se-Ho

    2015-03-01

    Vision-based object recognition as a general interface is costly and leads to complicated problems. This research proposes an Arduino-based human tracking system in which a laser and CdS-cell arrangement tracks a wire crossed by the laser line. In this paper, we review the existing literature on recognition-based application systems, which spans many interdisciplinary studies. We conclude that our method not only reduces cost but also offers an easy way to trace a person's location with the use of the wire. Furthermore, we apply several recognition systems, including a CdS-based mobile robot, to an IV stand used effectively at the hospital.
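
    As a rough illustration of how a pair of CdS cells on either side of the laser line could steer such a stand (a hypothetical differential-drive rule; the gains and readings are assumptions, not the authors' Arduino firmware):

```python
def wheel_speeds(left_cds, right_cds, base_speed=0.3, gain=0.8):
    """Differential-drive steering from two photoresistor readings (0..1).
    A brighter reading on one side means the laser line has drifted that way,
    so the robot turns toward it to keep the line centred."""
    error = right_cds - left_cds           # >0: line is to the right
    left = base_speed + gain * error       # speed up the left wheel to turn right
    right = base_speed - gain * error
    return max(0.0, left), max(0.0, right)

print(wheel_speeds(0.2, 0.6))   # line drifted right -> turn toward it
print(wheel_speeds(0.5, 0.5))   # centred -> drive straight
```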

  8. Mobile Robots for Localizing Gas Emission Sources on Landfill Sites: Is Bio-Inspiration the Way to Go?

    PubMed Central

    Hernandez Bennetts, Victor; Lilienthal, Achim J.; Neumann, Patrick P.; Trincavelli, Marco

    2011-01-01

    Roboticists often take inspiration from animals for designing sensors, actuators, or algorithms that control the behavior of robots. Bio-inspiration is motivated with the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odor plume. In particular the task of tracking an odor plume up to its source has nearly exclusively been addressed using biologically inspired algorithms and robots have been developed, for example, to mimic the behavior of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behavior of animals, can be usefully “translated” into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically inspired algorithms. PMID:22319493

  9. Mobile robots for localizing gas emission sources on landfill sites: is bio-inspiration the way to go?

    PubMed

    Hernandez Bennetts, Victor; Lilienthal, Achim J; Neumann, Patrick P; Trincavelli, Marco

    2011-01-01

    Roboticists often take inspiration from animals for designing sensors, actuators, or algorithms that control the behavior of robots. Bio-inspiration is motivated with the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odor plume. In particular the task of tracking an odor plume up to its source has nearly exclusively been addressed using biologically inspired algorithms and robots have been developed, for example, to mimic the behavior of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behavior of animals, can be usefully "translated" into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically inspired algorithms.

  10. A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot

    NASA Astrophysics Data System (ADS)

    Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan

    2010-02-01

    The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.

  11. Fuzzy Mobile-Robot Positioning in Intelligent Spaces Using Wireless Sensor Networks

    PubMed Central

    Herrero, David; Martínez, Humberto

    2011-01-01

    This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using Wireless Sensor Networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated by radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. Besides, the proposed approach is compared with a probabilistic technique showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods. PMID:22346673
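
    As an illustration of why fuzzy modeling suits such noisy RSSI ranging, the sketch below converts a signal-strength reading to a crisp distance with a generic log-distance path-loss model and then grades it with a triangular membership function (the constants and set boundaries are assumptions, not the paper's model):

```python
import math

def rssi_to_distance(rssi_dbm, tx_power=-40.0, n=2.5):
    """Log-distance path-loss model: distance in metres from an RSSI reading."""
    return 10 ** ((tx_power - rssi_dbm) / (10 * n))

def triangular_membership(x, a, b, c):
    """Degree to which x belongs to the fuzzy set 'about b' with support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

d = rssi_to_distance(-63.0)                       # crisp but unreliable range estimate
print(round(d, 2), triangular_membership(d, 1.0, 5.0, 9.0))  # fuzzy 'medium range' grade
```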

  12. Coordinated Control of Slip Ratio for Wheeled Mobile Robots Climbing Loose Sloped Terrain

    PubMed Central

    Li, Zhengcai; Wang, Yang

    2014-01-01

    A challenging problem faced by wheeled mobile robots (WMRs) such as planetary rovers traversing loose sloped terrain is the inevitable longitudinal slip suffered by the wheels, which often leads to their deviation from the predetermined trajectory, reduced drive efficiency, and possible failures. This study investigates this problem using terramechanics analysis of the wheel-soil interaction. First, a slope-based wheel-soil interaction terramechanics model is built, and an online slip coordinated algorithm is designed based on the goal of optimal drive efficiency. An equation of state is established using the coordinated slip as the desired input and the actual slip as a state variable. To improve the robustness and adaptability of the control system, an adaptive neural network is designed. Analytical results and those of a simulation using Vortex demonstrate the significantly improved mobile performance of the WMR using the proposed control system. PMID:25276849
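
    For reference, the longitudinal slip ratio of a driven wheel is commonly defined from the wheel's angular speed and the vehicle's forward speed. The sketch below shows that quantity plus a trivial proportional correction toward a desired slip (a generic definition and toy controller, not the paper's adaptive neural control system):

```python
def slip_ratio(omega, radius, v):
    """Longitudinal slip of a driving wheel: s = (omega*r - v) / (omega*r)."""
    wheel_speed = omega * radius
    return 0.0 if wheel_speed == 0 else (wheel_speed - v) / wheel_speed

def adjust_omega(omega, radius, v, s_desired, k=0.5):
    """Proportional correction of the commanded wheel speed toward a desired slip."""
    error = s_desired - slip_ratio(omega, radius, v)
    return omega * (1.0 + k * error)

omega, r, v = 10.0, 0.15, 1.2                      # rad/s, m, m/s
print(round(slip_ratio(omega, r, v), 3))           # current slip
print(round(adjust_omega(omega, r, v, 0.15), 3))   # nudged wheel speed command
```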

  13. Image processing for navigation on a mobile embedded platform: design of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Loose, Harald; Lemke, Christiane; Papazov, Chavdar

    2006-02-01

    This paper deals with intelligent mobile platforms connected to a camera controlled by a small hardware-platform called RCUBE. This platform is able to provide features of a typical actuator-sensor board with various inputs and outputs as well as computing power and image recognition capabilities. Several intelligent autonomous RCUBE devices can be equipped and programmed to participate in the BOSPORUS network. These components form an intelligent network for gathering sensor and image data, sensor data fusion, navigation and control of mobile platforms. The RCUBE platform provides a standalone solution for image processing, which will be explained and presented. It plays a major role for several components in a reference implementation of the BOSPORUS system. On the one hand, intelligent cameras will be positioned in the environment, analyzing the events from a fixed point of view and sharing their perceptions with other components in the system. On the other hand, image processing results will contribute to reliable navigation of a mobile system, which is crucially important. Fixed landmarks and other objects appropriate for determining the position of a mobile system can be recognized. For navigation, other methods are added, e.g., GPS calculations and odometry.

  14. Floor Covering and Surface Identification for Assistive Mobile Robotic Real-Time Room Localization Application

    PubMed Central

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-01-01

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification. PMID:24351647
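
    The classification step itself can use any off-the-shelf technique on the low-dimensional texture-plus-color feature vector. The sketch below is a k-nearest-neighbour stand-in on synthetic features (the feature layout and class counts are assumptions, not the authors' classifier), illustrating why the pipeline remains real-time friendly:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature vector per sample: [texture_energy, R, G, B, clear] readings
rng = np.random.default_rng(1)
X_train = rng.random((350, 5))                 # 35 floor classes x 10 samples each
y_train = np.repeat(np.arange(35), 10)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

sample = rng.random((1, 5))                    # one fast sensor reading
print(clf.predict(sample)[0])                  # coarse room / flooring class
```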

  15. Floor covering and surface identification for assistive mobile robotic real-time room localization application.

    PubMed

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-01-01

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification. PMID:24351647

  16. Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI

    PubMed Central

    2016-01-01

    Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system. PMID:27528864

  17. Usability testing of a mobile robotic system for in-home telerehabilitation.

    PubMed

    Boissy, Patrick; Brière, Simon; Corriveau, Hélène; Grant, Andrew; Lauria, Michel; Michaud, François

    2011-01-01

    Mobile robots designed to enhance telepresence in support of telehealth services are being considered for numerous applications. TELEROBOT is a teleoperated mobile robotic platform equipped with videoconferencing capabilities and designed to be used in a home environment. In this study, learnability of the system's teleoperation interface and controls was evaluated with ten rehabilitation professionals during four training sessions in a laboratory environment and in an unknown home environment, while executing a standardized evaluation protocol typically used in home care. Results show that the novice teleoperators' performance on two of the four metrics used (number of commands and total time) improved significantly across training sessions (ANOVAs, p<0.05), and that performance on these metrics in the last training session reflected the teleoperation abilities seen in the unknown home environment during navigation tasks (r=0.77 and 0.60). With only 4 hours of training, rehabilitation professionals were able to learn to teleoperate TELEROBOT successfully. However, teleoperation performance remained significantly less efficient than that of an expert. Under the home task condition (navigating the home environment from one point to another as fast as possible), this translated to completion times between 350 seconds (best performance) and 850 seconds (worst performance). Improvements in other usability aspects of the system will be needed to meet the requirements of in-home telerehabilitation.

  18. Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI.

    PubMed

    Stawicki, Piotr; Gembler, Felix; Volosyak, Ivan

    2016-01-01

    Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.

  19. Using a LRF sensor in the Kalman-filtering-based localization of a mobile robot.

    PubMed

    Teslić, Luka; Skrjanc, Igor; Klancar, Gregor

    2010-01-01

    This paper deals with the problem of estimating the output-noise covariance matrix that is involved in the localization of a mobile robot. The extended Kalman filter (EKF) is used to localize the mobile robot with a laser range finder (LRF) sensor in an environment described with line segments. The covariances of the observed environment lines, which compose the output-noise covariance matrix in the correction step of the EKF, are the result of the noise arising from a range sensor's (e.g., an LRF) distance and angle measurements. A method for estimating the covariances of the line parameters based on classic least squares (LSQ) is proposed. This method is compared with the method resulting from the orthogonal LSQ in terms of computational complexity. The results of the comparison show that the use of classic LSQ instead of orthogonal LSQ reduces the number of computations in a localization algorithm, which is part of a SLAM (simultaneous localization and mapping) algorithm. The statistical accuracy of both methods is also compared by simulating the LRF's measurements, and the comparison confirms the efficiency of the proposed approach.
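
    For orientation, the place where the estimated line covariances enter is the output-noise covariance R of the EKF correction step. A generic correction update (the standard EKF equations with toy numbers, not the paper's specific line-measurement model) looks like this:

```python
import numpy as np

def ekf_correct(x, P, z, h, H, R):
    """Standard EKF correction: state x, covariance P, measurement z,
    predicted measurement h(x), Jacobian H, and output-noise covariance R
    (here R would be assembled from the estimated line-parameter covariances)."""
    y = z - h                                   # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 3-state pose [x, y, theta] corrected by one 2D line observation [angle, distance]
x = np.array([1.0, 2.0, 0.1]); P = np.eye(3) * 0.05
z = np.array([0.52, 3.1]); h = np.array([0.50, 3.0])
H = np.array([[0.0, 0.0, 1.0], [-0.5, -0.8, 0.0]])
R = np.diag([1e-3, 4e-3])                       # from the line-fit covariance estimation
print(ekf_correct(x, P, z, h, H, R)[0])
```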

  20. Using a LRF sensor in the Kalman-filtering-based localization of a mobile robot.

    PubMed

    Teslić, Luka; Skrjanc, Igor; Klancar, Gregor

    2010-01-01

    This paper deals with the problem of estimating the output-noise covariance matrix that is involved in the localization of a mobile robot. The extended Kalman filter (EKF) is used to localize the mobile robot with a laser range finder (LRF) sensor in an environment described with line segments. The covariances of the observed environment lines, which compose the output-noise covariance matrix in the correction step of the EKF, are the result of the noise arising from a range sensor's (e.g., an LRF) distance and angle measurements. A method for estimating the covariances of the line parameters based on classic least squares (LSQ) is proposed. This method is compared with the method resulting from the orthogonal LSQ in terms of computational complexity. The results of the comparison show that the use of classic LSQ instead of orthogonal LSQ reduces the number of computations in a localization algorithm, which is part of a SLAM (simultaneous localization and mapping) algorithm. The statistical accuracy of both methods is also compared by simulating the LRF's measurements, and the comparison confirms the efficiency of the proposed approach. PMID:19828146

  1. Human-Centered Design and Evaluation of Haptic Cueing for Teleoperation of Multiple Mobile Robots.

    PubMed

    Son, Hyoung Il; Franchi, Antonio; Chuang, Lewis L; Kim, Junsuk; Bulthoff, Heinrich H; Giordano, Paolo Robuffo

    2013-04-01

    In this paper, we investigate the effect of haptic cueing on a human operator's performance in the field of bilateral teleoperation of multiple mobile robots, particularly multiple unmanned aerial vehicles (UAVs). Two aspects of human performance are deemed important in this area, namely, the maneuverability of mobile robots and the perceptual sensitivity to the remote environment. We introduce metrics that allow us to address these aspects in two psychophysical studies, which are reported here. Three fundamental haptic cue types were evaluated. The Force cue conveys information on the proximity of the commanded trajectory to obstacles in the remote environment. The Velocity cue represents the mismatch between the commanded and actual velocities of the UAVs and can implicitly provide a rich amount of information regarding the actual behavior of the UAVs. Finally, the Velocity+Force cue is a linear combination of the two. Our experimental results show that, while maneuverability is best supported by the Force cue feedback, perceptual sensitivity is best served by the Velocity cue feedback. In addition, we show that large gains in the haptic feedback do not always guarantee an enhancement in the teleoperator's performance.

  2. Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI.

    PubMed

    Stawicki, Piotr; Gembler, Felix; Volosyak, Ivan

    2016-01-01

    Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system. PMID:27528864

  3. Recognition of 3D objects for autonomous mobile robot's navigation in automated shipbuilding

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Cho, Hyungsuck

    2007-10-01

    Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting quality measurement, the harsh painting environment, and the difficulty of robot navigation. However, painting automation is necessary because it can provide consistent painting film thickness. Furthermore, autonomous mobile robots are strongly required for flexible painting work. The main problem for autonomous mobile robot navigation is that there are many obstacles which are not represented in the CAD data. To overcome this problem, obstacle detection and recognition are necessary to avoid obstacles and perform painting work effectively. Until now many object recognition algorithms have been studied; in particular, 2D object recognition methods using intensity images have been widely studied. However, in our case environmental illumination does not exist, so these methods cannot be used. To overcome this, 3D range data must be used, but the problem with 3D range data is the high computational cost and long recognition time due to the huge database. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, and the PCA and NN algorithms are then applied to the transformed intensity information to reduce the processing time and make the data easy to handle, which are the disadvantages of previous research on 3D object recognition. A set of experimental results is shown to verify the effectiveness of the proposed algorithm.
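
    As a minimal illustration of the PCA stage only (a generic eigen-decomposition of training images plus a nearest-neighbour lookup on projected coefficients; the neural-network stage and the range-to-intensity transform are omitted, and the data here are synthetic):

```python
import numpy as np

def fit_pca(X, k):
    """PCA on row-vector samples: returns the mean and the top-k principal axes."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(X, mean, axes):
    return (X - mean) @ axes.T

rng = np.random.default_rng(2)
X_train = rng.random((60, 32 * 32))            # 60 intensity-coded range images
labels = np.repeat(np.arange(6), 10)           # 6 object classes
mean, axes = fit_pca(X_train, k=8)
coeffs = project(X_train, mean, axes)

query = X_train[17] + rng.normal(0, 0.05, X_train.shape[1])   # noisy view of class 1
q = project(query[None, :], mean, axes)
nearest = np.argmin(np.linalg.norm(coeffs - q, axis=1))
print(labels[nearest])                          # recognized class in the reduced space
```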

  4. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.

  5. Conversion and control of an all-terrain vehicle for use as an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Jacob, John S.; Gunderson, Robert W.; Fullmer, R. R.

    1998-08-01

    A systematic approach to ground vehicle automation is presented, combining low-level controls, trajectory generation and closed-loop path correction in an integrated system. Development of cooperative robotics for precision agriculture at Utah State University required the automation of a full-scale motorized vehicle. The Triton Predator 8-wheeled skid-steering all-terrain vehicle was selected for the project based on its ability to maneuver precisely and the simplicity of controlling the hydrostatic drivetrain. Low-level control was achieved by fitting an actuator on the engine throttle, actuators for the left and right drive controls, encoders on the left and right drive shafts to measure wheel speeds, and a signal pick-off on the alternator for measuring engine speed. Closed loop control maintains a desired engine speed and tracks left and right wheel speed commands. A trajectory generator produces the wheel speed commands needed to steer the vehicle through a predetermined set of map coordinates. A planar trajectory through the points is computed by fitting a 2D cubic spline over each path segment while enforcing initial and final orientation constraints at segment endpoints. Acceleration and velocity profiles are computed for each trajectory segment, with the velocity over each segment dependent on turning radius. Left and right wheel speed setpoints are obtained by combining velocity and path curvature for each low-level timestep. The path correction algorithm uses GPS position and compass orientation information to adjust the wheel speed setpoints according to the 'crosstrack' and 'downtrack' errors and heading error. Nonlinear models of the engine and the skid-steering vehicle/ground interaction were developed for testing the integrated system in simulation. These tests led to several key design improvements which assisted the final implementation on the vehicle.
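
    The step from a planned planar trajectory to left/right wheel speed setpoints for a skid-steering vehicle follows from the commanded velocity and the local path curvature. The sketch below uses standard differential-drive kinematics with an assumed track width (an illustration of that conversion, not the project's controller):

```python
def wheel_speed_setpoints(v, curvature, track_width=1.2):
    """Left/right wheel speed setpoints (m/s) for a skid-steer vehicle following
    a path with the given signed curvature (1/m) at forward speed v (m/s)."""
    omega = v * curvature                    # yaw rate implied by the path
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right

print(wheel_speed_setpoints(2.0, 0.0))       # straight segment: equal speeds
print(wheel_speed_setpoints(2.0, 0.25))      # left turn of radius 4 m: right wheel faster
```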

  6. A High Speed Mobile Courier Data Access System That Processes Database Queries in Real-Time

    NASA Astrophysics Data System (ADS)

    Gatsheni, Barnabas Ndlovu; Mabizela, Zwelakhe

    A secure high-speed query processing mobile courier data access (MCDA) system for a Courier Company has been developed. This system uses wireless networks in combination with wired networks so that an offsite worker (the Courier) can update a live database at the courier centre in real time. The system is protected by a VPN based on IPsec. There is no system that we know of to date that performs the task for the courier as proposed in this paper.

  7. Are mobile speed cameras effective? A controlled before and after study

    PubMed Central

    Christie, S; Lyons, R; Dunstan, F; Jones, S

    2003-01-01

    Objective: To identify the most appropriate metric to determine the effectiveness of mobile speed cameras in reducing road traffic related injuries. Design: Controlled before and after study which compares two methods for examining the local effectiveness of mobile speed cameras—a circular zone around the camera and a route based method to define exposure at various distances from sites. Setting: South Wales, UK. Subjects: Persons injured by road traffic before and after intervention. Intervention: Use of mobile speed cameras at 101 sites. Main outcome measures: Rate ratio of injurious crashes at intervention and control sites. Results: Camera sites had lower than expected numbers of injurious crashes up to 300 metres using circles and up to 500 metres using routes. The routes method indicated a larger effect than the circles method except in the 100 metres nearest sites. A 500 metre route method was used to investigate the effect within strata of time after intervention, time of day, speed limit, and type of road user injured. The number of injurious crashes after intervention was substantially reduced (rate ratio 0.49, 95% confidence interval 0.42 to 0.57) and sustained throughout two years after intervention. Significant decreases occurred in daytime and night time, on roads with speed limits of 30 and 60–70 miles/hour and for crashes that injured pedestrians, motorcycle users, and car occupants. Conclusions: The route-based method is the better method for measuring effectiveness at distances up to 500 metres. This method demonstrates a 51% reduction in injurious crashes. PMID:14693888

  8. Hydrodynamics of a robotic fish tail: effects of the caudal peduncle, fin ray motions and the flow speed.

    PubMed

    Ren, Ziyu; Yang, Xingbang; Wang, Tianmiao; Wen, Li

    2016-02-01

    Recent advances in understanding fish locomotion with robotic devices have included the use of biomimetic flapping based and fin undulatory locomotion based robots, treating two locomotions separately from each other. However, in most fish species, patterns of active movements of fins occur in concert with the body undulatory deformation during swimming. In this paper, we describe a biomimetic robotic caudal fin programmed with individually actuated fin rays to mimic the fin motion of the Bluegill Sunfish (Lepomis macrochirus) and coupled with heave and pitch oscillatory motions adding to the robot to mimic the peduncle motion which is derived from the undulatory fish body. Multiple-axis force and digital particle image velocimetry (DPIV) experiments from both the vertical and horizontal planes behind the robotic model were conducted under different motion programs and flow speeds. We found that both mean thrust and lift could be altered by changing the phase difference (φ) from 0° to 360° between the robotic caudal peduncle and the fin ray motion (spanning from 3 mN to 124 mN). Notably, DPIV results demonstrated that the caudal fin generated multiple wake flow patterns in both the vertical and horizontal planes by varying φ. Vortex jet angle and thrust impulse also varied significantly both in these two planes. In addition, the vortex shedding position along the spanwise tail direction could be shifted around the mid-sagittal position between the upper and lower lobes by changing the phase difference. We hypothesize that the fish caudal fin may serve as a flexible vectoring propeller during swimming and may be critical for the high maneuverability of fish. PMID:26855405

  9. Hydrodynamics of a robotic fish tail: effects of the caudal peduncle, fin ray motions and the flow speed.

    PubMed

    Ren, Ziyu; Yang, Xingbang; Wang, Tianmiao; Wen, Li

    2016-02-01

    Recent advances in understanding fish locomotion with robotic devices have included the use of biomimetic flapping based and fin undulatory locomotion based robots, treating two locomotions separately from each other. However, in most fish species, patterns of active movements of fins occur in concert with the body undulatory deformation during swimming. In this paper, we describe a biomimetic robotic caudal fin programmed with individually actuated fin rays to mimic the fin motion of the Bluegill Sunfish (Lepomis macrochirus) and coupled with heave and pitch oscillatory motions adding to the robot to mimic the peduncle motion which is derived from the undulatory fish body. Multiple-axis force and digital particle image velocimetry (DPIV) experiments from both the vertical and horizontal planes behind the robotic model were conducted under different motion programs and flow speeds. We found that both mean thrust and lift could be altered by changing the phase difference (φ) from 0° to 360° between the robotic caudal peduncle and the fin ray motion (spanning from 3 mN to 124 mN). Notably, DPIV results demonstrated that the caudal fin generated multiple wake flow patterns in both the vertical and horizontal planes by varying φ. Vortex jet angle and thrust impulse also varied significantly both in these two planes. In addition, the vortex shedding position along the spanwise tail direction could be shifted around the mid-sagittal position between the upper and lower lobes by changing the phase difference. We hypothesize that the fish caudal fin may serve as a flexible vectoring propeller during swimming and may be critical for the high maneuverability of fish.

  10. Lightweight robotic mobility: template-based modeling for dynamics and controls using ADAMS/car and MATLAB

    NASA Astrophysics Data System (ADS)

    Adamczyk, Peter G.; Gorsich, David J.; Hudas, Greg R.; Overholt, James

    2003-09-01

    The U.S. Army is seeking to develop autonomous off-road mobile robots to perform tasks in the field such as supply delivery and reconnaissance in dangerous territory. A key problem to be solved with these robots is off-road mobility, to ensure that the robots can accomplish their tasks without loss or damage. We have developed a computer model of one such concept robot, the small-scale "T-1" omnidirectional vehicle (ODV), to study the effects of different control strategies on the robot's mobility in off-road settings. We built the dynamic model in ADAMS/Car and the control system in Matlab/Simulink. This paper presents the template-based method used to construct the ADAMS model of the T-1 ODV. It discusses the strengths and weaknesses of ADAMS/Car software in such an application, and describes the benefits and challenges of the approach as a whole. The paper also addresses effective linking of ADAMS/Car and Matlab for complete control system development. Finally, this paper includes a section describing the extension of the T-1 templates to other similar ODV concepts for rapid development.

  11. Odour-tracking capability of a silkmoth driving a mobile robot with turning bias and time delay.

    PubMed

    Ando, N; Emoto, S; Kanzaki, R

    2013-03-01

    The reconstruction of mechanisms behind odour-tracking behaviours of animals is expected to enable the development of biomimetic robots capable of adaptive behaviour and effectively locating odour sources. However, because the behavioural mechanisms of animals have not been extensively studied, their behavioural capabilities cannot be verified. In this study, we have employed a mobile robot driven by a genuine insect (insect-controlled robot) to evaluate the behavioural capabilities of a biological system implemented in an artificial system. We used a male silkmoth as the 'driver' and investigated its behavioural capabilities to imposed perturbations during odour tracking. When we manipulated the robot to induce the turning bias, it located the odour source by compensatory turning of the on-board moth. Shifting of the orientation paths to the odour plume boundaries and decreased orientation ability caused by covering the visual field suggested that the moth steered with bilateral olfaction and vision to overcome the bias. An evaluation of the time delays of the moth and robot movements suggested an acceptable range for sensory-motor processing when the insect system was directly applied to artificial systems. Further evaluations of the insect-controlled robot will provide a 'blueprint' for biomimetic robots and strongly promote the field of biomimetics. PMID:23385386

  12. Odour-tracking capability of a silkmoth driving a mobile robot with turning bias and time delay.

    PubMed

    Ando, N; Emoto, S; Kanzaki, R

    2013-03-01

    The reconstruction of mechanisms behind odour-tracking behaviours of animals is expected to enable the development of biomimetic robots capable of adaptive behaviour and effectively locating odour sources. However, because the behavioural mechanisms of animals have not been extensively studied, their behavioural capabilities cannot be verified. In this study, we have employed a mobile robot driven by a genuine insect (insect-controlled robot) to evaluate the behavioural capabilities of a biological system implemented in an artificial system. We used a male silkmoth as the 'driver' and investigated its behavioural capabilities to imposed perturbations during odour tracking. When we manipulated the robot to induce the turning bias, it located the odour source by compensatory turning of the on-board moth. Shifting of the orientation paths to the odour plume boundaries and decreased orientation ability caused by covering the visual field suggested that the moth steered with bilateral olfaction and vision to overcome the bias. An evaluation of the time delays of the moth and robot movements suggested an acceptable range for sensory-motor processing when the insect system was directly applied to artificial systems. Further evaluations of the insect-controlled robot will provide a 'blueprint' for biomimetic robots and strongly promote the field of biomimetics.

  13. Modelling and precision of the localization of the robotic mobile platforms for constructions with laser tracker and SmartTrack sensor

    NASA Astrophysics Data System (ADS)

    Dima, M.; Francu, C.

    2016-08-01

    This paper presents a way to expand the field of use of the laser tracker and SmartTrack sensor localization device, used lately for localizing the end effector of industrial robots, to the localization of mobile construction robots. The paper presents the equipment along with its characteristics and determines the relationships for the localization coordinates by comparison to the forward kinematics of an industrial robot's spherical arm (a positioning mechanism in spherical coordinates) and an orientation mechanism with three revolute axes. At the end of the paper, the accuracy of the mobile robot's localization is analysed.

  14. Reasoning and planning in dynamic domains: An experiment with a mobile robot

    NASA Technical Reports Server (NTRS)

    Georgeff, M. P.; Lansky, A. L.; Schoppers, M. J.

    1987-01-01

    Progress made toward having an autonomous mobile robot reason and plan complex tasks in real-world environments is described. To cope with the dynamic and uncertain nature of the world, researchers use a highly reactive system to which are attributed attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the system need only be partly elaborated before it decides to act. This allows the system to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of over-commitment common to previous planners. In addition, the system is continuously reactive and has the ability to change its goals and intentions as situations warrant. Thus, while the system architecture allows for reasoning about means and ends in much the same way as traditional planners, it also possesses the reactivity required for survival in complex real-world domains. The system was tested using SRI's autonomous robot (Flakey) in a space station scenario involving navigation and the performance of an emergency task.

  15. On learning navigation behaviors for small mobile robots with reservoir computing architectures.

    PubMed

    Antonelo, Eric Aislan; Schrauwen, Benjamin

    2015-04-01

    This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal. PMID:25794381
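
    The core of the RC recipe, a fixed random recurrent reservoir with only a linear readout trained, can be written compactly. The sketch below fits a generic echo-state-network readout by ridge regression on synthetic data (an illustration of the training scheme, not the paper's architecture or robot behaviours; sizes and the toy target are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_res = 4, 200
W_in = rng.normal(0, 0.5, (n_res, n_in))            # fixed input weights
W = rng.normal(0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # keep spectral radius below 1

def run_reservoir(U):
    """Drive the fixed reservoir with an input sequence U (T x n_in)."""
    x = np.zeros(n_res); states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)                # recurrent part stays untrained
        states.append(x.copy())
    return np.array(states)

U = rng.normal(size=(500, n_in))                     # e.g., sensor readings over time
Y = np.roll(U[:, :1], 1, axis=0)                     # toy target: delayed first sensor
X = run_reservoir(U)
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ Y)  # ridge readout
print(np.mean((X @ W_out - Y) ** 2))                 # training error of the linear readout
```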

  16. Robust Dead Reckoning System for Mobile Robots Based on Particle Filter and Raw Range Scan

    PubMed Central

    Duan, Zhuohua; Cai, Zixing; Min, Huaqing

    2014-01-01

    Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) that may be faulty, for example through sticking sensors or slipping wheels, because the discrete fault models and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handle hybrid system estimation problems, and they have also been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, fault identification, etc. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318
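
    A particle filter that estimates a discrete fault mode jointly with the continuous pose simply attaches the mode to each particle and lets the mode shape the motion model. The sketch below is a heavily simplified 1D illustration with made-up motion and measurement models (not the paper's WMR kinematics or laser perception model), showing only the mechanics:

```python
import numpy as np

rng = np.random.default_rng(4)
MODES = {"normal": 1.0, "slipping": 0.3}     # slipping wheels realize less motion
N = 500
pos = np.zeros(N)                            # continuous state: 1D position
mode = rng.choice(list(MODES), size=N)       # discrete state: fault hypothesis

def step(pos, mode, u, z, meas_std=0.2):
    """One predict/update/resample cycle for control u and a range-like measurement z."""
    gain = np.array([MODES[m] for m in mode])
    pos = pos + gain * u + rng.normal(0, 0.05, len(pos))       # mode-dependent prediction
    w = np.exp(-0.5 * ((z - pos) / meas_std) ** 2) + 1e-12     # likelihood of the scan
    idx = rng.choice(len(pos), len(pos), p=w / w.sum())        # resample
    return pos[idx], mode[idx]

for t in range(1, 21):                       # robot is actually slipping: moves 0.3 per step
    pos, mode = step(pos, mode, u=1.0, z=0.3 * t)
print((mode == "slipping").mean())           # most surviving particles carry the fault mode
```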

  17. Pseudolinear Model Based Solution to the SLAM Problem of Nonholonomic Mobile Robots

    NASA Astrophysics Data System (ADS)

    Pathiranage, Chandima Dedduwa; Watanabe, Keigo; Izumi, Kiyotaka

    This paper describes an improved solution to the simultaneous localization and mapping (SLAM) problem based on pseudolinear models. Accurate estimation of vehicle and landmark states is one of the key issues for successful mobile robot navigation if the configuration of the environment and the initial robot location are unknown. A state estimator that can be designed to use the nonlinearity as it comes from the original model is invaluable where high accuracy is expected. To this end, a pseudolinear model based Kalman filter (PLKF) state estimator is introduced. A less error-prone vehicle process model is proposed to improve the accuracy and speed up the convergence of state estimation. The evolution of vehicle motion is modeled using the vehicle frame translation derived from successive dead-reckoned poses as a control input. A measurement model with two sensor frames is proposed to improve the data association. The PLKF-based SLAM algorithm is simulated using Matlab for a vehicle-landmarks system, and the results show that the proposed approach performs much more accurately than the well-known extended Kalman filter (EKF).

  18. Robust dead reckoning system for mobile robots based on particle filter and raw range scan.

    PubMed

    Duan, Zhuohua; Cai, Zixing; Min, Huaqing

    2014-09-04

    Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) that may be faulty, for example through sticking sensors or slipping wheels, because the discrete fault models and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handle hybrid system estimation problems, and they have also been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, fault identification, etc. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method.

  19. SAL: a language for developing an agent-based architecture for mobile robots

    NASA Astrophysics Data System (ADS)

    Lim, Willie Y.; Verzulli, Joe

    1993-05-01

    SAL (the SmartyCat Agent Language) is a language being developed for programming SmartyCat, our mobile robot. SmartyCat's underlying software architecture is agent-based. At the lowest level, the robot sensors and actuators are controlled by agents (viz., the sensing and acting agents, respectively). SAL provides the constructs for organizing these agents into many structures. In particular, SAL supports the subsumption architecture approach. At higher levels of abstraction, SAL can be used for writing programs based on Minsky's Society of Mind paradigm. Structurally, a SAL program is a graph, where the nodes are software modules called agents, and the arcs represent abstract communication links between agents. In SAL, an agent is a CLOS object with input and output ports. Input ports are used for presenting data from the outside world (i.e., other agents) to the agent. Data are presented to the outside world by the agent through its output ports. The main body of the SAL code for the agent specifies the computation or the action performed by the agent. This paper describes how SAL is being used for implementing the agent-based SmartyCat software architecture on a Cybermotion K2A platform.

  20. On learning navigation behaviors for small mobile robots with reservoir computing architectures.

    PubMed

    Antonelo, Eric Aislan; Schrauwen, Benjamin

    2015-04-01

    This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.