Sample records for autonomous mobile robot

  1. Cooperative Autonomous Robots for Reconnaissance

    DTIC Science & Technology

    2009-03-06

    Collaborating mobile robots equipped with WiFi transceivers are configured as a mobile ad-hoc network. Algorithms are developed to take advantage of the distributed processing ...

  2. Long-Term Simultaneous Localization and Mapping in Dynamic Environments

    DTIC Science & Technology

    2015-01-01

    One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data ... simultaneous localization and mapping (SLAM) is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory ...

  3. [Mobile autonomous robots-Possibilities and limits].

    PubMed

    Maehle, E; Brockmann, W; Walthelm, A

    2002-02-01

    Besides industrial robots, which today are firmly established in production processes, service robots are becoming more and more important. They provide services for humans in different areas of their professional and everyday environment, including medicine. Most of these service robots are mobile, which requires intelligent autonomous behaviour. After characterising the different kinds of robots, the relevant paradigms of intelligent autonomous behaviour for mobile robots are critically discussed in this paper and illustrated by three concrete examples of robots realized in Lübeck. In addition, a short survey of current kinds of surgical robots as well as an outlook on future developments is given.

  4. JOMAR: Joint Operations with Mobile Autonomous Robots

    DTIC Science & Technology

    2015-12-21

    AFRL-AFOSR-JP-TR-2015-0009, JOMAR: Joint Operations with Mobile Autonomous Robots, Edwin Olson, University of Michigan, Final Report, 12/21/2015, contract number FA23861114024. ABSTRACT: Under this grant, we formulated and implemented a variety of novel algorithms that address core problems in multi-robot systems. These ...

  5. Mamdani Fuzzy System for Indoor Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.

    2011-06-01

    Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
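    The record above does not reproduce the controller itself, but the idea of a Mamdani fuzzy speed controller can be sketched briefly. The membership functions, universe ranges, and two-rule base below are illustrative assumptions, not the authors' design.

```python
# Minimal Mamdani fuzzy speed-control sketch (illustrative; not the paper's
# actual rule base): obstacle distance in, forward speed out.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_speed(obstacle_dist_m):
    # Fuzzify the input (assumed universe: 0..3 m).
    near = tri(obstacle_dist_m, 0.0, 0.0, 1.5)
    far = tri(obstacle_dist_m, 1.0, 3.0, 3.0)

    # Output universe: speed in m/s (assumed 0..1).
    v = np.linspace(0.0, 1.0, 101)
    slow = tri(v, 0.0, 0.0, 0.5)
    fast = tri(v, 0.4, 1.0, 1.0)

    # Mamdani inference: clip each consequent by its rule strength, aggregate with max.
    aggregated = np.maximum(np.minimum(near, slow), np.minimum(far, fast))

    # Centroid defuzzification.
    return float(np.sum(v * aggregated) / (np.sum(aggregated) + 1e-9))

print(fuzzy_speed(0.5), fuzzy_speed(2.5))  # slower near obstacles, faster when clear
```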

  6. Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.

    1999-08-01

    A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination or along a desired path in an environment characterized by a terrain and a set of distinct objects, such as obstacles and landmarks. The autonomous navigation ability and road-following precision are mainly influenced by the control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots due to their high robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is to identify and classify the robot's current perceptual environment. In this paper, a new approach to identifying and classifying the current perceptual environment, based on the analysis of a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.

  7. Towards Principled Experimental Study of Autonomous Mobile Robots

    NASA Technical Reports Server (NTRS)

    Gat, Erann

    1995-01-01

    We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) Two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed. 2) The probability distributions of these performance metrics are exponential rather than normal, and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.
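    For readers who want to reproduce this kind of analysis, the toy sketch below checks the correlation between time and distance metrics and compares exponential against normal fits; the data are synthetic placeholders, not the paper's measurements.

```python
# Hypothetical reanalysis sketch: correlate two run metrics and compare
# exponential vs. normal fits. Data are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
time_s = rng.exponential(scale=60.0, size=200)                  # per-run completion time
dist_m = 0.5 * time_s + rng.exponential(scale=30.0, size=200)   # loosely related distance

r, p = stats.pearsonr(time_s, dist_m)
print(f"time/distance correlation r={r:.2f} (p={p:.3f})")

# Compare exponential and normal hypotheses with Kolmogorov-Smirnov tests.
loc, scale = stats.expon.fit(time_s)
print("KS vs exponential:", stats.kstest(time_s, "expon", args=(loc, scale)))
mu, sigma = stats.norm.fit(time_s)
print("KS vs normal:     ", stats.kstest(time_s, "norm", args=(mu, sigma)))
```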

  8. Tele-assistance for semi-autonomous robots

    NASA Technical Reports Server (NTRS)

    Rogers, Erika; Murphy, Robin R.

    1994-01-01

    This paper describes a new approach to semi-autonomous mobile robots. In this approach the robot has sufficient computerized intelligence to function autonomously under a certain set of conditions, while the local system is a cooperative decision-making unit that combines human and machine intelligence. Communication is then allowed to take place in a common mode and in a common language. A number of exception-handling scenarios that were constructed as a result of experiments with actual sensor data collected from two mobile robots are presented.

  9. Teleautonomous guidance for mobile robots

    NASA Technical Reports Server (NTRS)

    Borenstein, J.; Koren, Y.

    1990-01-01

    Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
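    The abstract only summarizes the virtual force field idea; the sketch below illustrates the general principle (occupied certainty-grid cells repel, the operator's prescribed direction attracts, and the resultant vector steers the robot). The grid window, gains, and falloff are assumptions, not the published VFF parameters.

```python
# Illustrative virtual-force-field style steering sketch (not the published
# VFF implementation): occupied certainty-grid cells repel the robot, the
# operator's prescribed direction attracts it, and the resultant steers it.
import numpy as np

def vff_direction(grid, robot_cell, target_dir, k_rep=1.0, k_att=3.0, window=10):
    """grid: 2D occupancy-certainty array; robot_cell: (row, col);
    target_dir: unit vector (x, y) prescribed by the operator."""
    force = k_att * np.asarray(target_dir, dtype=float)
    r0, c0 = robot_cell
    for r, c in zip(*np.nonzero(grid > 0)):
        away = np.array([c0 - c, r0 - r], dtype=float)     # obstacle -> robot, in (x, y)
        dist = np.linalg.norm(away)
        if 0 < dist <= window:                              # only cells in a local window
            force += k_rep * grid[r, c] * away / dist**3    # normalized, 1/d^2 falloff
    return force / (np.linalg.norm(force) + 1e-9)           # resulting steering direction

grid = np.zeros((50, 50)); grid[23, 28] = 1.0                # one occupied cell ahead-left
print(vff_direction(grid, (25, 25), (1.0, 0.0), k_rep=50.0)) # repulsion bends the heading
```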

  10. From Autonomous Robots to Artificial Ecosystems

    NASA Astrophysics Data System (ADS)

    Mastrogiovanni, Fulvio; Sgorbissa, Antonio; Zaccaria, Renato

    During the past few years, starting from the two mainstream fields of Ambient Intelligence [2] and Robotics [17], several authors recognized the benefits of the so-called Ubiquitous Robotics paradigm. According to this perspective, mobile robots are no longer autonomous, physically situated and embodied entities adapting themselves to a world tailored for humans: on the contrary, they are able to interact with devices distributed throughout the environment and exchange heterogeneous information by means of communication technologies. Information exchange, coupled with simple actuation capabilities, is meant to replace physical interaction between robots and their environment. Two benefits are evident: (i) smart environments overcome inherent limitations of mobile platforms, whereas (ii) mobile robots offer a mobility dimension unknown to smart environments.

  11. SLAM algorithm applied to robotics assistance for navigation in unknown environments.

    PubMed

    Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo

    2010-02-17

    The combination of robotic tools with assistive technology defines a scarcely explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. The entire system was tested in a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI and the real-time communication between both have proven to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation.
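    As a rough companion to the method described above, the skeleton below shows a generic EKF-SLAM predict/update cycle for a planar robot with point landmarks and range-bearing measurements; it is a textbook simplification, not the paper's line/corner feature implementation or its MCI integration.

```python
# Simplified EKF-SLAM step for a planar robot with point landmarks
# (range-bearing sensing). Generic textbook skeleton, not the paper's code.
import numpy as np

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def predict(x, P, v, w, dt, Q):
    """x = [xr, yr, th, lx1, ly1, ...]; unicycle motion on the robot states."""
    th = x[2]
    x = x.copy()
    x[0] += v * dt * np.cos(th)
    x[1] += v * dt * np.sin(th)
    x[2] = wrap(th + w * dt)
    F = np.eye(len(x))
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    P = F @ P @ F.T
    P[:3, :3] += Q                      # motion noise enters the robot states only
    return x, P

def update(x, P, z, j, R):
    """z = (range, bearing) to landmark j already present in the state."""
    lx, ly = x[3 + 2 * j], x[4 + 2 * j]
    dx, dy = lx - x[0], ly - x[1]
    q = dx * dx + dy * dy
    zhat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - x[2])])
    H = np.zeros((2, len(x)))
    H[:, :3] = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                         [dy / q, -dx / q, -1.0]])
    H[:, 3 + 2 * j:5 + 2 * j] = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                                          [-dy / q, dx / q]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innov = np.array([z[0] - zhat[0], wrap(z[1] - zhat[1])])
    x = x + K @ innov
    x[2] = wrap(x[2])
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```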

  12. Adaptive Behavior for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2009-01-01

    The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.

  13. Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging

    DTIC Science & Technology

    2016-01-01

    Excerpted section headings: Multi-session and Multi-robot SLAM; Robust Techniques for SLAM Backends; The Importance of SLAM in Autonomous Robotics. Autonomous mobile robots are becoming a promising aid in a wide ...

  14. Research state-of-the-art of mobile robots in China

    NASA Astrophysics Data System (ADS)

    Wu, Lin; Zhao, Jinglun; Zhang, Peng; Li, Shiqing

    1991-03-01

    Several newly developed mobile robots in China are described in this paper, including a master-slave telerobot, a six-legged robot, a biped walking robot, a remote inspection robot, a crawler-type mobile robot and an autonomous mobile vehicle. Some relevant technologies are also described.

  15. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    PubMed Central

    2010-01-01

    Background The combination of robotic tools with assistive technology defines a scarcely explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results The entire system was tested in a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with an MCI and the real-time communication between both have proven to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation. PMID:20163735

  16. Embodied Computation: An Active-Learning Approach to Mobile Robotics Education

    ERIC Educational Resources Information Center

    Riek, L. D.

    2013-01-01

    This paper describes a newly designed upper-level undergraduate and graduate course, Autonomous Mobile Robots. The course employs active, cooperative, problem-based learning and is grounded in the fundamental computational problems in mobile robotics defined by Dudek and Jenkin. Students receive a broad survey of robotics through lectures, weekly…

  17. Autonomous mobile robot teams

    NASA Technical Reports Server (NTRS)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group are distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots to produce behavior by transforming their sensory information into appropriate actions. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  18. Speed control for a mobile robot

    NASA Astrophysics Data System (ADS)

    Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. The speed control of the traction motor is essential for safe operation of a mobile robot. The challenges of autonomous operation of a vehicle require safe, runaway-free and collision-free operation. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller. The traction motor is controlled via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outside course with positive results. This design is a prototype and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.

  19. Rice-obot 1: An intelligent autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R.; Ciscon, L.; Berberian, D.

    1989-01-01

    The Rice-obot I is the first in a series of Intelligent Autonomous Mobile Robots (IAMRs) being developed at Rice University's Cooperative Intelligent Mobile Robots (CIMR) lab. The Rice-obot I is mainly designed to be a testbed for various robotic and AI techniques, and a platform for developing intelligent control systems for exploratory robots. Researchers present the need for a generalized environment capable of combining all of the control, sensory and knowledge systems of an IAMR. They introduce Lisp-Nodes as such a system, and develop the basic concepts of nodes, messages and classes. Furthermore, they show how the control system of the Rice-obot I is implemented as sub-systems in Lisp-Nodes.

  20. Dynamic multisensor fusion for mobile robot navigation in an indoor environment

    NASA Astrophysics Data System (ADS)

    Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.

    2001-10-01

    This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as sonar, CCD camera and IR sensors, for a map-building mobile robot to navigate, and it presents an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We give an explanation of the robot system architecture designed and implemented in this study and a short review of existing techniques, since several recent thorough books and review papers already cover that ground. Instead we focus on the main results with relevance to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. The paper first deals with the general principles of the navigation and guidance architecture, and then with the detailed functions of recognizing updated environments, obstacle detection and motion assessment, together with first results from the simulation runs.

  1. Semi-autonomous exploration of multi-floor buildings with a legged robot

    NASA Astrophysics Data System (ADS)

    Wenger, Garrett J.; Johnson, Aaron M.; Taylor, Camillo J.; Koditschek, Daniel E.

    2015-05-01

    This paper presents preliminary results of a semi-autonomous building exploration behavior using the hexapedal robot RHex. Stairwells are used in virtually all multi-floor buildings, and so in order for a mobile robot to effectively explore, map, clear, monitor, or patrol such buildings it must be able to ascend and descend stairwells. However most conventional mobile robots based on a wheeled platform are unable to traverse stairwells, motivating use of the more mobile legged machine. This semi-autonomous behavior uses a human driver to provide steering input to the robot, as would be the case in, e.g., a tele-operated building exploration mission. The gait selection and transitions between the walking and stair climbing gaits are entirely autonomous. This implementation uses an RGBD camera for stair acquisition, which offers several advantages over a previously documented detector based on a laser range finder, including significantly reduced acquisition time. The sensor package used here also allows for considerable expansion of this behavior. For example, complete automation of the building exploration task driven by a mapping algorithm and higher level planner is presently under development.

  2. Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.

    PubMed

    Deng, Fucheng; Zhu, Xiaorui; He, Chao

    2017-09-13

    Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is their high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting the traversable region in the outdoors. In the proposed method, an appearance model based on a multivariate Gaussian is quickly constructed from a sample region in the left image, adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in real-time navigation applications.
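    A minimal sketch of the appearance-model stage described above: fit a multivariate Gaussian to the colours of an assumed sample region and classify pixels by squared Mahalanobis distance. The sample region, threshold, and random test image are placeholders; the vanishing-point estimation and self-supervised segmentation stages are not reproduced.

```python
# Traversable-region classification sketch: a multivariate Gaussian colour
# model fitted to an assumed sample region, thresholded by Mahalanobis
# distance. Illustrative only; not the paper's full pipeline.
import numpy as np

def fit_appearance_model(image, sample_mask):
    """image: HxWx3 float array; sample_mask: boolean HxW mask of 'road' pixels."""
    pixels = image[sample_mask].reshape(-1, 3)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)   # regularize
    return mean, np.linalg.inv(cov)

def classify_traversable(image, mean, cov_inv, thresh=9.0):
    diff = image.reshape(-1, 3) - mean
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)      # squared Mahalanobis distance
    return (d2 < thresh).reshape(image.shape[:2])

# Example with random data; in practice the sample region would come from the
# vanishing point and dominant borders of the left camera image.
img = np.random.rand(120, 160, 3)
mask = np.zeros((120, 160), dtype=bool); mask[90:, 50:110] = True
mean, cov_inv = fit_appearance_model(img, mask)
traversable = classify_traversable(img, mean, cov_inv)
```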

  3. Intelligent mobility research for robotic locomotion in complex terrain

    NASA Astrophysics Data System (ADS)

    Trentini, Michael; Beckman, Blake; Digney, Bruce; Vincent, Isabelle; Ricard, Benoit

    2006-05-01

    The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating practical, cost effective, autonomous intelligent systems capable of completing military missions in complex operating environments." The mobility requirement for ground-based mobile systems operating in urban settings must increase significantly if robotic technology is to augment human efforts in these roles and environments. The intelligence required for autonomous systems to operate in complex environments demands advances in many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation, and navigation, but the problem of locomotion in complex terrain has largely been ignored. In order to achieve its objective, the Autonomous Intelligent Systems Section is pursuing research that explores the use of intelligent mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience. These algorithms seek to exploit available world representations of the environment and the inherent dexterity of the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The primary focus of the paper is to present the intelligent mobility research within the framework of the research methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical research gap. This research will create effective intelligence to improve the mobility of ground-based mobile systems operating in urban settings to assist the Canadian Forces in their future urban operations.

  4. Developing operation algorithms for vision subsystems in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints on the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
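    The abstract names a histogram-of-oriented-gradients plus support-vector approach; a commonly used off-the-shelf equivalent is OpenCV's default HOG people detector, sketched below as a generic stand-in (the authors' own training, descriptor parameters, and obstacle classes are not reproduced, and the input image path is hypothetical).

```python
# Generic HOG + linear-SVM people detection using OpenCV's built-in detector,
# shown as a stand-in for the approach described above.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")                     # hypothetical input image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h), score in zip(boxes, weights):
    if float(score) > 0.5:                          # assumed confidence threshold
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```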

  5. Theseus: tethered distributed robotics (TDR)

    NASA Astrophysics Data System (ADS)

    Digney, Bruce L.; Penzes, Steven G.

    2003-09-01

    Defence Research and Development Canada's (DRDC) Autonomous Intelligent Systems program conducts research to increase the independence and effectiveness of military vehicles and systems. DRDC-Suffield's Autonomous Land Systems (ALS) is creating new concept vehicles and autonomous control systems for use in outdoor areas, urban streets, urban interiors and urban subspaces. This paper will first give an overview of the ALS program and then give a specific description of the work being done for mobility in urban subspaces. Discussed will be the Theseus: Tethered Distributed Robotics (TDR) system, which will not only manage an unavoidable tether but exploit it for mobility and navigation. Also discussed will be the prototype robot called the Hedgehog, which uses conformal 3D mobility in ducts, sewer pipes, collapsed rubble voids and chimneys.

  6. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  7. Navigation strategies for multiple autonomous mobile robots moving in formation

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1991-01-01

    The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
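    As a rough illustration of the nearest-neighbor-tracking idea mentioned above, the sketch below steers each particle-like robot toward a fixed offset from its closest peer; the gain, offset, and first-order kinematics are assumptions, not the paper's navigation strategy or stability analysis.

```python
# Nearest-neighbour formation-keeping sketch (illustrative, not the paper's
# control law): each robot steers toward a fixed offset from its closest peer.
import numpy as np

def formation_step(positions, offset, gain=0.5, dt=0.1):
    """positions: (N, 2) robot positions; offset: desired (dx, dy) to the neighbour."""
    positions = np.asarray(positions, dtype=float)
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        d = np.linalg.norm(positions - p, axis=1)
        d[i] = np.inf                              # ignore self
        nearest = positions[np.argmin(d)]
        target = nearest + np.asarray(offset)      # hold station relative to the neighbour
        new_positions[i] = p + gain * (target - p) * dt
    return new_positions

fleet = np.array([[0.0, 0.0], [2.0, 0.5], [4.0, -0.5]])
for _ in range(100):
    fleet = formation_step(fleet, offset=(1.5, 0.0))
print(fleet)                                       # final configuration of the fleet
```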

  8. Tank-automotive robotics

    NASA Astrophysics Data System (ADS)

    Lane, Gerald R.

    1999-07-01

    This paper provides an overview of tank-automotive robotics. The briefing will contain program overviews, inter-relationships and technology challenges of TARDEC-managed unmanned and robotic ground vehicle programs. Specific emphasis will focus on technology developments and approaches to achieve semi-autonomous operation and inherent chassis mobility features. Programs to be discussed include: Demo III Experimental Unmanned Vehicle (XUV), Tactical Mobile Robotics (TMR), Intelligent Mobility, Commanders Driver Testbed, Collision Avoidance, and the International Ground Robotics Competition (IGRC). Specifically, the paper will discuss the unique exterior/outdoor challenges facing the IGRC competing teams and the synergy created between the IGRC and ongoing DoD semi-autonomous Unmanned Ground Vehicle and DoT Intelligent Transportation System programs. Sensor and chassis approaches to meet the IGRC challenges and obstacles will be shown and discussed. Shortfalls in performance to meet the IGRC challenges will be identified.

  9. A development of intelligent entertainment robot for home life

    NASA Astrophysics Data System (ADS)

    Kim, Cheoltaek; Lee, Ju-Jang

    2005-12-01

    The purpose of this paper is to present the study and design idea for an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life, considering dependability and interaction. The developed robot has three objectives: 1. develop an autonomous robot; 2. design the robot considering mobility and robustness; 3. develop the robot interface and software considering entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering aesthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for the remote connection service and the educational purpose.

  10. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons, and the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot in the real-time detection of objects using their color and in the processing of the robot's range and vision sensor data for navigation.

  11. SyRoTek--Distance Teaching of Mobile Robotics

    ERIC Educational Resources Information Center

    Kulich, M.; Chudoba, J.; Kosnar, K.; Krajnik, T.; Faigl, J.; Preucil, L.

    2013-01-01

    E-learning is a modern and effective approach for training in various areas and at different levels of education. This paper gives an overview of SyRoTek, an e-learning platform for mobile robotics, artificial intelligence, control engineering, and related domains. SyRoTek provides remote access to a set of fully autonomous mobile robots placed in…

  12. Autonomous mobile robot research using the HERMIES-III robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Beckerman, M.; Spelt, P.F.

    1989-01-01

    This paper reports on the status and future directions in the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III is comprised of a battery-powered, omni-directional wheeled platform with a seven-degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner and a dual computer system containing up to 128 NCUBE nodes in a hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test-bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations necessarily arising from the set-up of an experimental program involving human-scale, multi-autonomous mobile robot performance. 10 refs., 3 figs.

  13. Tracked robot controllers for climbing obstacles autonomously

    NASA Astrophysics Data System (ADS)

    Vincent, Isabelle

    2009-05-01

    Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has had very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box and stair climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.

  14. Control of autonomous robot using neural networks

    NASA Astrophysics Data System (ADS)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  15. Dual stage potential field method for robotic path planning

    NASA Astrophysics Data System (ADS)

    Singh, Pradyumna Kumar; Parida, Pramod Kumar

    2018-04-01

    Path planning for autonomous mobile robots is the root of all autonomous mobile systems. Various methods are used to optimize the path to be followed by autonomous mobile robots. Artificial potential field based path planning is one of the methods most used by researchers. Various algorithms have been proposed using the potential field approach, but common problems are encountered while heading towards the goal or target, i.e. the local minima problem, the zero potential region problem, the complex-shaped obstacle problem, and the target-near-obstacle problem. In this paper we provide a new algorithm in which two types of potential functions are used one after another: the former is used to get the probable points and the latter to obtain the optimum path. In this algorithm we consider only static obstacles and the goal.
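    To make the two-stage idea concrete, the sketch below scores candidate intermediate points with a standard attractive-plus-repulsive potential (stage one) and then descends the potential from the best candidate (stage two). The potential forms, gains, and candidate sampling are illustrative assumptions rather than the authors' two functions.

```python
# Two-stage artificial-potential-field sketch (illustrative; not the paper's
# two potential functions). Stage 1 scores candidate points, stage 2 follows
# the potential downhill from the best candidate toward the goal.
import numpy as np

def potential(p, goal, obstacles, k_att=1.0, k_rep=5.0, d0=2.0):
    u = 0.5 * k_att * np.linalg.norm(p - goal) ** 2            # attractive term
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if d < d0:                                             # repulsion only near obstacles
            u += 0.5 * k_rep * (1.0 / (d + 1e-6) - 1.0 / d0) ** 2
    return u

def plan(start, goal, obstacles, step=0.2, iters=500):
    # Stage 1: pick the best of a few sampled candidate intermediate points.
    candidates = [start + np.random.uniform(-1, 1, 2) for _ in range(20)]
    p = min(candidates, key=lambda c: potential(c, goal, obstacles))
    # Stage 2: simple four-direction descent on the potential toward the goal.
    # (A plain descent like this can still stall in local minima, which is the
    # problem the paper targets.)
    path = [p]
    for _ in range(iters):
        probes = [(potential(p + step * d, goal, obstacles), d)
                  for d in (np.array(v, dtype=float)
                            for v in [(1, 0), (-1, 0), (0, 1), (0, -1)])]
        p = p + step * min(probes, key=lambda t: t[0])[1]
        path.append(p)
        if np.linalg.norm(p - goal) < step:
            break
    return np.array(path)

obstacles = [np.array([2.0, 2.0])]
path = plan(np.array([0.0, 0.0]), np.array([4.0, 4.0]), obstacles)
```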

  16. An Analysis of Navigation Algorithms for Smartphones Using J2ME

    NASA Astrophysics Data System (ADS)

    Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.

    Embedded systems are considered one of the most potential areas for future innovations. Two embedded fields that will most certainly take a primary role in future innovations are mobile robotics and mobile computing. Mobile robots and smartphones are growing in number and functionalities, becoming a presence in our daily life. In this paper, we study the current feasibility of a smartphone to execute navigation algorithms. As a test case, we use a smartphone to control an autonomous mobile robot. We tested three navigation problems: Mapping, Localization and Path Planning. For each of these problems, an algorithm has been chosen, developed in J2ME, and tested on the field. Results show the current mobile Java capacity for executing computationally demanding algorithms and reveal the real possibility of using smartphones for autonomous navigation.

  17. Proceedings of the 1989 CESAR/CEA (Center for Engineering Systems Advanced Research/Commissariat a l'Energie Atomique) workshop on autonomous mobile robots (May 30--June 1, 1989)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.; Pin, F.G.

    1990-03-01

    The US DOE Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) and the Commissariat a l'Energie Atomique's (CEA) Office de Robotique et Productique within the Directorat a la Valorization are working toward a long-term cooperative agreement and relationship in the area of Intelligent Systems Research (ISR). This report presents the proceedings of the first CESAR/CEA Workshop on Autonomous Mobile Robots, which took place at ORNL on May 30, 31 and June 1, 1989. The purpose of the workshop was to present and discuss methodologies and algorithms under development at the two facilities in the area of perception and navigation for autonomous mobile robots in unstructured environments. Experimental demonstration of the algorithms and comparison of some of their features were proposed to take place within the framework of a previously mutually agreed-upon demonstration scenario or "base case." The base-case scenario, described in detail in Appendix A, involved autonomous navigation by the robot in an a priori unknown environment with dynamic obstacles, in order to reach a predetermined goal. From the intermediate goal location, the robot had to search for and locate a control panel, move toward it, and dock in front of the panel face. The CESAR demonstration was successfully accomplished using the HERMIES-IIB robot, while subsets of the CEA demonstration performed using the ARES robot simulation and animation system were presented. The first session of the workshop focused on these experimental demonstrations and on the needs and considerations for establishing "benchmarks" for testing autonomous robot control algorithms.

  18. Non-destructive inspection in industrial equipment using robotic mobile manipulation

    NASA Astrophysics Data System (ADS)

    Maurtua, Iñaki; Susperregi, Loreto; Ansuategui, Ander; Fernández, Ane; Ibarguren, Aitor; Molina, Jorge; Tubio, Carlos; Villasante, Cristobal; Felsch, Torsten; Pérez, Carmen; Rodriguez, Jorge R.; Ghrissi, Meftah

    2016-05-01

    The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (climbing robots). The industrial objective has been to provide a means to help measure several physical parameters at multiple points using autonomous robots able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extent (e.g. a 50 MW solar thermal plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes and a 140 m high tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, localization and planning algorithms to manage navigation over huge extensions, and (2) non-destructive inspection operations: thermography-based detection algorithms to provide automatic inspection abilities to the robots.

  19. Mobile Robot Designed with Autonomous Navigation System

    NASA Astrophysics Data System (ADS)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place ever more requirements on them. One requirement is that a robot be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the ground and automatically find its charging station; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a new type of robot navigation scheme: SLAM, which can build a map of a completely unfamiliar environment and, at the same time, locate the robot's own position, so as to achieve autonomous navigation.

  20. Path planning in GPS-denied environments via collective intelligence of distributed sensor networks

    NASA Astrophysics Data System (ADS)

    Jha, Devesh K.; Chattopadhyay, Pritthi; Sarkar, Soumik; Ray, Asok

    2016-05-01

    This paper proposes a framework for reactive goal-directed navigation without global positioning facilities in unknown dynamic environments. A mobile sensor network is used for localising regions of interest for path planning of an autonomous mobile robot. The underlying theory is an extension of a generalised gossip algorithm that has been recently developed in a language-measure-theoretic setting. The algorithm has been used to propagate local decisions of target detection over a mobile sensor network and thus, it generates a belief map for the detected target over the network. In this setting, an autonomous mobile robot may communicate only with a few mobile sensing nodes in its own neighbourhood and localise itself relative to the communicating nodes with bounded uncertainties. The robot makes use of the knowledge based on the belief of the mobile sensors to generate a sequence of way-points, leading to a possible goal. The estimated way-points are used by a sampling-based motion planning algorithm to generate feasible trajectories for the robot. The proposed concept has been validated by numerical simulation on a mobile sensor network test-bed and a Dubin's car-like robot.
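    A toy version of belief spreading over a sensor network by gossip is shown below: each node repeatedly averages its target-detection belief with a randomly chosen neighbour. This is plain randomized gossip under assumed parameters, not the language-measure-theoretic algorithm the paper extends.

```python
# Toy gossip averaging of target-detection beliefs over a sensor network,
# shown only to illustrate how local exchanges can spread a detection belief.
import random

def gossip(beliefs, neighbors, rounds=200, seed=0):
    """beliefs: dict node -> float in [0, 1]; neighbors: dict node -> list of nodes."""
    rng = random.Random(seed)
    beliefs = dict(beliefs)
    nodes = list(beliefs)
    for _ in range(rounds):
        i = rng.choice(nodes)
        if not neighbors[i]:
            continue
        j = rng.choice(neighbors[i])
        avg = 0.5 * (beliefs[i] + beliefs[j])     # pairwise averaging step
        beliefs[i] = beliefs[j] = avg
    return beliefs

# Line network where only node 0 has detected the target.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(gossip({0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}, nbrs))
```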

  1. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool-handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TM) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human-assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  2. A small, cheap, and portable reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey

    2005-05-01

    While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.

  3. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on the ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors assembled under the bottom of the robot in an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
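    A minimal sketch of the correction step described above: when a downward-facing sensor reports a grid-line crossing, snap the corresponding coordinate of the dead-reckoned pose to the nearest line. The tile pitch is an assumed value, and the paper's three-sensor triangle test for inferring the line orientation is omitted.

```python
# Grid-line localization correction sketch: snap the dead-reckoned estimate to
# the nearest grid line when a crossing is detected. The tile pitch is assumed
# and the detected line orientation is treated as a given input.
GRID_PITCH = 0.30   # assumed tile size in metres

def correct_on_crossing(x_est, y_est, line_orientation):
    """line_orientation: 'vertical' (constant x) or 'horizontal' (constant y)."""
    if line_orientation == "vertical":
        x_est = round(x_est / GRID_PITCH) * GRID_PITCH
    elif line_orientation == "horizontal":
        y_est = round(y_est / GRID_PITCH) * GRID_PITCH
    return x_est, y_est

# Dead reckoning says (0.93, 1.51); a vertical-line crossing snaps x to 0.90.
print(correct_on_crossing(0.93, 1.51, "vertical"))
```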

  4. Autonomous Navigation by a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand

    2005-01-01

    ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.

  5. Peer-to-peer model for the area coverage and cooperative control of mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Tan, Jindong; Xi, Ning

    2004-09-01

    This paper presents a novel model and distributed algorithms for the cooperation and redeployment of mobile sensor networks. A mobile sensor network is composed of a collection of wirelessly connected mobile robots equipped with a variety of sensors. In such a sensor network, each mobile node has sensing, computation, communication, and locomotion capabilities. The locomotion ability enhances the autonomous deployment of the system. The system can be rapidly deployed in hostile environments, inaccessible terrains or disaster relief operations. The mobile sensor network is essentially a cooperative multiple-robot system. This paper first presents a peer-to-peer model to define the relationship between neighboring communicating robots. Delaunay triangulation and Voronoi diagrams are used to define the geometrical relationship between sensor nodes. This distributed model allows formal analysis of the fusion of spatio-temporal sensory information of the network. Based on the distributed model, this paper discusses a fault-tolerant algorithm for autonomous self-deployment of the mobile robots. The algorithm considers the environment constraints, the presence of obstacles and the nonholonomic constraints of the robots. The distributed algorithm enables the system to reconfigure itself such that the area covered by the system can be enlarged. Simulation results have shown the effectiveness of the distributed model and deployment algorithms.
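    To illustrate the geometric neighbour relation described above, the snippet below derives each node's Delaunay neighbours (equivalently, nodes with adjacent Voronoi cells) using SciPy; the node coordinates are placeholders and the deployment and fault-tolerance logic is not shown.

```python
# Delaunay-based neighbour relation for a mobile sensor network, as one way to
# define peer-to-peer geometric neighbours. Node positions are placeholders.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """Return {node index: set of Delaunay-adjacent node indices}."""
    tri = Delaunay(points)
    nbrs = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:                 # vertex indices of each triangle
        for a in simplex:
            for b in simplex:
                if a != b:
                    nbrs[int(a)].add(int(b))
    return nbrs

nodes = np.random.rand(10, 2) * 100.0             # ten nodes in a 100 m square
print(delaunay_neighbors(nodes))
```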

  6. AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program

    NASA Astrophysics Data System (ADS)

    Gothard, Benny M.

    2002-02-01

    One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact with a complex, unstructured, dynamic environment (see "A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation"). The Mobile Autonomous Robot Software Self-Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec G4 PowerPC (PPC), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against previous Demo II Semi-autonomous Surrogate Vehicle processing architectures along with desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.

  7. Experiments in autonomous robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamel, W.R.

    1987-01-01

    The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.

  8. Autonomous mobile robotic system for supporting counterterrorist and surveillance operations

    NASA Astrophysics Data System (ADS)

    Adamczyk, Marek; Bulandra, Kazimierz; Moczulski, Wojciech

    2017-10-01

    Contemporary research on mobile robots concerns applications to counterterrorist and surveillance operations. The goal is to develop systems that are capable of supporting the police and special forces in carrying out such operations. The paper deals with a dedicated robotic system for surveillance of large facilities such as airports, factories, military bases, and many others. The goal is to trace unauthorised persons who try to enter the guarded area, document the intrusion and report it to the surveillance centre, then warn the intruder with sound messages and eventually subdue him/her by stunning with a high-power acoustic effect. The system consists of several parts. An armoured four-wheeled robot provides the required mobility of the system. The robot is equipped with a set of sensors including a 3D mapping system, IR and video cameras, and microphones. It communicates with the central control station (CCS) by means of a wideband wireless encrypted system. The control system of the robot can operate autonomously and under remote control. In the autonomous mode the robot follows the path planned by the CCS. Once an intruder has been detected, the robot can adapt its plan to allow tracking him/her. Furthermore, special procedures for treatment of the intruder are applied, including a warning about the breach of the border of the protected area and incapacitation with an appropriately selected, very loud sound until a patrol of guards arrives. If it gets stuck, the robot can contact the operator, who can remotely solve the problem the robot is faced with.

  9. Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about its center of mass); its interactions with other robots, obstacles, and the domain boundary are described in terms of the classical many-body problem; and a navigation strategy is derived by combining homing with robot-robot and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.

  10. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of, and relevant to, real-world applications.

  11. Intelligent behavior generator for autonomous mobile robots using planning-based AI decision making and supervisory control logic

    NASA Astrophysics Data System (ADS)

    Shah, Hitesh K.; Bahl, Vikas; Martin, Jason; Flann, Nicholas S.; Moore, Kevin L.

    2002-07-01

    In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) has been funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One of several outgrowths of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands that are to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (the A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include resequencing the pending tasks or initiating a re-planning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
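
    The planning function above is described only as a graph-based heuristic search; as a rough, hedged illustration of that idea, the sketch below (in Python) runs a generic A* search over a small invented grid map to produce an ordered sequence of waypoints. The grid, cost model, and helper names are assumptions for illustration and are not taken from the ODIS/STC implementation.

        # Minimal A* sketch over a hypothetical 2D grid (0 = free, 1 = blocked).
        # Illustrative only: the STC planner is described in the record only as a
        # graph-based heuristic search, so the map and cost model are assumptions.
        import heapq

        GRID = [
            [0, 0, 0, 1, 0],
            [1, 1, 0, 1, 0],
            [0, 0, 0, 0, 0],
        ]

        def neighbors(cell):
            r, c = cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                    yield (nr, nc)

        def heuristic(a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance

        def a_star(start, goal):
            frontier = [(heuristic(start, goal), 0, start, [start])]
            best_cost = {}
            while frontier:
                f, g, cell, path = heapq.heappop(frontier)
                if cell == goal:
                    return path                       # ordered sequence of waypoints
                if cell in best_cost and best_cost[cell] <= g:
                    continue
                best_cost[cell] = g
                for nxt in neighbors(cell):
                    heapq.heappush(frontier,
                                   (g + 1 + heuristic(nxt, goal), g + 1, nxt, path + [nxt]))
            return None                               # no path found

        print(a_star((0, 0), (2, 4)))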

  12. Autonomous stair-climbing with miniature jumping robots.

    PubMed

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  13. On-Line Point Positioning with Single Frame Camera Data

    DTIC Science & Technology

    1992-03-15

    tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data The project has been defined as "On-line point...development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation...Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile

  14. Maintaining Limited-Range Connectivity Among Second-Order Agents

    DTIC Science & Technology

    2016-07-07

    we consider ad-hoc networks of robotic agents with double integrator dynamics. For such networks, the connectivity maintenance problems are: (i) do...hoc networks of mobile autonomous agents. This loose terminology refers to groups of robotic agents with limited mobility and communication...connectivity can be preserved. 3.1. Networks of robotic agents with second-order dynamics and the connectivity maintenance problem. We begin by

  15. Context recognition and situation assessment in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Yavnai, Arie

    1993-05-01

    The capability to recognize the operating context and to assess the situation in real time is needed if a high-functionality autonomous mobile robot is to react properly and effectively to continuously changing situations and events, either external or internal, while the robot is performing its assigned tasks. A new approach and architecture for a context recognition and situation assessment module (CORSA) is presented in this paper. CORSA is a multi-level information processing module which consists of adaptive decision and classification algorithms. It performs dynamic mapping from the data space to the context space, and dynamically decides on the context class. A learning mechanism is employed to update the decision variables so as to minimize the probability of misclassification. CORSA is embedded within the Mission Manager module of the intelligent autonomous hyper-controller (IAHC) of the mobile robot. The information regarding operating context, events and situation is then communicated to other modules of the IAHC, where it is used to: (a) select the appropriate action strategy; (b) support the processes of arbitration and conflict resolution between reflexive behaviors and reasoning-driven behaviors; (c) predict future events and situations; and (d) determine criteria and priorities for planning, replanning, and decision making.

  16. Application of autonomous robotized systems for the collection of nearshore topographic changing and hydrodynamic measurements

    NASA Astrophysics Data System (ADS)

    Belyakov, Vladimir; Makarov, Vladimir; Zezyulin, Denis; Kurkin, Andrey; Pelinovsky, Efim

    2015-04-01

    Hazardous phenomena in the coastal zone lead to topographic changes that are difficult to inspect by traditional methods. This is why autonomous robots are used for the collection of nearshore topographic and hydrodynamic measurements. The robot RTS-Hanna is well known (Wubbold, F., Hentschel, M., Vousdoukas, M., and Wagner, B. Application of an autonomous robot for the collection of nearshore topographic and hydrodynamic measurements. Coastal Engineering Proceedings, 2012, vol. 33, paper 53). We describe here several mobile systems developed in the Laboratory of Transported Machines and Transported Complexes, Nizhny Novgorod State Technical University. They can be used in field surveys and in monitoring of nearshore wave regimes.

  17. An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.

    DTIC Science & Technology

    1997-09-01

    The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles . The approach is to... autonomous vehicles . Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at

  18. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System

    PubMed Central

    Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan

    2017-01-01

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964
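
    The record above states that the Kinect pose estimates and wheel-encoder odometry are fused with an extended Kalman filter but gives no equations; the Python sketch below shows one conventional form such a planar-pose EKF could take, assuming odometry drives the prediction step and the visual sensor supplies an absolute pose measurement. The motion model, noise covariances, and sample numbers are assumptions, not values from the paper.

        # Hedged EKF sketch: wheel odometry in the prediction step, an absolute
        # pose measurement (e.g. from a visual sensor) in the correction step.
        # State x = [px, py, theta]; all covariances here are invented.
        import numpy as np

        def ekf_predict(x, P, v, w, dt, Q):
            """Unicycle motion model driven by odometry (v, w)."""
            px, py, th = x
            x_pred = np.array([px + v * dt * np.cos(th),
                               py + v * dt * np.sin(th),
                               th + w * dt])
            F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                          [0.0, 1.0,  v * dt * np.cos(th)],
                          [0.0, 0.0,  1.0]])
            return x_pred, F @ P @ F.T + Q

        def ekf_update(x, P, z, R):
            """Correction with a direct pose measurement z = [px, py, theta]."""
            H = np.eye(3)
            y = z - H @ x                       # innovation (angle wrapping omitted)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            return x + K @ y, (np.eye(3) - K @ H) @ P

        x, P = np.zeros(3), np.eye(3) * 0.1
        Q = np.diag([0.01, 0.01, 0.005])        # assumed process noise
        R = np.diag([0.05, 0.05, 0.02])         # assumed measurement noise
        x, P = ekf_predict(x, P, v=0.3, w=0.1, dt=0.1, Q=Q)            # odometry step
        x, P = ekf_update(x, P, z=np.array([0.03, 0.0, 0.01]), R=R)    # visual fix
        print(x)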

  19. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    PubMed

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.

  20. Millimeter-scale MEMS enabled autonomous systems: system feasibility and mobility

    NASA Astrophysics Data System (ADS)

    Pulskamp, Jeffrey S.

    2012-06-01

    Millimeter-scale robotic systems based on highly integrated microelectronics and micro-electromechanical systems (MEMS) could offer unique benefits and attributes for small-scale autonomous systems. This extreme scale for robotics will naturally constrain the realizable system capabilities significantly. This paper assesses the feasibility of developing such systems by defining the fundamental design trade spaces between component design variables and system level performance parameters. This permits the development of mobility enabling component technologies within a system relevant context. Feasible ranges of system mass, required aerodynamic power, available battery power, load supported power, flight endurance, and required leg load bearing capability are presented for millimeter-scale platforms. The analysis illustrates the feasibility of developing both flight capable and ground mobile millimeter-scale autonomous systems while highlighting the significant challenges that must be overcome to realize their potential.

  1. Teaching and implementing autonomous robotic lab walkthroughs in a biotech laboratory through model-based visual tracking

    NASA Astrophysics Data System (ADS)

    Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois

    2010-01-01

    After more than 30 years of robots being utilized for classic industrial automation applications, service robots now form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform, extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products located in the San Francisco bay area. The determined goal of the mobile manipulator is to support the off-shift staff by carrying out completely autonomous or guided, remote-controlled lab walkthroughs, which we implement utilizing a recent development of our computer vision group: OpenTL - an integrated framework for model-based visual tracking.

  2. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.
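
    The record does not give the Bearcat's guidance law; one minimal reading of blob-based line following is a proportional steering correction computed from the lateral offset of the tracked blob centroids, sketched below in Python. The image width, gain, and centroid list are invented for illustration.

        # Hedged sketch: proportional steering from blob centroids along a lane line.
        # The resolution, gain, and sample centroids are assumptions, not the
        # Bearcat's actual guidance parameters.
        IMAGE_WIDTH = 640          # pixels (assumed camera resolution)
        KP_STEER = 0.005           # steering radians per pixel of error (assumed gain)

        def steering_command(blob_centroids):
            """blob_centroids: list of (x, y) pixel coordinates of lane-marker blobs."""
            if not blob_centroids:
                return 0.0                                # no line seen: hold heading
            mean_x = sum(x for x, _ in blob_centroids) / len(blob_centroids)
            lateral_error = mean_x - IMAGE_WIDTH / 2.0    # + means line is to the right
            return KP_STEER * lateral_error               # steering angle in radians

        print(steering_command([(300, 400), (310, 350), (320, 300)]))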

  3. Towards Autonomous Inspection of Space Systems Using Mobile Robotic Sensor Platforms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Saad, Ashraf; Litt, Jonathan S.

    2007-01-01

    The space transportation systems required to support NASA's Exploration Initiative will demand a high degree of reliability to ensure mission success. This reliability can be realized through autonomous fault/damage detection and repair capabilities. It is crucial that such capabilities are incorporated into these systems since it will be impractical to rely upon Extra-Vehicular Activity (EVA), visual inspection or tele-operation due to the costly, labor-intensive and time-consuming nature of these methods. One approach to achieving this capability is through the use of an autonomous inspection system comprised of miniature mobile sensor platforms that will cooperatively perform high confidence inspection of space vehicles and habitats. This paper will discuss the efforts to develop a small scale demonstration test-bed to investigate the feasibility of using autonomous mobile sensor platforms to perform inspection operations. Progress will be discussed in technology areas including: the hardware implementation and demonstration of robotic sensor platforms, the implementation of a hardware test-bed facility, and the investigation of collaborative control algorithms.

  4. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    NASA Astrophysics Data System (ADS)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  5. Adding navigation, artificial audition and vital sign monitoring capabilities to a telepresence mobile robot for remote home care applications.

    PubMed

    Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois

    2017-07-01

    A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What is missing for making them effective remote telepresence systems for home care assistance are capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors or, in other words, minimizing cognitive load and maximizing situation awareness. This paper describes our approach adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.

  6. An architecture for an autonomous learning robot

    NASA Technical Reports Server (NTRS)

    Tillotson, Brian

    1988-01-01

    An autonomous learning device must solve the example bounding problem, i.e., it must divide the continuous universe into discrete examples from which to learn. We describe an architecture which incorporates an example bounder for learning. The architecture is implemented in the GPAL program. An example run with a real mobile robot shows that the program learns and uses new causal, qualitative, and quantitative relationships.

  7. Automatic detection and classification of obstacles with applications in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.

    2016-04-01

    Hardware implementation of the automatic detection and classification of objects that can represent an obstacle for an autonomous mobile robot using stereo vision algorithms is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. This method is divided into two parts: the first is the object detection step, based on the distance from the objects to the camera and a BLOB analysis. The second part is the classification step, which is based on visual primitives and an SVM classifier. The proposed method is executed on a GPU in order to reduce the processing time. This is performed with the help of hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and Matlab on a PC with Windows 10.
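
    The two-stage structure described above (distance- and BLOB-based detection followed by SVM classification of visual primitives) can be sketched as follows in Python; the feature vectors, distance threshold, and training data are placeholders, and the paper's actual primitives, stereo pipeline, and GPU implementation are not reproduced.

        # Hedged two-stage sketch: (1) keep only blobs closer than a distance
        # threshold, (2) classify each candidate with an SVM on simple features.
        import numpy as np
        from sklearn.svm import SVC

        DIST_THRESHOLD_M = 5.0     # assumed: ignore blobs farther than 5 m

        def detect_candidates(blobs):
            """blobs: list of dicts with 'distance_m' (from stereo) and 'features'."""
            return [b for b in blobs if b["distance_m"] < DIST_THRESHOLD_M]

        # Toy training set: rows are invented feature vectors (e.g. area, aspect
        # ratio, mean intensity); labels 1 = obstacle, 0 = not an obstacle.
        X_train = np.array([[0.8, 1.2, 0.3], [0.2, 3.0, 0.9],
                            [0.9, 1.0, 0.4], [0.1, 2.8, 0.8]])
        y_train = np.array([1, 0, 1, 0])
        clf = SVC(kernel="rbf").fit(X_train, y_train)

        blobs = [{"distance_m": 3.2, "features": [0.85, 1.1, 0.35]},
                 {"distance_m": 9.0, "features": [0.20, 2.90, 0.90]}]
        for b in detect_candidates(blobs):
            label = clf.predict([b["features"]])[0]
            print(b["distance_m"], "->", "obstacle" if label == 1 else "not an obstacle")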

  8. A design strategy for autonomous systems

    NASA Technical Reports Server (NTRS)

    Forster, Pete

    1989-01-01

    Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified, namely handling multiple and variable data sources containing overlapping information and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is drawn from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.

  9. Autonomous Mobile Platform for Research in Cooperative Robotics

    NASA Technical Reports Server (NTRS)

    Daemi, Ali; Pena, Edward; Ferguson, Paul

    1998-01-01

    This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
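
    The hierarchical fuzzy-rule-based behavior mentioned above is not detailed in the record; as a loose illustration of the general idea, the Python sketch below evaluates two hand-written fuzzy rules that blend a turn command from an IR distance reading. The membership functions, rule consequents, and weighted-average defuzzification are assumptions for illustration only.

        # Hedged sketch of a tiny fuzzy rule base:
        #   IF obstacle is NEAR THEN turn HARD
        #   IF obstacle is FAR  THEN turn SLIGHTLY
        # Membership functions and consequent values are invented.

        def mu_near(d_cm):      # membership of distance d in the "near" set
            return max(0.0, min(1.0, (40.0 - d_cm) / 30.0))

        def mu_far(d_cm):       # membership of distance d in the "far" set
            return max(0.0, min(1.0, (d_cm - 20.0) / 40.0))

        def turn_command(d_cm):
            w_near, w_far = mu_near(d_cm), mu_far(d_cm)
            hard_turn, slight_turn = 0.8, 0.1          # rule consequents (rad/s)
            if w_near + w_far == 0.0:
                return 0.0
            # Weighted-average defuzzification of the two rule outputs.
            return (w_near * hard_turn + w_far * slight_turn) / (w_near + w_far)

        for d in (15, 30, 70):
            print(d, "cm ->", round(turn_command(d), 3), "rad/s")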

  10. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.

    PubMed

    Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning

    2018-03-16

    A snake robot is a type of highly redundant mobile robot that significantly differs from a tracked robot, wheeled robot and legged robot. To address the issue of a snake robot performing self-localization in the application environment without assistant orientation, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realized the autonomous navigation of the snake robot with non-nodes and an external assistant using its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and makes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, the test verifies the proposed method, and the position error is less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot in order to perform autonomous navigation and positioning in traditional applications and can be extended to other familiar multi-link robots.
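
    The zero-state (motion-constraint) updates described above are given in the record only in words; the Python sketch below shows the generic form of such a pseudo-measurement update in an EKF, here pulling lateral and vertical body-frame velocity toward zero during a constrained gait phase. The state layout, matrices, and noise values are assumptions, not the paper's filter design.

        # Hedged sketch of a motion-constraint pseudo-measurement in an EKF.
        # State x = [vx, vy, vz] (body-frame velocity); the constraint "vy and vz
        # are zero in this gait phase" is applied as a measurement z = 0.
        import numpy as np

        def constraint_update(x, P, R_constraint):
            H = np.array([[0.0, 1.0, 0.0],      # observe vy
                          [0.0, 0.0, 1.0]])     # observe vz
            z = np.zeros(2)                     # pseudo-measurement: both zero
            y = z - H @ x
            S = H @ P @ H.T + R_constraint
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(3) - K @ H) @ P

        x = np.array([0.20, 0.03, -0.01])       # drifting velocity estimate
        P = np.eye(3) * 0.05
        x, P = constraint_update(x, P, np.eye(2) * 1e-4)
        print(x)                                # vy and vz pulled back toward zero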

  11. Sandia National Laboratories proof-of-concept robotic security vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, J.J.; Jones, D.P.; Klarer, P.R.

    1989-01-01

    Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.

  12. Interaction dynamics of multiple mobile robots with simple navigation strategies

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
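
    The strategies above are derived analytically from a particle model; as an informal illustration of the same flavor of rule, the Python sketch below combines an attractive homing term with a repulsive term that is active only when the other robot falls inside the visibility cone. The gains, cone half-angle, and kinematics are invented and do not reproduce the paper's derivation.

        # Hedged sketch: homing velocity plus a repulsion applied only when the
        # other robot lies inside this robot's visibility cone.  Gains and the
        # cone half-angle are assumptions.
        import numpy as np

        K_HOME, K_AVOID = 0.5, 1.5
        CONE_HALF_ANGLE = np.radians(60.0)

        def in_visibility_cone(pos, heading, other):
            to_other = other - pos
            cos_angle = np.dot(to_other, heading) / (
                np.linalg.norm(to_other) * np.linalg.norm(heading))
            return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= CONE_HALF_ANGLE

        def velocity_command(pos, heading, goal, other):
            v = K_HOME * (goal - pos)                        # attractive homing term
            if in_visibility_cone(pos, heading, other):
                d = pos - other
                v += K_AVOID * d / (np.linalg.norm(d) ** 2)  # repulsion decays with distance
            return v

        pos, heading = np.array([0.0, 0.0]), np.array([1.0, 0.0])
        print(velocity_command(pos, heading,
                               goal=np.array([5.0, 0.0]), other=np.array([2.0, 0.5])))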

  13. Introduction to Autonomous Mobile Robotics Using "Lego Mindstorms" NXT

    ERIC Educational Resources Information Center

    Akin, H. Levent; Meriçli, Çetin; Meriçli, Tekin

    2013-01-01

    Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the…

  14. Technologies for Human Exploration

    NASA Technical Reports Server (NTRS)

    Drake, Bret G.

    2014-01-01

    Access to Space, Chemical Propulsion, Advanced Propulsion, In-Situ Resource Utilization, Entry, Descent, Landing and Ascent, Humans and Robots Working Together, Autonomous Operations, In-Flight Maintenance, Exploration Mobility, Power Generation, Life Support, Space Suits, Microgravity Countermeasures, Autonomous Medicine, Environmental Control.

  15. Intelligent Adaptive Systems: Literature Research of Design Guidance for Intelligent Adaptive Automation and Interfaces

    DTIC Science & Technology

    2007-09-01

    behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that...Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots...based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic

  16. Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty

    NASA Astrophysics Data System (ADS)

    Armah, Stephen Kofi

    Autonomous control of mobile robots has attracted considerable attention from researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is the development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust and/or adaptive navigation control systems. In spite of the enormous amount of reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research work develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for the X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm, and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD-feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output. Secondly, the white-box and black-box approaches are used to obtain linearized second-order altitude models for the quadrotor, the AR.Drone 2.0. Proportional (P), pole placement or proportional plus velocity (PV), linear quadratic regulator (LQR), and model reference adaptive control (MRAC) controllers are designed and validated through simulations using MATLAB/Simulink. Control input saturation and time delay in the controlled systems are also studied. MATLAB graphical user interface (GUI) and Simulink programs are developed to implement the controllers on the drone. Thirdly, the time delay in the drone's control system is estimated using analytical and experimental methods. In the experimental approach, the transient properties of the experimental altitude responses are compared to those of simulated responses. The analytical approach makes use of the Lambert W function to obtain analytical solutions of scalar first-order delay differential equations (DDEs). A time-delayed P-feedback control system (retarded type) is used in estimating the time delay. An improved system performance is then obtained by incorporating the estimated time delay in the design of the PV control system (neutral type) and the PV-MRAC control system. Furthermore, the stability of a parametrically perturbed linear time-invariant (LTI) retarded-type system is studied. This is done by analytically calculating the stability radius of the system. Simulation of the control system is conducted to confirm the stability. This robust control design and uncertainty analysis are conducted for first-order and second-order quadrotor models.
Lastly, the robustly designed PV and PV-MRAC control systems are used to autonomously track multiple waypoints. Also, the robustness of the PV-MRAC controller is tested against a baseline PV controller using the payload capability of the drone. It is shown that the PV-MRAC offers several benefits over the fixed-gain approach of the PV controller. The adaptive control is found to offer enhanced robustness to the payload fluctuations.
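
    The Lambert W approach mentioned in this abstract gives closed-form characteristic roots for a scalar first-order DDE x'(t) = a x(t) + b x(t - h): each root satisfies s = a + W_k(b h e^{-a h})/h, and for such scalar systems the principal branch (k = 0) is the standard choice for the dominant (rightmost) root. The Python sketch below evaluates that expression with SciPy and checks stability from its real part; the parameter values are placeholders, not the drone's identified model.

        # Hedged sketch: dominant characteristic root of x'(t) = a*x(t) + b*x(t-h)
        # via the Lambert W function, s_k = a + W_k(b*h*exp(-a*h))/h.
        # Parameter values are assumptions for illustration.
        import numpy as np
        from scipy.special import lambertw

        def characteristic_root(a, b, h, branch=0):
            return a + lambertw(b * h * np.exp(-a * h), branch) / h

        a, b, h = 0.0, -1.0, 0.3           # assumed plant gain, feedback gain, delay (s)
        s0 = characteristic_root(a, b, h)  # principal-branch root
        print("dominant root:", s0)
        print("stable" if s0.real < 0 else "unstable")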

  17. The Embudito Mission: A Case Study of the Systematics of Autonomous Ground Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    EICKER,PATRICK J.

    2001-02-01

    Ground mobile robots are much in the mind of defense planners at this time, being considered for a significant variety of missions with a diversity ranging from logistics supply to reconnaissance and surveillance. While there has been a very large amount of basic research funded in the last quarter century devoted to mobile robots and their supporting component technologies, little of this science base has been fully developed and deployed--notable exceptions being NASA's Mars rover and several terrestrial derivatives. The material in this paper was developed as a first exemplary step in the development of a more systematic approach to the R and D of ground mobile robots.

  18. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and about how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function which we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  19. TARDEC Overview: Ground Vehicle Power and Mobility

    DTIC Science & Technology

    2011-02-04

    Fuel & Water Distribution • Force Sustainment • Construction Equipment • Bridging • Assured Mobility Systems Robotics • TALON • PackBot • MARCbot...Equipment • Mechanical Countermine Equipment • Tactical Bridging Intelligent Ground Systems • Autonomous Robotics Systems • Safe Operations...Test Cell • Hybrid Electric Reconfigurable Moveable Integration Testbed (HERMIT) • Electro-chemical Analysis and Research Lab (EARL) • Battery Lab • Air

  20. Welding torch trajectory generation for hull joining using autonomous welding mobile robot

    NASA Astrophysics Data System (ADS)

    Hascoet, J. Y.; Hamilton, K.; Carabin, G.; Rauch, M.; Alonso, M.; Ares, E.

    2012-04-01

    Shipbuilding processes involve highly dangerous manual welding operations. Welding of ship hulls presents a hazardous environment for workers. This paper describes a new robotic system, developed by the SHIPWELD consortium, that moves autonomously on the hull and automatically executes the required welding processes. Specific focus is placed on the trajectory control of such a system, which forms the basis for the discussion in this paper. It includes a description of the robotic hardware design as well as some of the methodology used to establish the torch trajectory control.

  1. Research and development at ORNL/CESAR towards cooperating robotic systems for hazardous environments

    NASA Technical Reports Server (NTRS)

    Mann, R. C.; Fujimura, K.; Unseren, M. A.

    1992-01-01

    One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots, and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.; Fujimura, K.; Unseren, M.A.

    One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots, and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace. 15 refs., 3 figs.

  3. The Challenge of Planning and Execution for Spacecraft Mobile Robots

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Gawdiak, Yuri; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The need for spacecraft mobile robots continues to grow. These robots offer the potential to increase the capability, productivity, and duration of space missions while decreasing mission risk and cost. Spacecraft Mobile Robots (SMRs) can serve a number of functions inside and outside of spacecraft from simpler tasks, such as performing visual diagnostics and crew support, to more complex tasks, such as performing maintenance and in-situ construction. One of the predominant challenges to deploying SMRs is to reduce the need for direct operator interaction. Teleoperation is often not practical due to the communication latencies incurred because of the distances involved and in many cases a crewmember would directly perform a task rather than teleoperate a robot to do it. By integrating a mixed-initiative constraint-based planner with an executive that supports adjustably autonomous control, we intend to demonstrate the feasibility of autonomous SMRs by deploying one inside the International Space Station (ISS) and demonstrate in simulation one that operates outside of the ISS. This paper discusses the progress made at NASA towards this end, the challenges ahead, and concludes with an invitation to the research community to participate.

  4. Mapping planetary caves with an autonomous, heterogeneous robot team

    NASA Astrophysics Data System (ADS)

    Husain, Ammar; Jones, Heather; Kannan, Balajee; Wong, Uland; Pimentel, Tiago; Tang, Sarah; Daftry, Shreyansh; Huber, Steven; Whittaker, William L.

    Caves on other planetary bodies offer sheltered habitat for future human explorers and numerous clues to a planet's past for scientists. While recent orbital imagery provides exciting new details about cave entrances on the Moon and Mars, the interiors of these caves are still unknown and not observable from orbit. Multi-robot teams offer unique solutions for exploration and modeling subsurface voids during precursor missions. Robot teams that are diverse in terms of size, mobility, sensing, and capability can provide great advantages, but this diversity, coupled with inherently distinct low-level behavior architectures, makes coordination a challenge. This paper presents a framework that consists of an autonomous frontier and capability-based task generator, a distributed market-based strategy for coordinating and allocating tasks to the different team members, and a communication paradigm for seamless interaction between the different robots in the system. Robots have different sensors, (in the representative robot team used for testing: 2D mapping sensors, 3D modeling sensors, or no exteroceptive sensors), and varying levels of mobility. Tasks are generated to explore, model, and take science samples. Based on an individual robot's capability and associated cost for executing a generated task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high resolution offline modeling. The coordination approach has been field tested at a mock cave site with highly-unstructured natural terrain, as well as an outdoor patio area. Initial results are promising for applicability of the proposed multi-robot framework to exploration and modeling of planetary caves.
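
    The distributed market-based allocation above is described only at a high level; the greedy single-task auction in the Python sketch below captures the basic idea of awarding each generated task to the capable robot with the lowest bid. The robots, capability sets, tasks, and distance-based cost model are invented for illustration and do not reproduce the paper's protocol.

        # Hedged sketch of a greedy market-based allocation: each task is auctioned
        # and awarded to the capable robot with the lowest bid (here, travel
        # distance).  All robots, capabilities, and tasks are invented.
        import math

        robots = {
            "mapper2d":  {"pos": (0, 0),  "capabilities": {"explore", "map2d"}},
            "modeler3d": {"pos": (10, 2), "capabilities": {"model3d"}},
            "sampler":   {"pos": (4, 8),  "capabilities": {"explore", "sample"}},
        }
        tasks = [("explore", (3, 1)), ("model3d", (9, 4)), ("sample", (5, 7))]

        def bid(robot, task_type, task_pos):
            if task_type not in robot["capabilities"]:
                return math.inf                       # robot cannot do this task
            return math.dist(robot["pos"], task_pos)  # bid = distance to task site

        for task_type, task_pos in tasks:
            winner = min(robots, key=lambda name: bid(robots[name], task_type, task_pos))
            print(task_type, "at", task_pos, "->", winner)
            robots[winner]["pos"] = task_pos          # winner moves to the task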

  5. Development and Control of Multi-Degree-of-Freedom Mobile Robot for Acquisition of Road Environmental Modes

    NASA Astrophysics Data System (ADS)

    Murata, Naoya; Katsura, Seiichiro

    Acquisition of information about the environment around a mobile robot is important for purposes such as controlling the robot from a remote location and in situations such as when the robot is running autonomously. In many studies, audiovisual information is used. However, the acquisition of force-sensation information, which is also part of the environmental information, has not been well researched. The mobile-hapto, a remote control system with force information, has been proposed, but the robot used for that system can acquire only the horizontal component of forces. For this reason, in this research, a three-wheeled mobile robot consisting of seven actuators was developed and its control system was constructed. It can obtain information on horizontal and vertical forces without using force sensors. By using this robot, detailed information on the forces in the environment can be acquired, and the operability of the robot and its capability to adjust to the environment are expected to improve.

  6. Integrated mobile robot control

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Thorpe, Charles

    1991-01-01

    This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.

  7. Integrated mobile robot control

    NASA Astrophysics Data System (ADS)

    Amidi, Omead; Thorpe, Chuck E.

    1991-03-01

    This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.

  8. Evolution of a radio communication relay system

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Pezeshkian, Narek; Hart, Abraham; Burmeister, Aaron; Holz, Kevin; Neff, Joseph; Roth, Leif

    2013-05-01

    Providing long-distance non-line-of-sight control for unmanned ground robots has long been recognized as a problem, considering the nature of the required high-bandwidth radio links. In the early 2000s, the DARPA Mobile Autonomous Robot Software (MARS) program funded the Space and Naval Warfare Systems Center (SSC) Pacific to demonstrate a capability for autonomous mobile communication relaying on a number of Pioneer laboratory robots. This effort also resulted in the development of ad hoc networking radios and software that were later leveraged in the development of a more practical and logistically simpler system, the Automatically Deployed Communication Relays (ADCR). Funded by the Joint Ground Robotics Enterprise and internally by SSC Pacific, several generations of ADCR systems introduced increasingly more capable hardware and software for automatic maintenance of communication links through deployment of static relay nodes from mobile robots. This capability was finally tapped in 2010 to fulfill an urgent need from theater. 243 kits of ruggedized, robot-deployable communication relays were produced and sent to Afghanistan to extend the range of EOD and tactical ground robots in 2012. This paper provides a summary of the evolution of the radio relay technology at SSC Pacific, and then focuses on the latest two stages, the Manually-Deployed Communication Relays and the latest effort to automate the deployment of these ruggedized and fielded relay nodes.

  9. Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)

    NASA Technical Reports Server (NTRS)

    Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey

    1990-01-01

    Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge-base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules necessary to control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.

  10. Working and Learning with Knowledge in the Lobes of a Humanoid's Mind

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David

    2003-01-01

    Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a NASA and the Defense Advanced research Projects Agency (DARPA) joint project, and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task level sequences, and sensorimotor association).

  11. Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANCINI,THOMAS R.

    2001-04-01

    Several years ago Sandia National Laboratories developed a prototype interior robot [1] that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities.

  12. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    NASA Astrophysics Data System (ADS)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.

  13. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    PubMed Central

    Dou, Lihua; Su, Zhong; Liu, Ning

    2018-01-01

    A snake robot is a type of highly redundant mobile robot that significantly differs from a tracked robot, wheeled robot and legged robot. To address the issue of a snake robot performing self-localization in the application environment without assistant orientation, an autonomous navigation method is proposed based on the snake robot’s motion characteristic constraints. The method realized the autonomous navigation of the snake robot with non-nodes and an external assistant using its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot’s motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot’s navigation layout, proposes a constraint criterion and the fixed relationship, and makes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, the test verifies the proposed method, and the position error is less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot in order to perform autonomous navigation and positioning in traditional applications and can be extended to other familiar multi-link robots. PMID:29547515

  14. Behavior generation strategy of artificial behavioral system by self-learning paradigm for autonomous robot tasks

    NASA Astrophysics Data System (ADS)

    Dağlarli, Evren; Temeltaş, Hakan

    2008-04-01

    In this study, behavior generation and self-learning paradigms are investigated for real-time applications of multi-goal mobile robot tasks. The method is capable of generating new behaviors and combining them in order to achieve multi-goal tasks. The proposed method is composed of three layers: a Behavior Generating Module, a Coordination Level, and an Emotion-Motivation Level. The last two levels use Hidden Markov Models to manage the dynamical structure of behaviors. The kinematic and dynamic models of the mobile robot with non-holonomic constraints are considered in the behavior-based control architecture. The proposed method is tested on a four-wheel-driven and four-wheel-steered mobile robot with constraints in a simulation environment, and results are obtained successfully.

  15. Robust Planning for Autonomous Navigation of Mobile Robots in Unstructured, Dynamic Environments: An LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    EISLER, G. RICHARD

    This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled "Robust Planning for Autonomous Navigation of Mobile Robots in Unstructured, Dynamic Environments (AutoNav)". The project goal was to develop an algorithmic-driven, multi-spectral approach to point-to-point navigation characterized by: segmented on-board trajectory planning, self-contained operation without human support for mission duration, and the development of appropriate sensors and algorithms to navigate unattended. The project was partially successful in achieving gains in sensing, path planning, navigation, and guidance. One of three experimental platforms, the Minimalist Autonomous Testbed, used a repetitive sense-and-re-plan combination to demonstrate the majority of elements necessary for autonomous navigation. However, a critical goal for overall success in arbitrary terrain, that of developing a sensor that is able to distinguish true obstacles that need to be avoided as a function of vehicle scale, still needs substantial research to bring to fruition.

  16. Autonomous robotic platforms for locating radio sources buried under rubble

    NASA Astrophysics Data System (ADS)

    Tasu, A. S.; Anchidin, L.; Tamas, R.; Paun, M.; Danisor, A.; Petrescu, T.

    2016-12-01

    This paper deals with the use of autonomous robotic platforms able to locate radio signal sources such as mobile phones, buried under collapsed buildings as a result of earthquakes, natural disasters, terrorism, war, etc. This technique relies on averaging position data resulting from a propagation model implemented on the platform and the data acquired by robotic platforms at the disaster site. That allows us to calculate the approximate position of radio sources buried under the rubble. Based on measurements, a radio map of the disaster site is made, very useful for locating victims and for guiding specific rubble lifting machinery, by assuming that there is a victim next to a mobile device detected by the robotic platform; by knowing the approximate position, the lifting machinery does not risk to further hurt the victims. Moreover, by knowing the positions of the victims, the reaction time is decreased, and the chances of survival for the victims buried under the rubble, are obviously increased.
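
    The abstract describes combining a propagation model with field measurements gathered by the robotic platforms and averaging the results to approximate a source position. The following sketch is a hedged illustration of one common way to do something similar: a log-distance path-loss model converts RSSI readings into ranges, and the measurement points are then averaged with inverse-range weights. The model parameters and function names are assumptions, not the authors' implementation.

```python
# Hedged illustration: approximate a buried transmitter's position from RSSI
# readings taken by a robot at several known points, using a log-distance
# path-loss model.  P0 and N_EXP are assumed values, not from the paper.

P0 = -40.0   # RSSI at a 1 m reference distance [dBm] (assumption)
N_EXP = 2.7  # path-loss exponent for cluttered, rubble-like media (assumption)

def rssi_to_range(rssi_dbm):
    """Invert the log-distance model: RSSI = P0 - 10*n*log10(d)."""
    return 10 ** ((P0 - rssi_dbm) / (10 * N_EXP))

def estimate_position(readings):
    """
    readings: list of ((x, y), rssi) pairs collected by the platform.
    Returns a weighted centroid of the measurement points, weighting each
    point by the inverse of the range implied by its RSSI (closer = heavier).
    """
    wsum = xsum = ysum = 0.0
    for (x, y), rssi in readings:
        w = 1.0 / max(rssi_to_range(rssi), 1e-3)
        wsum += w
        xsum += w * x
        ysum += w * y
    return xsum / wsum, ysum / wsum

# example: three measurement points around a source located near (2, 1)
readings = [((0, 0), -52.0), ((4, 0), -50.0), ((2, 4), -55.0)]
print(estimate_position(readings))
```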

  17. Optimal sensor fusion for land vehicle navigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, J.D.

    1990-10-01

    Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straight-forward method for autonomous path-following has been adopted and requires "digitizing" the desired road network by storing x,y coordinates every 2 m along the roads. The position location system used to define the locations consists of a radio beacon system which triangulates position off two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
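
    The "optimal" algorithm described above treats the beacon fix and the dead-reckoning fix as random variables and minimizes the variance of the combined estimate. A standard way to do that is an inverse-variance weighted average, sketched below per coordinate; the numeric variances are illustrative assumptions, not values from the report.

```python
# Hedged sketch of the variance-minimizing fusion idea described above:
# combine a beacon-triangulation fix and a dead-reckoning fix by weighting
# each with the inverse of its variance.  The numbers are illustrative.

def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted average of two scalar estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# x-coordinate example: the beacon fix is noisy but unbiased, dead reckoning
# drifts but is smooth; each coordinate can be fused independently.
beacon_x, beacon_var = 103.2, 4.0      # metres, metres^2
deadreck_x, dr_var = 101.7, 1.0
print(fuse(beacon_x, beacon_var, deadreck_x, dr_var))
```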

  18. The magic glove: a gesture-based remote controller for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark

    2012-01-01

    This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful in moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system, worn by the operator as a glove, consists of a microcontroller and sensors and is capable of recognizing hand signals. These are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was successfully demonstrated first in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
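
    As a rough illustration of the mapping the abstract describes, from the triple-axis accelerometer's hand-orientation reading to a vehicle control command, the sketch below converts acceleration components into pitch and roll angles and thresholds them. The thresholds and command names are assumptions for illustration, not the Magic Glove firmware.

```python
import math

# Hedged sketch: map a 3-axis accelerometer reading (hand orientation) to a
# coarse vehicle command, as a gesture-glove controller might.  Threshold and
# command names are illustrative assumptions.

TILT_THRESHOLD_DEG = 25.0

def gesture_to_command(ax, ay, az):
    """ax, ay, az in g; gravity dominates when the hand is held still."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))  # forward/back tilt
    roll = math.degrees(math.atan2(ay, az))                    # left/right tilt
    if pitch > TILT_THRESHOLD_DEG:
        return "FORWARD"
    if pitch < -TILT_THRESHOLD_DEG:
        return "REVERSE"
    if roll > TILT_THRESHOLD_DEG:
        return "TURN_RIGHT"
    if roll < -TILT_THRESHOLD_DEG:
        return "TURN_LEFT"
    return "STOP"

# flat hand -> stop; hand tilted forward -> forward
print(gesture_to_command(0.0, 0.0, 1.0))    # STOP
print(gesture_to_command(-0.6, 0.0, 0.8))   # FORWARD
```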

  19. Research in mobile robotics at ORNL/CESAR (Oak Ridge National Laboratory/Center for Engineering Systems Advanced Research)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.; Weisbin, C.R.; Pin, F.G.

    1989-01-01

    This paper reviews ongoing and planned research with mobile autonomous robots at the Oak Ridge National Laboratory (ORNL), Center for Engineering Systems Advanced Research (CESAR). Specifically, we report on results obtained with the robot HERMIES-IIB in navigation, intelligent sensing, learning, and on-board parallel computing in support of these functions. We briefly summarize an experiment with HERMIES-IIB that demonstrates the capability of smooth transitions between robot autonomy and tele-operation. This experiment results from collaboration among teams at the Universities of Florida, Michigan, Tennessee, and Texas, and ORNL in a program targeted at robotics for advanced nuclear power stations. We conclude by summarizing ongoing R&D with our new mobile robot HERMIES-III, which is equipped with a seven degree-of-freedom research manipulator arm. 12 refs., 4 figs.

  20. The Generation of Situational Awareness within Autonomous Systems - A Near to Mid term Study - Analysis

    DTIC Science & Technology

    2006-07-01

    mobility in complex terrain, robot system designers are still seeking workable processes for map building, with enduring problems that either require...human) robot system designers/users can seek to control the consequences of robot actions, deliberate or otherwise. A notable particular application...operators a sufficient feeling of presence; if not, robot system designers will have to provide autonomy to the robot to make up for the gaps in human input

  1. A hardware/software environment to support R&D in intelligent machines and mobile robotic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.

    1990-01-01

    The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e. incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on-board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on-board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g.: code portability, re-usability in different experimental scenarios, modularity, concurrent-computer hardware that is transparent to the applications programmer, future support for multiple mobile robots, support for human-machine interface modules, and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.

  2. Neural Network-Based Landmark Recognition and Navigation with IAMRs. Understanding the Principles of Thought and Behavior.

    ERIC Educational Resources Information Center

    Doty, Keith L.

    1999-01-01

    Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)

  3. Video Guidance Sensor for Surface Mobility Operations

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth R.; Fischer, Richard; Bryan, Thomas; Howell, Joe; Howard, Ricky; Peters, Bruce

    2008-01-01

    Robotic systems and surface mobility will play an increased role in future exploration missions. Unlike the LRV of the Apollo era, which was an astronaut-piloted vehicle, future systems will include teleoperated and semi-autonomous operations. The tasks given to these vehicles will range from infrastructure maintenance and ISRU to construction, to name a few. A common task that may be performed would be the retrieval and deployment of trailer-mounted equipment. Operational scenarios may require these operations to be performed remotely via a teleoperated mode, or semi-autonomously. This presentation describes the on-going project to adapt the Automated Rendezvous and Capture (AR&C) sensor developed at the Marshall Space Flight Center for use in an automated trailer pick-up and deployment operation. The sensor, which has been successfully demonstrated on-orbit, has been mounted on an iRobot/John Deere RGATOR autonomous vehicle for this demonstration, which will be completed in the March 2008 time-frame.

  4. Does It "Want" or "Was It Programmed to..."? Kindergarten Children's Explanations of an Autonomous Robot's Adaptive Functioning

    ERIC Educational Resources Information Center

    Levy, Sharona T.; Mioduser, David

    2008-01-01

    This study investigates young children's perspectives in explaining a self-regulating mobile robot, as they learn to program its behaviors from rules. We explore their descriptions of a robot in action to determine the nature of their explanatory frameworks: psychological or technological. We have also studied the role of an adult's intervention…

  5. Autonomous intelligent assembly systems LDRD 105746 final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2013-04-01

    This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires an integration of a large number of different software technologies including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although ultimately unsuccessful in achieving a target brick-stacking task autonomously, numerous important component technologies were nevertheless developed. Such technologies include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes this project and these developed technologies.

  6. A fuzzy logic controller for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Yen, John; Pfluger, Nathan

    1993-01-01

    The ability of a mobile robot system to plan and move intelligently in a dynamic system is needed if robots are to be useful in areas other than controlled environments. An example of a use for this system is to control an autonomous mobile robot in a space station, or other isolated area where it is hard or impossible for human life to exist for long periods of time (e.g., Mars). The system would allow the robot to be programmed to carry out the duties normally accomplished by a human being. Some of the duties that could be accomplished include operating instruments, transporting objects, and maintenance of the environment. The main focus of our early work has been on developing a fuzzy controller that takes a path and adapts it to a given environment. The robot only uses information gathered from the sensors, but retains the ability to avoid dynamically placed obstacles near and along the path. Our fuzzy logic controller is based on the following algorithm: (1) determine the desired direction of travel; (2) determine the allowed direction of travel; and (3) combine the desired and allowed directions in order to determine a direction that is both desired and allowed. The desired direction of travel is determined by projecting ahead to a point along the path that is closer to the goal. This gives a local direction of travel for the robot and helps to avoid obstacles.
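
    The three-step algorithm above can be illustrated by treating the desired and allowed directions as fuzzy memberships over a set of candidate headings and combining them with a min operation. The sketch below is a hedged illustration of that idea only; the membership shapes, candidate set, and sensor representation are assumptions, not the authors' controller.

```python
# Hedged illustration of the three-step fuzzy idea described above:
# 1) a "desired" membership that peaks toward the goal direction,
# 2) an "allowed" membership that drops near sensed obstacle bearings,
# 3) combine with min() and steer toward the best candidate heading.
# Membership shapes and the candidate heading set are assumptions.

CANDIDATES = range(-90, 91, 15)          # candidate headings [deg], robot frame

def desired(heading, goal_bearing, width=60.0):
    """Triangular membership centred on the bearing to the goal."""
    return max(0.0, 1.0 - abs(heading - goal_bearing) / width)

def allowed(heading, obstacle_bearings, clearance=40.0):
    """Membership that falls off near any detected obstacle bearing."""
    m = 1.0
    for ob in obstacle_bearings:
        m = min(m, min(1.0, abs(heading - ob) / clearance))
    return m

def choose_heading(goal_bearing, obstacle_bearings):
    scored = [(min(desired(h, goal_bearing), allowed(h, obstacle_bearings)), h)
              for h in CANDIDATES]
    return max(scored)[1]

# goal is 30 deg to the right, obstacle detected dead ahead:
# the chosen heading steers right, away from the obstacle and toward the goal
print(choose_heading(goal_bearing=30.0, obstacle_bearings=[0.0]))
```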

  7. [Service robots in elderly care. Possible application areas and current state of developments].

    PubMed

    Graf, B; Heyer, T; Klein, B; Wallhoff, F

    2013-08-01

    The term "Service robotics" describes semi- or fully autonomous technical systems able to perform services useful to the well-being of humans. Service robots have the potential to support and disburden both persons in need of care as well as nursing care staff. In addition, they can be used in prevention and rehabilitation in order to reduce or avoid the need for help. Products currently available to support people in domestic environments are mainly cleaning or remote-controlled communication robots. Examples of current research activities are the (further) development of mobile robots as advanced communication assistants or the development of (semi) autonomous manipulation aids and multifunctional household assistants. Transport robots are commonly used in many hospitals. In nursing care facilities, the first evaluations have already been made. So-called emotional robots are now sold as products and can be used for therapeutic, occupational, or entertainment activities.

  8. Autonomous mobile platform for enhanced situational awareness in Mass Casualty Incidents.

    PubMed

    Yang, Dongyi; Schafer, James; Wang, Sili; Ganz, Aura

    2014-01-01

    To enhance the efficiency of the search and rescue process of a Mass Casualty Incident, we introduce a low cost autonomous mobile platform. The mobile platform motion is controlled by an Android Smartphone mounted on a robot. The pictures and video captured by the Smartphone camera can significantly enhance the situational awareness of the incident commander leading to a more efficient search and rescue process. Moreover, the active RFID readers mounted on the mobile platform can improve the localization accuracy of victims in the disaster site in areas where the paramedics are not present, reducing the triage and evacuation time.

  9. Experiments with a small behaviour controlled planetary rover

    NASA Technical Reports Server (NTRS)

    Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John

    1993-01-01

    A series of experiments that were performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly fewer computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an eight-bit processor and uses about ten kilobytes of memory.

  10. Evaluation of a Home Biomonitoring Autonomous Mobile Robot.

    PubMed

    Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei

    2016-01-01

    Increasing population age demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system has not been tested in any home living scenarios. In this study we performed a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference for the activity recognition results. It was shown that the accuracy is not consistent across all activities; that is, the mobile robot could achieve a high success rate for some activities but a poor success rate for others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of activity recognition, due to the variability of home living daily activities and their transitional processes. The possibility of improving recognition accuracy was also shown.

  11. Multi-Robot Assembly Strategies and Metrics.

    PubMed

    Marvel, Jeremy A; Bostelman, Roger; Falco, Joe

    2018-02-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.

  12. Multi-Robot Assembly Strategies and Metrics

    PubMed Central

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234

  13. A Mobile Robot for Small Object Handling

    NASA Astrophysics Data System (ADS)

    Fišer, Ondřej; Szűcsová, Hana; Grimmer, Vladimír; Popelka, Jan; Vonásek, Vojtěch; Krajník, Tomáš; Chudoba, Jan

    The aim of this paper is to present an intelligent autonomous robot capable of small object manipulation. The design of the robot is influenced mainly by the rules of EUROBOT 09 competition. In this challenge, two robots pick up objects scattered on a planar rectangular playfield and use these elements to build models of Hellenistic temples. This paper describes the robot hardware, i.e. electro-mechanics of the drive, chassis and manipulator, as well as the software, i.e. localization, collision avoidance, motion control and planning algorithms.

  14. Ambient intelligence application based on environmental measurements performed with an assistant mobile robot.

    PubMed

    Martinez, Dani; Teixidó, Mercè; Font, Davinia; Moreno, Javier; Tresanchez, Marcel; Marco, Santiago; Palacín, Jordi

    2014-03-27

    This paper proposes the use of an autonomous assistant mobile robot in order to monitor the environmental conditions of a large indoor area and develop an ambient intelligence application. The mobile robot uses single high-performance embedded sensors in order to collect and geo-reference environmental information such as ambient temperature, air velocity and orientation and gas concentration. The data collected with the assistant mobile robot is analyzed in order to detect unusual measurements or discrepancies and develop focused corrective ambient actions. This paper shows an example of the measurements performed in a research facility which have enabled the detection and location of an uncomfortable temperature profile inside an office of the research facility. The ambient intelligence application has been developed by performing localized ambient measurements that were analyzed in order to propose ambient actuations to correct the uncomfortable temperature profile.

  15. Ambient Intelligence Application Based on Environmental Measurements Performed with an Assistant Mobile Robot

    PubMed Central

    Martinez, Dani; Teixidó, Mercè; Font, Davinia; Moreno, Javier; Tresanchez, Marcel; Marco, Santiago; Palacín, Jordi

    2014-01-01

    This paper proposes the use of an autonomous assistant mobile robot in order to monitor the environmental conditions of a large indoor area and develop an ambient intelligence application. The mobile robot uses single high-performance embedded sensors in order to collect and geo-reference environmental information such as ambient temperature, air velocity and orientation and gas concentration. The data collected with the assistant mobile robot is analyzed in order to detect unusual measurements or discrepancies and develop focused corrective ambient actions. This paper shows an example of the measurements performed in a research facility which have enabled the detection and location of an uncomfortable temperature profile inside an office of the research facility. The ambient intelligence application has been developed by performing localized ambient measurements that were analyzed in order to propose ambient actuations to correct the uncomfortable temperature profile. PMID:24681671

  16. Obstacle avoidance system with sonar sensing and fuzzy logic

    NASA Astrophysics Data System (ADS)

    Chiang, Wen-chuan; Kelkar, Nikhal; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of an obstacle avoidance system using sonar sensors for a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. The obstacle avoidance system is based on a micro-controller interfaced with multiple ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a distance measurement back to the computer via the serial line. This design yields a portable independent system. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous obstacle avoidance controller applicable for any mobile vehicle with only minor adaptations.
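
    The distance calculation that the micro-controller performs for each ultrasonic transducer is essentially a time-of-flight conversion, as in the hedged sketch below; the temperature model and the obstacle threshold value are illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch of the range calculation an ultrasonic ranging
# micro-controller performs: half the round-trip echo time multiplied by the
# speed of sound.  Temperature correction and threshold are assumptions.

def speed_of_sound(temp_c=20.0):
    """Approximate speed of sound in air [m/s] at a given temperature."""
    return 331.3 + 0.606 * temp_c

def echo_to_distance(echo_time_s, temp_c=20.0):
    """Round-trip echo time -> one-way distance in metres."""
    return 0.5 * echo_time_s * speed_of_sound(temp_c)

def obstacle_detected(echo_time_s, threshold_m=2.5):
    """Flag an obstacle if the measured range falls inside the safety radius."""
    return echo_to_distance(echo_time_s) < threshold_m

# a 10 ms round trip corresponds to roughly 1.7 m at room temperature
print(echo_to_distance(0.010))        # ~1.72
print(obstacle_detected(0.010))       # True
```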

  17. Development of a mobile robot for the 1995 AUVS competition

    NASA Astrophysics Data System (ADS)

    Matthews, Bradley O.; Ruthemeyer, Michael A.; Perdue, David; Hall, Ernest L.

    1995-12-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system, where even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected through a commercial tracking device, which communicates the X,Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results by showing that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.

  18. Path optimisation of a mobile robot using an artificial neural network controller

    NASA Astrophysics Data System (ADS)

    Singh, M. K.; Parhi, D. R.

    2011-01-01

    This article proposes a novel approach for the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances with respect to the robot's position, together with the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results, which are satisfactory and show very good agreement. The training of the neural nets and the control performance analysis have been done in a real experimental setup.
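
    The structure described above, three obstacle distances plus a target angle mapped through a four-layer feed-forward network to a steering angle, can be sketched as a simple forward pass. The code below is a hedged structural illustration only; the layer widths, activation function, output scaling, and the (untrained, random) weights are assumptions, not the trained controller from the article.

```python
import numpy as np

# Hedged structural sketch of a four-layer feed-forward steering controller:
# inputs = [left, right, front obstacle distances, target angle]; output =
# steering angle.  A real controller would use back-propagation-trained
# weights instead of the random ones used here for illustration.

rng = np.random.default_rng(0)
LAYER_SIZES = [4, 10, 10, 1]              # input, two hidden layers, output
weights = [rng.normal(0.0, 0.5, (m, n))
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def steering_angle(left_d, right_d, front_d, target_angle):
    """Forward pass: tanh hidden units, output scaled to +/- 45 degrees."""
    a = np.array([left_d, right_d, front_d, target_angle], dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)
    out = (a @ weights[-1] + biases[-1]).item()
    return 45.0 * np.tanh(out)

# distances in metres and target angle in radians (input normalisation would
# normally happen upstream)
print(steering_angle(1.2, 0.4, 2.0, 0.3))
```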

  19. A Tree Based Self-routing Scheme for Mobility Support in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Young-Duk; Yang, Yeon-Mo; Kang, Won-Seok; Kim, Jin-Wook; An, Jinung

    Recently, WSNs (Wireless Sensor Networks) combined with mobile robots have become a growing technology that offers efficient communication services for anytime, anywhere applications. However, tiny sensor nodes have very limited network resources due to low battery power, low data rates, node mobility, and channel interference between neighbors. Thus, in this paper, we propose a tree-based self-routing protocol for autonomous mobile robots based on beacon mode and implement it in a real test-bed environment. The proposed scheme offers beacon-based real-time scheduling for a reliable association process between parent and child nodes. In addition, it supports a smooth handover procedure by reducing the flooding overhead of control packets. Through performance evaluation using a real test-bed system and simulation, we illustrate that the proposed scheme demonstrates promising performance for wireless sensor networks with mobile robots.

  20. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic facial expression recognition using vision is an important subject for human-robot interaction. Here we propose a human-face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware is composed of a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. Using the output of this module, the autonomous agent keeps the human face targeted frontally at all times. In order to accomplish this, the robot platform performs an arc centered on the human, and the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information from the previous time instant is propagated to the current instant in a lower level of the network. Moreover, both positive and negative evidence is used to recognize facial expressions.

  1. Control of a free-flying robot manipulator system

    NASA Technical Reports Server (NTRS)

    Alexander, H.

    1986-01-01

    The development and testing of control strategies for self-contained, autonomous free-flying space robots are discussed. Such a robot would perform operations in space similar to those currently handled by astronauts during extravehicular activity (EVA). Use of robots should reduce the expense and danger attending EVA both by providing assistance to astronauts and in many cases by eliminating altogether the need for human EVA, thus greatly enhancing the scope and flexibility of space assembly and repair activities. The focus of the work is to develop and carry out a program of research with a series of physical Satellite Robot Simulator Vehicles (SRSVs), two-dimensionally freely mobile laboratory models of autonomous free-flying space robots such as might perform extravehicular functions associated with operation of a space station or repair of orbiting satellites. It is planned, in a later phase, to extend the research to three dimensions by carrying out experiments in the Space Shuttle cargo bay.

  2. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  3. Engineering Sensorial Delay to Control Phototaxis and Emergent Collective Behaviors

    NASA Astrophysics Data System (ADS)

    Mijalkov, Mite; McDaniel, Austin; Wehr, Jan; Volpe, Giovanni

    2016-01-01

    Collective motions emerging from the interaction of autonomous mobile individuals play a key role in many phenomena, from the growth of bacterial colonies to the coordination of robotic swarms. For these collective behaviors to take hold, the individuals must be able to emit, sense, and react to signals. When dealing with simple organisms and robots, these signals are necessarily very elementary; e.g., a cell might signal its presence by releasing chemicals and a robot by shining light. An additional challenge arises because the motion of the individuals is often noisy; e.g., the orientation of cells can be altered by Brownian motion and that of robots by an uneven terrain. Therefore, the emphasis is on achieving complex and tunable behaviors from simple autonomous agents communicating with each other in robust ways. Here, we show that the delay between sensing and reacting to a signal can determine the individual and collective long-term behavior of autonomous agents whose motion is intrinsically noisy. We experimentally demonstrate that the collective behavior of a group of phototactic robots capable of emitting a radially decaying light field can be tuned from segregation to aggregation and clustering by controlling the delay with which they change their propulsion speed in response to the light intensity they measure. We track this transition to the underlying dynamics of this system, in particular, to the ratio between the robots' sensorial delay time and the characteristic time of the robots' random reorientation. Supported by numerics, we discuss how the same mechanism can be applied to control active agents, e.g., airborne drones, moving in a three-dimensional space. Given the simplicity of this mechanism, the engineering of sensorial delay provides a potentially powerful tool to engineer and dynamically tune the behavior of large ensembles of autonomous mobile agents; furthermore, this mechanism might already be at work within living organisms such as chemotactic cells.
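
    The mechanism described above, an agent whose propulsion speed responds to the sensed light intensity only after a delay while its orientation diffuses, can be captured in a very small toy simulation. The sketch below is a hedged illustration with assumed parameter values; it is not the experimental robot controller or the authors' model code, and it only shows where the delay enters the dynamics.

```python
import math, random

# Hedged toy model of a phototactic agent whose speed depends on the light
# intensity it sensed `delay_steps` earlier, with random reorientation
# (rotational noise).  All parameter values are illustrative assumptions.

def light(x, y):
    """Radially decaying intensity around a source at the origin."""
    return 1.0 / (1.0 + math.hypot(x, y))

def simulate(delay_steps, n_steps=20000, dt=0.01, d_rot=1.0, seed=1):
    random.seed(seed)
    x, y, theta = 5.0, 0.0, 0.0
    history = [light(x, y)] * (delay_steps + 1)   # buffer of past readings
    for _ in range(n_steps):
        speed = 1.0 * history[0]                  # react to the delayed reading
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        theta += math.sqrt(2 * d_rot * dt) * random.gauss(0, 1)
        history.pop(0)
        history.append(light(x, y))
    return math.hypot(x, y)                       # final distance from source

# compare how far from the source the agent ends up for different delays
for d in (0, 50, 200):
    print(d, round(simulate(d), 2))
```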

  4. Human guidance of mobile robots in complex 3D environments using smart glasses

    NASA Astrophysics Data System (ADS)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.

  5. Vision-based mapping with cooperative robots

    NASA Astrophysics Data System (ADS)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
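
    The occupancy grid maps described above are commonly maintained as log-odds values that are raised in the cell containing a stereo range return and lowered along the ray leading to it; two robots' maps can then be merged cell by cell. The sketch below is a hedged illustration of that standard scheme only; the grid size, sensor-model increments, and merge rule are assumptions, not the system described in the paper.

```python
import math

# Hedged sketch of a log-odds occupancy grid update of the kind such
# stereo-mapping systems commonly use, plus a naive cell-by-cell map merge.
# Resolution, increments, and the merge rule are illustrative assumptions.

SIZE, RES = 100, 0.1                  # 10 m x 10 m grid, 0.1 m cells
L_OCC, L_FREE = 0.85, -0.4            # log-odds increments

def make_grid():
    return [[0.0] * SIZE for _ in range(SIZE)]

def to_cell(x, y):
    return int(x / RES), int(y / RES)

def update_ray(grid, rx, ry, hit_x, hit_y):
    """Mark cells along the ray as free and the end cell as occupied."""
    steps = int(math.hypot(hit_x - rx, hit_y - ry) / RES)
    for s in range(steps):
        t = s / max(steps, 1)
        i, j = to_cell(rx + t * (hit_x - rx), ry + t * (hit_y - ry))
        if 0 <= i < SIZE and 0 <= j < SIZE:
            grid[i][j] += L_FREE
    i, j = to_cell(hit_x, hit_y)
    if 0 <= i < SIZE and 0 <= j < SIZE:
        grid[i][j] += L_OCC

def merge(grid_a, grid_b):
    """Combine two robots' maps by summing log-odds cell by cell."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(grid_a, grid_b)]

g1, g2 = make_grid(), make_grid()
update_ray(g1, 1.0, 1.0, 3.0, 1.0)    # robot 1 sees an obstacle at (3, 1)
update_ray(g2, 2.0, 2.0, 3.0, 1.05)   # robot 2 sees the same obstacle
shared = merge(g1, g2)
i, j = to_cell(3.0, 1.0)
print(shared[i][j])                   # strongly positive: both agree it is occupied
```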

  6. Computational Mobility: An Overview

    NASA Technical Reports Server (NTRS)

    Suri, Niranjan

    2005-01-01

    This viewgraph presentation describes a framework for the autonomous control of robot swarms, which negotiate with each other, delegate authority to their peers, and cooperate in teams to accomplish tasks.

  7. Socially assistive robotics for post-stroke rehabilitation

    PubMed Central

    Matarić, Maja J; Eriksson, Jon; Feil-Seifer, David J; Winstein, Carolee J

    2007-01-01

    Background Although there is a great deal of success in rehabilitative robotics applied to patient recovery post stroke, most of the research to date has dealt with providing physical assistance. However, new rehabilitation studies support the theory that not all therapy need be hands-on. We describe a new area, called socially assistive robotics, that focuses on non-contact patient/user assistance. We demonstrate the approach with an implemented and tested post-stroke recovery robot and discuss its potential for effectiveness. Results We describe a pilot study involving an autonomous assistive mobile robot that aids stroke patient rehabilitation by providing monitoring, encouragement, and reminders. The robot navigates autonomously, monitors the patient's arm activity, and helps the patient remember to follow a rehabilitation program. We also show preliminary results from a follow-up study that focused on the role of robot physical embodiment in a rehabilitation context. Conclusion We outline and discuss future experimental designs and factors toward the development of effective socially assistive post-stroke rehabilitation robots. PMID:17309795

  8. Mobile app for human-interaction with sitter robots

    NASA Astrophysics Data System (ADS)

    Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.

    2017-05-01

    Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits for the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure mode recovery options for users. Our app can access video feeds and sensor data from robots, assist the user with decision making during pick and place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that enables a patient sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.

  9. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    PubMed Central

    Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements provided, delayed and OOS, by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  10. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements provided, delayed and OOS, by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.

  11. Constructing a Real-Time Mobile Robot Software System

    DTIC Science & Technology

    1994-09-01

    forces to rely more on automation to fill the gap of reduced personnel and equipment. One key element of this move to more automation is autonomous ... vehicles. These vehicles will continue to play a greater role in this nation’s defense. At the Naval Postgraduate School (NPS), the Yamabico robot is an

  12. Making Sense by Building Sense: Kindergarten Children's Construction and Understanding of Adaptive Robot Behaviors

    ERIC Educational Resources Information Center

    Mioduser, David; Levy, Sharona T.

    2010-01-01

    This study explores young children's ability to construct and explain adaptive behaviors of a behaving artifact, an autonomous mobile robot with sensors. A central component of the behavior construction environment is the RoboGan software that supports children's construction of spatiotemporal events with an a-temporal rule structure. Six…

  13. Robot navigation research using the HERMIES mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, D.L.

    1989-01-01

    In recent years robot navigation has attracted much attention from researchers around the world. Not only are theoretical studies being simulated on sophisticated computers, but many mobile robots are now used as test vehicles for these theoretical studies. Various algorithms have been perfected for navigation in a known static environment, but navigation in an unknown and dynamic environment poses a much more challenging problem for researchers. Many different methodologies have been developed for autonomous robot navigation, but each methodology is usually restricted to a particular type of environment. One important research focus of the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory is autonomous navigation in unknown and dynamic environments using the series of HERMIES mobile robots. The research uses an expert system for high-level planning interfaced with C-coded routines for implementing the plans, and for quick processing of data requested by the expert system. In using this approach, the navigation is not restricted to one methodology since the expert system can activate a rule module for the methodology best suited to the current situation. Rule modules can be added to the rule base as they are developed and tested. Modules are being developed or enhanced for navigating from a map, searching for a target, exploring, artificial potential-field navigation, navigation using edge-detection, etc. This paper will report on the various rule modules and methods of navigation in use, or under development at CESAR, using the HERMIES-IIB robot as a testbed. 13 refs., 5 figs., 1 tab.

  14. A Petri-net coordination model for an intelligent mobile robot

    NASA Technical Reports Server (NTRS)

    Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.

    1990-01-01

    The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.

  15. Meeting the challenges of installing a mobile robotic system

    NASA Technical Reports Server (NTRS)

    Decorte, Celeste

    1994-01-01

    The challenges of integrating a mobile robotic system into an application environment are many. Most problems inherent to installing the mobile robotic system fall into one of three categories: (1) the physical environment - location(s) where, and conditions under which, the mobile robotic system will work; (2) the technological environment - external equipment with which the mobile robotic system will interact; and (3) the human environment - personnel who will operate and interact with the mobile robotic system. The successful integration of a mobile robotic system into these three types of application environment requires more than a good pair of pliers. The tools for this job include: careful planning, accurate measurement data (as-built drawings), complete technical data of systems to be interfaced, sufficient time and attention of key personnel for training on how to operate and program the robot, on-site access during installation, and a thorough understanding and appreciation - by all concerned - of the mobile robotic system's role in the security mission at the site, as well as the machine's capabilities and limitations. Patience, luck, and a sense of humor are also useful tools to keep handy during a mobile robotic system installation. This paper will discuss some specific examples of problems in each of three categories, and explore approaches to solving these problems. The discussion will draw from the author's experience with on-site installations of mobile robotic systems in various applications. Most of the information discussed in this paper has come directly from knowledge learned during installations of Cybermotion's SR2 security robots. A large part of the discussion will apply to any vehicle with a drive system, collision avoidance, and navigation sensors, which is, of course, what makes a vehicle autonomous. And it is with these sensors and a drive system that the installer must become familiar in order to foresee potential trouble areas in the physical, technical, and human environment.

  16. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot's wheels; and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and a line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.

  17. Autonomous undulatory serpentine locomotion utilizing body dynamics of a fluidic soft robot.

    PubMed

    Onal, Cagdas D; Rus, Daniela

    2013-06-01

    Soft robotics offers the unique promise of creating inherently safe and adaptive systems. These systems bring man-made machines closer to the natural capabilities of biological systems. An important requirement to enable self-contained soft mobile robots is an on-board power source. In this paper, we present an approach to create a bio-inspired soft robotic snake that can undulate in a similar way to its biological counterpart using pressure for actuation power, without human intervention. With this approach, we develop an autonomous soft snake robot with on-board actuation, power, computation and control capabilities. The robot consists of four bidirectional fluidic elastomer actuators in series to create a traveling curvature wave from head to tail along its body. Passive wheels between segments generate the necessary frictional anisotropy for forward locomotion. It takes 14 h to build the soft robotic snake, which can attain an average locomotion speed of 19 mm/s.
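
    The traveling curvature wave described above is commonly commanded by driving each segment with a sinusoid that is phase-shifted relative to its neighbour. The sketch below is a hedged illustration of that command pattern; the amplitude, frequency, and phase lag are assumptions, not the robot's measured gait parameters.

```python
import math

# Hedged sketch of a serpenoid-style command pattern: each of four
# bidirectional segments is driven by a sinusoid with a constant phase lag
# relative to its neighbour, producing a curvature wave that travels from
# head to tail.  Amplitude, frequency and phase lag are assumptions.

N_SEGMENTS = 4
AMPLITUDE = 1.0          # normalised actuation command (+/-1 = full pressure)
FREQ_HZ = 0.5            # undulation frequency
PHASE_LAG = math.pi / 2  # phase shift between successive segments

def segment_commands(t):
    """Actuation command for each segment at time t [s]."""
    return [AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * t - i * PHASE_LAG)
            for i in range(N_SEGMENTS)]

# sample the wave over one second: the peak moves from segment 0 toward
# segment 3 as time advances, i.e. from head to tail
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, [round(c, 2) for c in segment_commands(t)])
```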

  18. Structure Assembly by a Heterogeneous Team of Robots Using State Estimation, Generalized Joints, and Mobile Parallel Manipulators

    NASA Technical Reports Server (NTRS)

    Komendera, Erik E.; Adhikari, Shaurav; Glassner, Samantha; Kishen, Ashwin; Quartaro, Amy

    2017-01-01

    Autonomous robotic assembly by mobile field robots has seen significant advances in recent decades, yet practicality remains elusive. Identified challenges include better use of state estimation and reasoning under uncertainty, spreading out tasks to specialized robots, and implementing representative joining methods. This paper proposes replacing 1) self-correcting mechanical linkages with generalized joints for improved applicability, 2) assembly serial manipulators with parallel manipulators for higher precision and stability, and 3) all-in-one robots with a heterogeneous team of specialized robots for agent simplicity. This paper then describes a general assembly algorithm utilizing state estimation. Finally, these concepts are tested in the context of solar array assembly, requiring a team of robots to assemble, bond, and deploy a set of solar panel mockups to a backbone truss to an accuracy not built into the parts. This paper presents the results of these tests.

  19. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  20. Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators

    DTIC Science & Technology

    1993-05-01

    designed and built by our lab as an experimental platform (Figure 1.1: Hannibal) to explore planetary micro-rover control issues (Angle 1991). When designing the robot, careful consideration was given to mobility, sensing, and robustness issues. Much has been said concerning the advantages of

  1. A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II

    NASA Astrophysics Data System (ADS)

    Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki

    The authors previously proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for the automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because its length must be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on the “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and is less than half the size of the conventional robot. First, a method for lifting a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a motion error estimation algorithm for the followers based on the internal force applied to each follower, by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables the followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resulting system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.

  2. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    PubMed

    Bourbakis, N G

    1997-01-01

    This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic unknown or known navigation space. In a previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they are moving anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability of each robot to detect other moving objects in the same free navigation space and to determine their perceived size, velocity, and direction. Based on these assumptions, a traffic priority language is needed for each robot, making it able to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic-priority alphabet and a set of rules which compose patterns of corridors for the application of the traffic priority rules.

  3. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated, and results of an experiment with a mobile robot driving completely autonomously in an unknown environment with obstacles are presented.
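    The adaptive-coupling idea can be illustrated with a minimal numerical sketch (the first-order unit dynamics, gains, and input signal below are invented for illustration and are not the authors' electronic-neuron model): two nonidentical units are diffusively coupled, and the coupling weight is adapted online until their outputs stay close together.

        import numpy as np

        # Hypothetical nonidentical "neurons" (different leak rates -> different responses).
        a1, a2 = 1.0, 1.6
        dt, T = 1e-3, 10.0

        x1, x2 = 0.5, -0.3          # different initial states
        k = 0.0                     # diffusive coupling weight, adapted online
        gamma = 5.0                 # adaptation gain (assumed)

        def drive(t):
            """Common input to both units (e.g. a shared sensor reading)."""
            return 1.0 + 0.5 * np.sin(np.pi * t)

        for i in range(int(T / dt)):
            u = drive(i * dt)
            e = x1 - x2                          # synchronization error
            # Diffusive coupling pulls each unit toward the other.
            x1 += dt * (-a1 * x1 + u - k * e)
            x2 += dt * (-a2 * x2 + u + k * e)
            # Simple adaptive law: keep increasing the weight while the error persists.
            k += dt * gamma * e * e

        print(f"final coupling weight k = {k:.2f}, residual error = {abs(x1 - x2):.4f}")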

  4. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self-generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach will be needed to meet the real-time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture were developed. The control system operation and structure are discussed.
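    A toy sketch of the blackboard control pattern described above is given below (the knowledge-source names, data fields, and straight-line "planner" are hypothetical and are not the Flying Eye's actual modules): independent knowledge sources read and write a shared blackboard, and a scheduler repeatedly activates whichever sources are currently applicable.

        from dataclasses import dataclass, field

        @dataclass
        class Blackboard:
            """Shared state read and written by all knowledge sources."""
            data: dict = field(default_factory=dict)

        class KnowledgeSource:
            def applicable(self, bb): return False
            def execute(self, bb): pass

        class SpeechToGoal(KnowledgeSource):
            def applicable(self, bb):
                return "utterance" in bb.data and "goal" not in bb.data
            def execute(self, bb):
                # Hypothetical parse: map a voice command to a goal location.
                bb.data["goal"] = {"move to dock": (0.0, 0.0)}[bb.data.pop("utterance")]

        class PathPlanner(KnowledgeSource):
            def applicable(self, bb):
                return ("goal" in bb.data and "path" not in bb.data
                        and bb.data["pose"] != bb.data["goal"])
            def execute(self, bb):
                bb.data["path"] = [bb.data["pose"], bb.data["goal"]]  # straight-line stub

        class MotionController(KnowledgeSource):
            def applicable(self, bb): return "path" in bb.data
            def execute(self, bb):
                bb.data["pose"] = bb.data["path"][-1]   # pretend the path was driven
                del bb.data["path"]

        def run(bb, sources, max_cycles=10):
            for _ in range(max_cycles):
                runnable = [s for s in sources if s.applicable(bb)]
                if not runnable:
                    break                               # nothing left to do
                for s in runnable:
                    s.execute(bb)

        bb = Blackboard({"pose": (3.0, 4.0), "utterance": "move to dock"})
        run(bb, [SpeechToGoal(), PathPlanner(), MotionController()])
        print(bb.data)                                  # {'pose': (0.0, 0.0), 'goal': (0.0, 0.0)}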

  5. Mobile robot knowledge base

    NASA Astrophysics Data System (ADS)

    Heath Pastore, Tracy; Barnes, Mitchell; Hallman, Rory

    2005-05-01

    Robot technology is developing at a rapid rate for both commercial and Department of Defense (DOD) applications. As a result, the task of managing both technology and experience information is growing. In the not-too-distant past, tracking development efforts of robot platforms, subsystems and components was not too difficult, expensive, or time-consuming. To do the same today is a significant undertaking. The Mobile Robot Knowledge Base (MRKB) provides the robotics community with a web-accessible, centralized resource for sharing information, experience, and technology to more efficiently and effectively meet the needs of the robot system user. The resource includes searchable information on robot components, subsystems, mission payloads, platforms, and DOD robotics programs. In addition, the MRKB website provides a forum for technology and information transfer within the DOD robotics community and an interface for the Robotic Systems Pool (RSP). The RSP manages a collection of small teleoperated and semi-autonomous robotic platforms, available for loan to DOD and other qualified entities. The objective is to put robots in the hands of users and use the test data and fielding experience to improve robot systems.

  6. A locust-inspired miniature jumping robot.

    PubMed

    Zaitsev, Valentin; Gvirsman, Omer; Ben Hanan, Uri; Weiss, Avi; Ayali, Amir; Kosa, Gabor

    2015-11-25

    Unmanned ground vehicles are mostly wheeled, tracked, or legged. These locomotion mechanisms have a limited ability to traverse rough terrain and obstacles that are higher than the robot's center of mass. In order to improve the mobility of small robots, it is necessary to expand the variety of their motion gaits. Jumping is one of nature's solutions to the challenge of mobility in difficult terrain. The desert locust is the model for the presented bio-inspired design of a jumping mechanism for a small mobile robot. The basic mechanism is similar to that of the semilunar process in the hind legs of the locust, and is based on the cocking of a torsional spring by wrapping a tendon-like wire around the shaft of a miniature motor. In this study we present the jumping mechanism design, and the manufacturing and performance analysis of two demonstrator prototypes. The most advanced jumping robot demonstrator is power autonomous, weighs 23 g, and is capable of jumping to a height of 3.35 m, covering a distance of 1.37 m.

  7. The real-time learning mechanism of the Scientific Research Associates Advanced Robotic System (SRAARS)

    NASA Technical Reports Server (NTRS)

    Chen, Alexander Y.

    1990-01-01

    Scientific research associates advanced robotic system (SRAARS) is an intelligent robotic system which has autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It controls mainly sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. By consecutive verifying and planning procedures, hypothesis-and-test routines and learning-by-analogy paradigm, the system would autonomously build up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis and object manipulation. The intelligence of SRAARS presents a valuable technical advantage to implement robotic systems for space exploration and space station operations.

  8. Development of dog-like retrieving capability in a ground robot

    NASA Astrophysics Data System (ADS)

    MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary

    2013-01-01

    This paper presents the Mobile Intelligence Team's approach to addressing the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operations, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects will be discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition will demonstrate the system's successes in real-world testing.

  9. Optimal path planning for a mobile robot using cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2016-03-01

    The shortest/optimal path planning is essential for efficient operation of autonomous vehicles. In this article, a new nature-inspired meta-heuristic algorithm has been applied for mobile robot path planning in an unknown or partially known environment populated by a variety of static obstacles. This meta-heuristic algorithm is based on the Lévy flight behaviour and brood parasitic behaviour of cuckoos. A new objective function has been formulated between the robot, the target, and the obstacles, which satisfies the conditions of obstacle avoidance and the target-seeking behaviour of the robots present in the terrain. Depending upon the objective function value of each nest (cuckoo) in the swarm, the robot avoids obstacles and proceeds towards the target. The smooth optimal trajectory is framed with this algorithm when the robot reaches its goal. Some simulation and experimental results are presented at the end of the paper to show the effectiveness of the proposed navigational controller.
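    A compact sketch can convey the flavour of the formulation (the weights, Lévy exponent, and obstacle/target layout are assumed for illustration and are not the authors' exact objective function): each candidate waypoint (nest) is scored by its distance to the target plus a penalty for violating obstacle clearance, and new candidates are generated by Lévy-distributed jumps.

        import numpy as np

        rng = np.random.default_rng(0)
        target = np.array([9.0, 9.0])
        obstacles = [np.array([4.0, 4.0]), np.array([6.0, 7.0])]   # assumed static obstacles
        SAFE = 1.0                                                  # assumed clearance radius

        def objective(p):
            """Target-seeking term plus obstacle-avoidance penalty (illustrative weights)."""
            cost = np.linalg.norm(p - target)
            for ob in obstacles:
                d = np.linalg.norm(p - ob)
                if d < SAFE:
                    cost += 10.0 * (SAFE - d)          # penalise unsafe waypoints
            return cost

        def levy_step(beta=1.5, scale=0.3):
            """2-D Levy-flight step via Mantegna's algorithm."""
            from math import gamma, sin, pi
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma, 2)
            v = rng.normal(0.0, 1.0, 2)
            return scale * u / np.abs(v) ** (1 / beta)

        # Cuckoo-search-style selection of the next waypoint.
        nests = rng.uniform(0, 10, size=(15, 2))                    # candidate waypoints
        for _ in range(200):
            i = rng.integers(len(nests))
            candidate = nests[i] + levy_step()
            j = rng.integers(len(nests))
            if objective(candidate) < objective(nests[j]):          # replace a worse nest
                nests[j] = candidate

        best = min(nests, key=objective)
        print("next waypoint toward the target:", np.round(best, 2))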

  10. Lightweight robotic mobility: template-based modeling for dynamics and controls using ADAMS/car and MATLAB

    NASA Astrophysics Data System (ADS)

    Adamczyk, Peter G.; Gorsich, David J.; Hudas, Greg R.; Overholt, James

    2003-09-01

    The U.S. Army is seeking to develop autonomous off-road mobile robots to perform tasks in the field such as supply delivery and reconnaissance in dangerous territory. A key problem to be solved with these robots is off-road mobility, to ensure that the robots can accomplish their tasks without loss or damage. We have developed a computer model of one such concept robot, the small-scale "T-1" omnidirectional vehicle (ODV), to study the effects of different control strategies on the robot's mobility in off-road settings. We built the dynamic model in ADAMS/Car and the control system in Matlab/Simulink. This paper presents the template-based method used to construct the ADAMS model of the T-1 ODV. It discusses the strengths and weaknesses of ADAMS/Car software in such an application, and describes the benefits and challenges of the approach as a whole. The paper also addresses effective linking of ADAMS/Car and Matlab for complete control system development. Finally, this paper includes a section describing the extension of the T-1 templates to other similar ODV concepts for rapid development.

  11. Sign detection for autonomous navigation

    NASA Astrophysics Data System (ADS)

    Goodsell, Thomas G.; Snorrason, Magnus S.; Cartwright, Dustin; Stube, Brian; Stevens, Mark R.; Ablavsky, Vitaly X.

    2003-09-01

    Mobile robots currently cannot detect and read arbitrary signs. This is a major hindrance to mobile robot usability, since they cannot be tasked using directions that are intuitive to humans. It also limits their ability to report their position relative to intuitive landmarks. Other researchers have demonstrated some success on traffic sign recognition, but using template-based methods limits the set of recognizable signs. There is a clear need for a sign detection and recognition system that can process a much wider variety of signs: traffic signs, street signs, store-name signs, building directories, room signs, etc. We are developing a system for Sign Understanding in Support of Autonomous Navigation (SUSAN) that detects signs from various cues common to most signs: vivid colors, compact shape, and text. We have demonstrated the feasibility of our approach on a variety of signs in both indoor and outdoor locations.

  12. Large-scale deep learning for robotically gathered imagery for science

    NASA Astrophysics Data System (ADS)

    Skinner, K.; Johnson-Roberson, M.; Li, J.; Iscar, E.

    2016-12-01

    With the explosion of computing power, the intelligence and capability of mobile robotics have dramatically increased over the last two decades. Today, we can deploy autonomous robots to achieve observations in a variety of environments ripe for scientific exploration. These platforms are capable of gathering a volume of data previously unimaginable. Additionally, optical cameras, driven by mobile phones and consumer photography, have rapidly improved in size, power consumption, and quality, making their deployment cheaper and easier. Finally, in parallel we have seen the rise of large-scale machine learning approaches, particularly deep neural networks (DNNs), increasing the quality of the semantic understanding that can be automatically extracted from optical imagery. In concert, these advances enable new science using a combination of machine learning and robotics. This work will discuss the application of new low-cost high-performance computing approaches and the associated software frameworks to enable scientists to rapidly extract useful science data from millions of robotically gathered images. The automated analysis of imagery on this scale opens up new avenues of inquiry unavailable using more traditional manual or semi-automated approaches. We will use a large archive of millions of benthic images gathered with an autonomous underwater vehicle to demonstrate how these tools enable new scientific questions to be posed.
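    As a rough sketch of the kind of pipeline involved (the directory path, batch size, and generic ImageNet backbone are placeholders rather than the authors' system), a pretrained convolutional network can be run in batches over a folder of benthic images to produce per-image labels:

        import torch
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        # Placeholder path; ImageFolder expects one sub-directory per class,
        # but here we only run inference with a generic pretrained backbone.
        DATA_DIR = "benthic_images/"

        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
        loader = DataLoader(dataset, batch_size=64, num_workers=4)

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = models.resnet18(weights="IMAGENET1K_V1").to(device).eval()  # generic backbone

        predictions = []
        with torch.no_grad():
            for images, _ in loader:
                logits = model(images.to(device))
                predictions.extend(logits.argmax(dim=1).cpu().tolist())

        print(f"labelled {len(predictions)} images")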

  13. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    NASA Technical Reports Server (NTRS)

    Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco

    2012-01-01

    This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable and rough terrain that is feature-rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high-mobility lightweight robot, where objects such as a boulder or a ditch, which could otherwise be considered small for a truck or tank, become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.

  14. Real-Time Modeling of Cross-Body Flow for Torpedo Tube Recovery of the Phoenix Autonomous Underwater Vehicle (AUV)

    DTIC Science & Technology

    1998-03-01

    [Only reference-list fragments were extracted for this record, citing "Numerical Recipes in C" (2nd ed., Cambridge University Press, 1992); Ames, Nadeau, and Moreland, "VRML 2.0 Sourcebook" (2nd ed., John Wiley); and "The Phoenix Autonomous Underwater Vehicle" (in AI-Based Mobile Robots, eds. Kortenkamp, Bonasso, and Murphy, MIT/AAAI).]

  16. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  17. Recent CESAR (Center for Engineering Systems Advanced Research) research activities in sensor based reasoning for autonomous machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; de Saussure, G.; Spelt, P.F.

    1988-01-01

    This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis being given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a-priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at the panel, read and understand the status of the panel's meters and dials, learn the functioning of a process control panel, and successfully manipulate the control devices of the panel to solve maintenance emergency problems. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.

  18. Robot map building based on fuzzy-extending DSmT

    NASA Astrophysics Data System (ADS)

    Li, Xinde; Huang, Xinhan; Wu, Zuyu; Peng, Gang; Wang, Min; Xiong, Youlun

    2007-11-01

    With the extensive application of mobile robots in many different fields, map building in unknown environments has been one of the principal issues in the field of intelligent mobile robots. However, information acquired in map building presents characteristics of uncertainty, imprecision, and even high conflict, especially in the course of building grid maps using sonar sensors. In this paper, we extended DSmT with fuzzy theory by considering different fuzzy T-norm operators (such as the Algebraic Product operator, Bounded Product operator, Einstein Product operator, and Default minimum operator), in order to develop a more general and flexible combinational rule for more extensive application. At the same time, we apply fuzzy-extended DSmT to mobile robot map building with the help of a new self-localization method based on neighboring field appearance matching (NFAM), to make the new tool more robust in very complex environments. An experiment is conducted to reconstruct the map with the new tool in an indoor environment, in order to compare the performances in map building of the four T-norm operators when a Pioneer II mobile robot runs along the same trace. Finally, it is concluded that this study develops a new idea for extending DSmT, provides a new approach for autonomous navigation of mobile robots, and provides a human-computer interactive interface to manage and manipulate the robot remotely.
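    The four T-norm operators named above can be written down directly; the sketch below only shows them applied to a pair of scalar belief masses, not the full fuzzy-extended DSmT combination rule (the mass values are invented):

        def algebraic_product(a, b):
            return a * b

        def bounded_product(a, b):
            return max(0.0, a + b - 1.0)

        def einstein_product(a, b):
            return (a * b) / (2.0 - (a + b - a * b))

        def minimum(a, b):          # the default T-norm
            return min(a, b)

        # Two sonar-derived masses that a grid cell is occupied (illustrative values).
        m1, m2 = 0.7, 0.6
        for tnorm in (algebraic_product, bounded_product, einstein_product, minimum):
            print(f"{tnorm.__name__:18s} -> {tnorm(m1, m2):.3f}")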

  19. Introduction to autonomous mobile robotics using Lego Mindstorms NXT

    NASA Astrophysics Data System (ADS)

    Akın, H. Levent; Meriçli, Çetin; Meriçli, Tekin

    2013-12-01

    Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the Lego Mindstorms NXT kits are used as the robot platform. The aims, scope and contents of the course are presented, and the design of the laboratory sessions as well as the term projects, which address several core problems of robotics and artificial intelligence simultaneously, are explained in detail.

  20. Controlling multiple security robots in a warehouse environment

    NASA Technical Reports Server (NTRS)

    Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.

    1994-01-01

    The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.

  1. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    NASA Astrophysics Data System (ADS)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. 
The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
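    The role of the learned topological map in returning to the start can be illustrated with a small graph sketch (the intersection names and the breadth-first search are illustrative; Soar's actual reasoning, planning, and learning mechanisms are not modelled here):

        from collections import deque

        # Topological map built during exploration: intersections as nodes,
        # traversed corridors as edges (hypothetical layout).
        topo_map = {
            "start": ["T_junction_1"],
            "T_junction_1": ["start", "cross_1", "dead_end_1"],
            "cross_1": ["T_junction_1", "T_junction_2", "room_A"],
            "T_junction_2": ["cross_1", "object_location"],
            "dead_end_1": ["T_junction_1"],
            "room_A": ["cross_1"],
            "object_location": ["T_junction_2"],
        }

        def route(graph, src, dst):
            """Breadth-first search over the topological map: shortest hop-count path."""
            parents = {src: None}
            queue = deque([src])
            while queue:
                node = queue.popleft()
                if node == dst:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parents[node]
                    return list(reversed(path))
                for nxt in graph[node]:
                    if nxt not in parents:
                        parents[nxt] = node
                        queue.append(nxt)
            return None

        # Directions back from the object of interest to the mission start.
        print(" -> ".join(route(topo_map, "object_location", "start")))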

  2. Autonomous Robotic Inspection in Tunnels

    NASA Astrophysics Data System (ADS)

    Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.

    2016-06-01

    In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructures, grab stereo images, and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created, utilizing photogrammetric methods. Finally, laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing potential deformations to be deduced. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying the ultrasound sensors, the stereo cameras, and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Then, real-time 3D information is accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e., in the Egnatia Highway and London Underground infrastructure.

  3. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line following robot, has been implemented and tested through simulation and real time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles in different shapes and sizes. PMID:26712766

  4. Two-Armed, Mobile, Sensate Research Robot

    NASA Technical Reports Server (NTRS)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

    The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach either the floor or high shelves, or some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control. A receptacle for an end effector is mounted on the tip of the wrist and contains a force-and-torque sensor that provides feedback for force (compliance) control of the arm. The end effector could be a tool or a robot hand, depending on the application.

  5. Agile and dexterous robot for inspection and EOD operations

    NASA Astrophysics Data System (ADS)

    Handelman, David A.; Franken, Gordon H.; Komsuoglu, Haldun

    2010-04-01

    The All-Terrain Biped (ATB) robot is an unmanned ground vehicle with arms, legs and wheels designed to drive, crawl, walk and manipulate objects for inspection and explosive ordnance disposal tasks. This paper summarizes on-going development of the ATB platform. Control technology for semi-autonomous legged mobility and dual-arm dexterity is described as well as preliminary simulation and hardware test results. Performance goals include driving on flat terrain, crawling on steep terrain, walking on stairs, opening doors and grasping objects. Anticipated benefits of the adaptive mobility and dexterity of the ATB platform include increased robot agility and autonomy for EOD operations, reduced operator workload and reduced operator training and skill requirements.

  6. Cooperative path following control of multiple nonholonomic mobile robots.

    PubMed

    Cao, Ke-Cai; Jiang, Bin; Yue, Dong

    2017-11-01

    The cooperative path following control problem of multiple nonholonomic mobile robots has been considered in this paper. Based on the framework of decomposition, the cooperative path following problem has been transformed into a path following problem and a cooperative control problem; then the cascaded theory of non-autonomous systems has been employed in the design of controllers without resorting to feedback linearization. A time-varying coordinate transformation based on dilation has been introduced to solve the uncontrollability problem of nonholonomic robots when the whole group's reference converges to a stationary point. Cooperative path following controllers for nonholonomic robots have been proposed under a persistent reference or a reference target that converges to a stationary point, respectively. Simulation results using Matlab have illustrated the effectiveness of the obtained theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Scene analysis for a breadboard Mars robot functioning in an indoor environment

    NASA Technical Reports Server (NTRS)

    Levine, M. D.

    1973-01-01

    The problem of computer perception in an indoor laboratory environment containing rocks of various sizes is dealt with. The sensory data processing is required for the NASA/JPL breadboard mobile robot that is a test system for an adaptive variably-autonomous vehicle that will conduct scientific explorations on the surface of Mars. Scene analysis is discussed in terms of object segmentation followed by feature extraction, which results in a representation of the scene in the robot's world model.

  8. Mobile robot exploration and navigation of indoor spaces using sonar and vision

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth

    1994-01-01

    Integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching regions found with the a priori map, the robot became lost very quickly. If the low level sensing of the world is not working, then high level reasoning or map making will be unsuccessful.

  9. Task-level control for autonomous robots

    NASA Technical Reports Server (NTRS)

    Simmons, Reid

    1994-01-01

    Task-level control refers to the integration and coordination of planning, perception, and real-time control to achieve given high-level goals. Autonomous mobile robots need task-level control to effectively achieve complex tasks in uncertain, dynamic environments. This paper describes the Task Control Architecture (TCA), an implemented system that provides commonly needed constructs for task-level control. Facilities provided by TCA include distributed communication, task decomposition and sequencing, resource management, monitoring and exception handling. TCA supports a design methodology in which robot systems are developed incrementally, starting first with deliberative plans that work in nominal situations, and then layering them with reactive behaviors that monitor plan execution and handle exceptions. To further support this approach, design and analysis tools are under development to provide ways of graphically viewing the system and validating its behavior.

  10. Development and training of a learning expert system in an autonomous mobile robot via simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spelt, P.F.; Lyness, E.; DeSaussure, G.

    1989-11-01

    The Center for Engineering Systems Advanced Research (CESAR) conducts basic research in the area of intelligent machines. Recently at CESAR, a learning expert system was created to operate on board an autonomous robot working at a process control panel. The authors discuss the two-computer simulation system used to create, evaluate, and train this learning system. The simulation system has a graphics display of the current status of the process being simulated, and the same program which does the simulating also drives the actual control panel. Simulation results were validated on the actual robot. The speed and safety values of using a computerized simulator to train a learning computer, and future uses of the simulation system, are discussed.

  11. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation

    PubMed Central

    Masmoudi, Mohamed Slim; Masmoudi, Mohamed

    2016-01-01

    This paper describes the design and the implementation of a trajectory tracking controller using fuzzy logic for a mobile robot to navigate in indoor environments. Most of the previous works used two independent controllers for navigation and avoiding obstacles. The main contribution of the paper can be summarized in the fact that we use only one fuzzy controller for navigation and obstacle avoidance. The mobile robot used is equipped with a DC motor, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performances of the intelligent navigation algorithms, different trajectories are used and simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performances of the intelligent navigation algorithms in terms of simulation times and travelled path. PMID:27688748
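    A minimal sketch in the same single-controller spirit is shown below (the membership functions, rule base, and gains are invented, and a simple weighted-average defuzzification stands in for the paper's full design): one fuzzy controller maps a front obstacle distance and a heading error directly to differential-drive wheel speeds.

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_drive(front_dist, heading_err):
            """One controller for both goal seeking and obstacle avoidance."""
            # Input fuzzification (ranges in metres and radians, assumed).
            near = tri(front_dist, -0.1, 0.0, 0.8)
            far = tri(front_dist, 0.4, 2.0, 4.0)
            err_left = tri(heading_err, -3.2, -1.0, 0.0)    # goal is to the left
            err_zero = tri(heading_err, -0.5, 0.0, 0.5)
            err_right = tri(heading_err, 0.0, 1.0, 3.2)

            # Rule base: (firing strength, forward speed m/s, turn rate rad/s).
            rules = [
                (min(far, err_zero), 0.5, 0.0),    # clear ahead, on course -> go straight
                (min(far, err_left), 0.3, -0.8),   # clear ahead, goal left -> turn left
                (min(far, err_right), 0.3, 0.8),   # clear ahead, goal right -> turn right
                (near, 0.05, 1.2),                 # obstacle close -> slow down, turn away
            ]
            w = sum(r[0] for r in rules) or 1e-9
            v = sum(r[0] * r[1] for r in rules) / w
            omega = sum(r[0] * r[2] for r in rules) / w
            # Differential-drive wheel speeds (half-axle length assumed 0.1 m).
            return v - 0.1 * omega, v + 0.1 * omega

        print(fuzzy_drive(front_dist=1.5, heading_err=0.3))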

  12. Estimating time available for sensor fusion exception handling

    NASA Astrophysics Data System (ADS)

    Murphy, Robin R.; Rogers, Erika

    1995-09-01

    In previous work, we have developed a generate, test, and debug methodology for detecting, classifying, and responding to sensing failures in autonomous and semi-autonomous mobile robots. An important issue has arisen from these efforts: how much time is there available to classify the cause of the failure and determine an alternative sensing strategy before the robot mission must be terminated? In this paper, we consider the impact of time for teleoperation applications where a remote robot attempts to autonomously maintain sensing in the presence of failures yet has the option to contact the local for further assistance. Time limits are determined by using evidential reasoning with a novel generalization of Dempster-Shafer theory. Generalized Dempster-Shafer theory is used to estimate the time remaining until the robot behavior must be suspended because of uncertainty; this becomes the time limit on autonomous exception handling at the remote. If the remote cannot complete exception handling in this time or needs assistance, responsibility is passed to the local, while the remote assumes a `safe' state. An intelligent assistant then facilitates human intervention, either directing the remote without human assistance or coordinating data collection and presentation to the operator within time limits imposed by the mission. The impact of time on exception handling activities is demonstrated using video camera sensor data.
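    The underlying evidence-combination step can be illustrated with plain Dempster-Shafer combination over a small frame of discernment (the generalization used in the paper for time estimation is not reproduced here, and the sensor masses below are invented):

        from itertools import product

        def dempster_combine(m1, m2):
            """Dempster's rule of combination for mass functions over frozenset focal elements."""
            combined, conflict = {}, 0.0
            for (a, w1), (b, w2) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
            if conflict >= 1.0:
                raise ValueError("total conflict: sources cannot be combined")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Frame of discernment for a sensing failure: camera fault (C), lighting (L), occlusion (O).
        C, L, O = frozenset("C"), frozenset("L"), frozenset("O")
        ALL = C | L | O

        m_vision = {C: 0.5, C | L: 0.3, ALL: 0.2}      # evidence from image statistics (assumed)
        m_range = {L: 0.4, C | L: 0.4, ALL: 0.2}       # evidence from the range sensor (assumed)

        for focal, mass in sorted(dempster_combine(m_vision, m_range).items(), key=lambda kv: -kv[1]):
            print(set(focal), round(mass, 3))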

  13. Mathematical Modeling Of The Terrain Around A Robot

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1992-01-01

    In a conceptual system for modeling the terrain around an autonomous mobile robot, the representation of terrain used for control is separated from the representation provided by sensors. The concept takes the motion-planning system out from under the constraints imposed by the discrete spatial intervals of square terrain grid(s). The separation allows the sensing and motion-controlling systems to operate asynchronously, facilitating integration of new map and sensor data into the planning of motions.

  14. Map generation in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks

    NASA Astrophysics Data System (ADS)

    Nishihta, Sho; Maeyama, Shoichi; Watanebe, Keigo

    2018-02-01

    Recently, autonomous mobile robots that collect information at disaster sites are being developed. Since it is difficult to obtain maps of disaster sites in advance, robots capable of autonomous movement in unknown environments are required. For this objective, the robots have to build maps as well as estimate their own location; this is called the SLAM problem. In particular, AUKF-SLAM, which uses corners in the environment as point-type landmarks, has been developed as a solution method. However, when the robots move in an environment like a corridor consisting of few point-type features, the accuracy of the self-location estimated from such landmarks decreases, which causes distortions in the map. In this research, we propose AUKF-SLAM which uses walls in the environment as line segment-type landmarks. We demonstrate that the robot can generate maps in unknown environments by AUKF-SLAM, using line segment-type and point-type landmarks.

  15. On autonomous terrain model acquistion by a mobile robot

    NASA Technical Reports Server (NTRS)

    Rao, N. S. V.; Iyengar, S. S.; Weisbin, C. R.

    1987-01-01

    The following problem is considered: A point robot is placed in a terrain populated by an unknown number of polyhedral obstacles of varied sizes and locations in two/three dimensions. The robot is equipped with a sensor capable of detecting all the obstacle vertices and edges that are visible from the present location of the robot. The robot is required to autonomously navigate and build the complete terrain model using the sensor information. It is established that the necessary number of scanning operations needed for complete terrain model acquisition by any algorithm based on a scan-from-vertices strategy is $\sum_{i=1}^{n} N(O_i) - n$ in two-dimensional terrains and $\sum_{i=1}^{n} N(O_i) - 2n$ in three-dimensional terrains, where $O = \{O_1, O_2, \ldots, O_n\}$ is the set of obstacles in the terrain and $N(O_i)$ is the number of vertices of obstacle $O_i$.
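    For a concrete instance of the bound (the numbers are chosen purely for illustration), consider a 2-D terrain containing $n$ triangular obstacles, so that $N(O_i) = 3$ for every $i$; any scan-from-vertices algorithm then needs

        \[
            \sum_{i=1}^{n} N(O_i) - n \;=\; 3n - n \;=\; 2n
        \]

    scanning operations, i.e., 10 scans for $n = 5$ triangles.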

  16. The VIPER project (Visualization Integration Platform for Exploration Research): a biologically inspired autonomous reconfigurable robotic platform for diverse unstructured environments

    NASA Astrophysics Data System (ADS)

    Schubert, Oliver J.; Tolle, Charles R.

    2004-09-01

    Over the last decade the world has seen numerous autonomous vehicle programs. Wheel and track designs are the basis for many of these vehicles. This is primarily due to four main reasons: a vast preexisting knowledge base for these designs, energy efficiency of power sources, scalability of actuators, and the lack of control systems technologies for handling alternate highly complex distributed systems. Though large efforts seek to improve the mobility of these vehicles, many limitations still exist for these systems within unstructured environments, e.g. limited mobility within industrial and nuclear accident sites where existing plant configurations have been extensively changed. These unstructured operational environments include missions for exploration, reconnaissance, and emergency recovery of objects within reconfigured or collapsed structures, e.g. bombed buildings. More importantly, these environments present a clear and present danger for direct human interactions during the initial phases of recovery operations. Clearly, the current classes of autonomous vehicles are incapable of performing in these environments. Thus the next generation of designs must include highly reconfigurable and flexible autonomous robotic platforms. This new breed of autonomous vehicles will be both highly flexible and environmentally adaptable. Presented in this paper is one of the most successful designs from nature, the snake-eel-worm (SEW). This design implements shape memory alloy (SMA) actuators, which allow for scaling of the robotic SEW designs from sub-micron scale to heavy industrial implementations without major conceptual redesigns as required in traditional hydraulic, pneumatic, or motor-driven systems. Autonomous vehicles based on the SEW design possess the ability to easily move between air-based environments and fluid-based environments with limited or no reconfiguration. With a SEW-designed vehicle, one not only achieves vastly improved maneuverability within a highly unstructured environment, but also gains robotic manipulation abilities, normally relegated as secondary add-ons within existing vehicles, all within one small condensed package. The prototype design presented includes a Beowulf-style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at the DOE's INEEL.

  17. Autonomous Rovers for Polar Science Campaigns

    NASA Astrophysics Data System (ADS)

    Lever, J. H.; Ray, L. E.; Williams, R. M.; Morlock, A. M.; Burzynski, A. M.

    2012-12-01

    We have developed and deployed two over-snow autonomous rovers able to conduct remote science campaigns on Polar ice sheets. Yeti is an 80-kg, four-wheel-drive (4WD) battery-powered robot with 3 - 4 hr endurance, and Cool Robot is a 60-kg 4WD solar-powered robot with unlimited endurance during Polar summers. Both robots navigate using GPS waypoint-following to execute pre-planned courses autonomously, and they can each carry or tow 20 - 160 kg instrument payloads over typically firm Polar snowfields. In 2008 - 12, we deployed Yeti to conduct autonomous ground-penetrating radar (GPR) surveys to detect hidden crevasses to help establish safe routes for overland resupply of research stations at South Pole, Antarctica, and Summit, Greenland. We also deployed Yeti with GPR at South Pole in 2011 to identify the locations of potentially hazardous buried buildings from the original 1950's-era station. Autonomous surveys remove personnel from safety risks posed during manual GPR surveys by undetected crevasses or buried buildings. Furthermore, autonomous surveys can yield higher quality and more comprehensive data than manual ones: Yeti's low ground pressure (20 kPa) allows it to cross thinly bridged crevasses or other voids without interrupting a survey, and well-defined survey grids allow repeated detection of buried voids to improve detection reliability and map their extent. To improve survey efficiency, we have automated the mapping of detected hazards, currently identified via post-survey manual review of the GPR data. Additionally, we are developing machine-learning algorithms to detect crevasses autonomously in real time, with reliability potentially higher than manual real-time detection. These algorithms will enable the rover to relay crevasse locations to a base station for near real-time mapping and decision-making. We deployed Cool Robot at Summit Station in 2005 to verify its mobility and power budget over Polar snowfields. Using solar power, this zero-emissions rover could travel more than 500 km per week during Polar summers and provide 100 - 200 W to power instrument payloads to help investigate the atmosphere, magnetosphere, glaciology and sub-glacial geology in Antarctica and Greenland. We are currently upgrading Cool Robot's navigation and solar-power systems and will deploy it during 2013 to map the emissions footprint around Summit Station to demonstrate its potential to execute long-endurance Polar science campaigns. These rovers could assist science traverses to chart safe routes into the interior of Antarctica and Greenland or conduct autonomous, remote science campaigns to extend spatial and temporal coverage for data collection. Our goals include 1,000 - 2,000-km summertime traverses of Antarctica and Greenland, safe navigation through 0.5-m amplitude sastrugi fields, survival in blizzards, and rover-network adaptation to research events of opportunity. We are seeking Polar scientists interested in autonomous, mobile data collection and can adapt the rovers to meet their requirements.

  18. A neural network-based exploratory learning and motor planning system for co-robots

    PubMed Central

    Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640

  19. A neural network-based exploratory learning and motor planning system for co-robots.

    PubMed

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
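    The "learning by doing" loop can be approximated with a tiny sketch (the linear command-to-displacement model, random babbling commands, and simulated camera response are stand-ins for the Calliope's actual kinematics and neural model): random commands are issued, the resulting visual displacements are recorded, a least-squares map is fitted, and the map is inverted to pick a command that drives a tracked point toward the image centre.

        import numpy as np

        rng = np.random.default_rng(1)

        # Unknown "body": how wheel/arm velocity commands move a tracked point in the image.
        TRUE_J = np.array([[12.0, -3.0],
                           [1.5, 9.0]])              # hidden ground truth (simulation only)

        def observe(cmd):
            """Simulated visual displacement for a velocity command, with sensor noise."""
            return TRUE_J @ cmd + rng.normal(0.0, 0.2, 2)

        # --- Motor babbling: random commands, recorded displacements ------------------
        commands = rng.uniform(-1.0, 1.0, size=(100, 2))
        displacements = np.array([observe(c) for c in commands])

        # Least-squares fit of the command -> displacement map (the learned internal model).
        J_hat, *_ = np.linalg.lstsq(commands, displacements, rcond=None)
        J_hat = J_hat.T                              # so that displacement ~= J_hat @ cmd

        # --- Goal-directed use of the learned model -----------------------------------
        target_offset = np.array([30.0, -12.0])      # pixels from image centre (assumed)
        cmd = np.linalg.solve(J_hat, target_offset)  # command predicted to cancel the offset
        print("chosen command:", np.round(cmd, 2), "predicted move:", np.round(J_hat @ cmd, 1))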

  20. Tracking Control of Mobile Robots Localized via Chained Fusion of Discrete and Continuous Epipolar Geometry, IMU and Odometry.

    PubMed

    Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas

    2013-08-01

    This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.

  1. Evaluation of a Mobile Platform for Proof-of-Concept Autonomous Site Selection and Preparation

    NASA Astrophysics Data System (ADS)

    Gammell, Jonathan

    A mobile robotic platform for Autonomous Site Selection and Preparation (ASSP) was developed for an analogue deployment to Mauna Kea, Hawai`i. A team of rovers performed an autonomous Ground Penetrating Radar (GPR) survey and constructed a level landing pad. They used interchangeable payloads that allowed the GPR and blade to be easily exchanged. Autonomy was accomplished by integrating the individual hardware devices with software based on the ArgoSoft framework previously developed at UTIAS. The rovers were controlled by an on-board netbook. The successes and failures of the devices and software modules are evaluated within. Recommendations are presented to address problems discovered during the deployment and to guide future research on the platform.

  2. Investigation of human-robot interface performance in household environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  3. Nonuniform Deployment of Autonomous Agents in Harbor-Like Environments

    DTIC Science & Technology

    2014-11-12

    [Only text fragments were extracted for this record: part of a definition of the Voronoi partition (the region closer to the ith agent than to all other agents, with readers referred to Okabe et al. for a comprehensive study of Voronoi partitioning and its applications) and reference entries on RFID-based mobile robot navigation (a University of Ottawa PhD dissertation, October 2012, and a stochastic RFID-based navigation approach presented at the International Conference on Signals, Circuits and Systems).]

  4. Evaluating the Dynamics of Agent-Environment Interaction

    DTIC Science & Technology

    2001-05-01

    [Only text fragments were extracted for this record: each robot is equipped with a color sensor in the gripper, a radio transmitter/receiver for communication and data gathering, and an ultrasound/radio triangulation system for localization; the remainder consists of reference entries, including Autonomous Robots 4(4), 387-403, and Vaughan et al. (2000), "Whistling in the Dark".]

  5. Distance-Based Behaviors for Low-Complexity Control in Multiagent Robotics

    NASA Astrophysics Data System (ADS)

    Pierpaoli, Pietro

    Several biological examples show that living organisms cooperate to collectively accomplish tasks impossible for single individuals. More importantly, this coordination is often achieved with a very limited set of information. Inspired by these observations, research on autonomous systems has focused on the development of distributed control techniques for the control and guidance of groups of autonomous mobile agents, or robots. From an engineering perspective, when coordination and cooperation are sought in large ensembles of robotic vehicles, a reduction in hardware and algorithm complexity becomes mandatory from the very early stages of the project design. Solutions capable of lowering power consumption and cost while increasing reliability are thus worth investigating. In this work, we studied low-complexity techniques to achieve cohesion and control in swarms of autonomous robots. Starting from an inspiring two-agent example, we introduced the effects of neighbors' relative positions on the control of an autonomous agent. The extension of this intuition addressed the control of large ensembles of autonomous vehicles, and was applied in the form of a herding-like technique. To this end, a low-complexity distance-based aggregation protocol was defined. We first showed that our protocol produced cohesive aggregation among the agents while avoiding inter-agent collisions. Then, a feedback leader-follower architecture was introduced for the control of the swarm. We also described how proximity measures and the probability of collisions with neighbors can be used as a source of information in highly populated environments.
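    A low-complexity distance-based aggregation rule of this general flavour can be sketched as follows (the interaction function, gains, and leader term are illustrative and are not the dissertation's protocol): each agent is attracted to neighbours beyond a desired spacing and repelled by neighbours inside it, while a leader bias steers the whole group.

        import numpy as np

        rng = np.random.default_rng(2)
        N, d_des = 12, 1.0                       # agents and desired inter-agent spacing (assumed)
        pos = rng.uniform(0, 5, size=(N, 2))
        goal = np.array([10.0, 10.0])            # known only to the leader (agent 0)

        def step(pos, dt=0.05, k=0.6, k_leader=0.8):
            vel = np.zeros_like(pos)
            for i in range(N):
                for j in range(N):
                    if i == j:
                        continue
                    diff = pos[j] - pos[i]
                    dist = np.linalg.norm(diff) + 1e-9
                    # Attractive beyond d_des, repulsive inside it; only distance is needed.
                    vel[i] += k * (dist - d_des) * diff / dist
            vel[0] += k_leader * (goal - pos[0]) / np.linalg.norm(goal - pos[0])  # leader bias
            return pos + dt * vel

        for _ in range(600):
            pos = step(pos)

        centroid = pos.mean(axis=0)
        spacing = [np.linalg.norm(pos[i] - pos[j]) for i in range(N) for j in range(i + 1, N)]
        print("centroid:", np.round(centroid, 2), "min spacing:", round(min(spacing), 2))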

  6. Planning perception and action for cognitive mobile manipulators

    NASA Astrophysics Data System (ADS)

    Gaschler, Andre; Nogina, Svetlana; Petrick, Ronald P. A.; Knoll, Alois

    2013-12-01

    We present a general approach to perception and manipulation planning for cognitive mobile manipulators. Rather than hard-coding single-purpose robot applications, a robot should be able to reason about its basic skills in order to solve complex problems autonomously. Humans intuitively solve tasks in real-world scenarios by breaking down abstract problems into smaller sub-tasks and use heuristics based on their previous experience. We apply a similar idea for planning perception and manipulation to cognitive mobile robots. Our approach is based on contingent planning and run-time sensing, integrated in our "knowledge of volumes" planning framework, called KVP. Using the general-purpose PKS planner, we model information-gathering actions at plan time that have multiple possible outcomes at run time. As a result, perception and sensing arise as necessary preconditions for manipulation, rather than being hard-coded as tasks themselves. We demonstrate the effectiveness of our approach on two scenarios covering visual and force sensing on a real mobile manipulator.

  7. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing the extracted features. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the position of the robot. Therefore, image transformation was required to implement self-localization. Second, we used an approach to transform the omni-directional images into panoramic images. Hence, the distortion of the white lines can be corrected through the transformation. The interest points that form the corners of the landmark were then located using the features from accelerated segment test (FAST) algorithm, in which a circle of sixteen pixels surrounding each corner candidate is examined, making it a high-speed feature detector suitable for real-time frame rates. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented to choose among the corners obtained from the FAST algorithm and localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error in a soccer field measuring 600 cm x 400 cm.
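
    As an illustration of the unwrapping step described above, the Python sketch below maps an omnidirectional image onto a panoramic one by sampling along radial scan-lines around the mirror centre. The image centre, radius range and output size are assumed parameters; the FAST corner detection and the dual-circle/trilateration steps are not reproduced here.

```python
import numpy as np

def unwrap_omni(img, center, r_min, r_max, out_w=720, out_h=160):
    """Unwrap an omnidirectional image into a panoramic image by
    sampling along radial scan-lines around the mirror centre.
    img: HxW (or HxWx3) array; center: (cx, cy) in pixels."""
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_min, r_max, out_h)
    # Source coordinates for every (radius, angle) pair.
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).astype(int)
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).astype(int)
    xs = np.clip(xs, 0, img.shape[1] - 1)
    ys = np.clip(ys, 0, img.shape[0] - 1)
    return img[ys, xs]   # nearest-neighbour panoramic image

# Example with a synthetic frame; real use would pass a camera image.
omni = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
pano = unwrap_omni(omni, center=(320, 240), r_min=60, r_max=230)
```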

  8. Single-Command Approach and Instrument Placement by a Robot on a Target

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Cheng, Yang

    2005-01-01

    AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.

  9. Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance.

    PubMed

    Payá, Luis; Reinoso, Oscar; Jiménez, Luis M; Juliá, Miguel

    2017-01-01

    Over the past years, mobile robots have proliferated both in domestic and in industrial environments to solve tasks such as cleaning, assistance, or material transportation. One of their advantages is the ability to operate in wide areas without the necessity of introducing changes into the existing infrastructure. Thanks to the sensors they may be equipped with and their processing systems, mobile robots constitute a versatile alternative to solve a wide range of applications. When designing the control system of a mobile robot so that it carries out a task autonomously in an unknown environment, it is expected to take decisions about its localization in the environment and about the trajectory that it has to follow in order to arrive at the target points. More specifically, the robot has to find a relatively good solution to two crucial problems: building a model of the environment, and estimating the position of the robot within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images of the environment. First, the robot goes along the trajectories to include in the model and uses the visual information captured to build this model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to solve these problems, global appearance techniques are used in this work. They have emerged recently as a robust and efficient alternative to landmark extraction techniques. A global description method based on the Radon Transform is used to design the mapping and localization algorithms, and a set of images captured by a mobile robot in a real environment, under realistic operating conditions, is used to test the performance of these algorithms.
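
    The following sketch shows one plausible way to build a Radon-transform global-appearance descriptor and match it against descriptors stored along a trajectory, in the spirit of the method above. It relies on scikit-image's radon and resize functions and should be read as an illustrative outline, not the authors' exact pipeline.

```python
import numpy as np
from skimage.transform import radon, resize

def radon_descriptor(image, n_angles=60, size=(64, 64)):
    """Global-appearance descriptor: the Radon transform of a
    downsampled (grey-level) image, flattened and normalised."""
    small = resize(image, size, anti_aliasing=True)
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(small, theta=angles, circle=False)
    d = sinogram.ravel()
    return d / (np.linalg.norm(d) + 1e-12)

def localise(query_descriptor, map_descriptors):
    """Index of the stored trajectory image whose descriptor is
    closest (Euclidean distance) to the query descriptor."""
    dists = [np.linalg.norm(query_descriptor - m) for m in map_descriptors]
    return int(np.argmin(dists))

# Usage outline:
#   map_descriptors = [radon_descriptor(img) for img in trajectory_images]
#   idx = localise(radon_descriptor(current_image), map_descriptors)
```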

  10. A 2D chaotic path planning for mobile robots accomplishing boundary surveillance missions in adversarial conditions

    NASA Astrophysics Data System (ADS)

    Curiac, Daniel-Ioan; Volosencu, Constantin

    2014-10-01

    The path-planning algorithm represents a crucial issue for every autonomous mobile robot. In normal circumstances a patrol robot will compute an optimal path to ensure its task accomplishment, but in adversarial conditions the problem becomes more complicated. Here, the robot's trajectory needs to be altered into a misleading and unpredictable path to cope with potential opponents. Chaotic systems provide the framework needed for obtaining unpredictable motion in all three basic robot surveillance missions: area, points-of-interest and boundary monitoring. Proficient approaches have been provided for the first two surveillance tasks, but for boundary patrol missions no method has been reported yet. This paper addresses the mentioned research gap by proposing an efficient method, based on the chaotic dynamics of the Hénon system, to ensure unpredictable boundary patrol on any shape of chosen closed contour.
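
    A minimal sketch of the idea, assuming a rectangular patrol contour: iterates of the Hénon map are mapped to arc-length fractions along the closed boundary, so consecutive patrol waypoints look erratic to an observer while still lying on the contour. The map parameters and the rectangle dimensions are illustrative only, not those used in the paper.

```python
import numpy as np

def henon_patrol_fractions(n_points, a=1.4, b=0.3, x0=0.1, y0=0.0):
    """Generate n_points pseudo-random arc-length fractions in [0, 1)
    from the chaotic Henon map x_{k+1} = 1 - a*x_k^2 + y_k, y_{k+1} = b*x_k."""
    x, y = x0, y0
    fractions = []
    for _ in range(n_points):
        x, y = 1.0 - a * x * x + y, b * x
        # Map the bounded chaotic state (roughly [-1.5, 1.5]) into [0, 1).
        fractions.append(((x + 1.5) / 3.0) % 1.0)
    return fractions

def point_on_rectangle(frac, width=10.0, height=6.0):
    """Convert an arc-length fraction into an (x, y) waypoint on a
    rectangular boundary to be visited by the patrol robot."""
    s = frac * 2.0 * (width + height)
    if s < width:
        return (s, 0.0)
    s -= width
    if s < height:
        return (width, s)
    s -= height
    if s < width:
        return (width - s, height)
    return (0.0, height - (s - width))

waypoints = [point_on_rectangle(f) for f in henon_patrol_fractions(20)]
```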

  11. Creating a Mobile Autonomous Robot Research System (MARRS)

    DTIC Science & Technology

    1984-12-01

    Laboratory was made possible through the energetic support of many individuals and organizations. In particular, we want to thank our thesis advisor Dr...subsystems. Computer Hardware Until a few years ago autonomous vehicles were unheard of in real life. The advent of the microcomputer has made fact...Most software development efforts for MARRS-1 took advantage of Virtual Devices Robo C compiler and Robo Assembler. The next best

  12. Toward autonomous driving: The CMU Navlab. II - Architecture and systems

    NASA Technical Reports Server (NTRS)

    Thorpe, Charles; Hebert, Martial; Kanade, Takeo; Shafer, Steven

    1991-01-01

    A description is given of EDDIE, the architecture for the Navlab mobile robot which provides a toolkit for building specific systems quickly and easily. Included in the discussion are the annotated maps used by EDDIE and the Navlab's road-following system, called the Autonomous Mail Vehicle, which was built using EDDIE and its annotated maps as a basis. The contributions of the Navlab project and the lessons learned from it are examined.

  13. A prototype home robot with an ambient facial interface to improve drug compliance.

    PubMed

    Takacs, Barnabas; Hanak, David

    2008-01-01

    We have developed a prototype home robot to improve drug compliance. The robot is a small mobile device, capable of autonomous behaviour, as well as remotely controlled operation via a wireless datalink. The robot is capable of face detection and also has a display screen to provide facial feedback to help motivate patients and thus increase their level of compliance. An RFID reader can identify tags attached to different objects, such as bottles, for fluid intake monitoring. A tablet dispenser allows drug compliance monitoring. Despite some limitations, experience with the prototype suggests that simple and low-cost robots may soon become feasible for care of people living alone or in isolation.

  14. Reasoning and planning in dynamic domains: An experiment with a mobile robot

    NASA Technical Reports Server (NTRS)

    Georgeff, M. P.; Lansky, A. L.; Schoppers, M. J.

    1987-01-01

    Progress made toward having an autonomous mobile robot reason and plan complex tasks in real-world environments is described. To cope with the dynamic and uncertain nature of the world, researchers use a highly reactive system to which are attributed attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the system need only be partly elaborated before it decides to act. This allows the system to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of over-commitment common to previous planners. In addition, the system is continuously reactive and has the ability to change its goals and intentions as situations warrant. Thus, while the system architecture allows for reasoning about means and ends in much the same way as traditional planners, it also possesses the reactivity required for survival in complex real-world domains. The system was tested using SRI's autonomous robot (Flakey) in a scenario involving navigation and the performance of an emergency task in a space station setting.

  15. Distributing Planning and Control for Teams of Cooperating Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, L.E.

    2004-07-19

    This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of our control approaches for distributed planning and cooperation in multi-robot teams. The primary objectives of this research project were to: (1) Develop autonomous control technologies to enable multiple vehicles to work together cooperatively, (2) Provide the foundational capabilities for a human operator to exercise oversight and guidance during the multi-vehicle task execution, and (3) Integrate these capabilities to the ALLIANCE-based autonomous control approach for multi-robot teams. These objectives have been successfully met with the results implemented and demonstrated in a near real-time multi-vehicle simulation of up to four vehicles performing mission-relevant tasks.

  16. Mobile robot navigation modulated by artificial emotions.

    PubMed

    Lee-Johnson, C P; Carnegie, D A

    2010-04-01

    For artificial intelligence research to progress beyond the highly specialized task-dependent implementations achievable today, researchers may need to incorporate aspects of biological behavior that have not traditionally been associated with intelligence. Affective processes such as emotions may be crucial to the generalized intelligence possessed by humans and animals. A number of robots and autonomous agents have been created that can emulate human emotions, but the majority of this research focuses on the social domain. In contrast, we have developed a hybrid reactive/deliberative architecture that incorporates artificial emotions to improve the general adaptive performance of a mobile robot for a navigation task. Emotions are active on multiple architectural levels, modulating the robot's decisions and actions to suit the context of its situation. Reactive emotions interact with the robot's control system, altering its parameters in response to appraisals from short-term sensor data. Deliberative emotions are learned associations that bias path planning in response to eliciting objects or events. Quantitative results are presented that demonstrate situations in which each artificial emotion can be beneficial to performance.

  17. Concurrent planning and execution for a walking robot

    NASA Astrophysics Data System (ADS)

    Simmons, Reid

    1990-07-01

    The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance was increased in excess of 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.

  18. A review of physical security robotics at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roerig, S.C.

    1990-01-01

    As an outgrowth of research into physical security technologies, Sandia is investigating the role of robotics in security systems. Robotics may allow more effective utilization of guard forces, especially in scenarios where personnel would be exposed to harmful environments. Robots can provide intrusion detection and assessment functions for failed sensors or transient assets, can test existing fixed site sensors, and can gather additional intelligence and dispense delaying elements. The Robotic Security Vehicle (RSV) program for DOE/OSS is developing a fieldable prototype for an exterior physical security robot based upon a commercial four wheel drive vehicle. The RSV will be capable of driving itself, being driven remotely, or being driven by an onboard operator around a site and will utilize its sensors to alert an operator to unusual conditions. The Remote Security Station (RSS) program for the Defense Nuclear Agency is developing a proof-of-principle robotic system which will be used to evaluate the role, and associated cost, of robotic technologies in exterior security systems. The RSS consists of an independent sensor pod, a mobile sensor platform and a control and display console. Sensor data fusion is used to optimize the system's intrusion detection performance. These programs are complementary, the RSV concentrates on developing autonomous mobility, while the RSS thrust is on mobile sensor employment. 3 figs.

  19. A Space Station robot walker and its shared control software

    NASA Technical Reports Server (NTRS)

    Xu, Yangsheng; Brown, Ben; Aoki, Shigeru; Yoshida, Tetsuji

    1994-01-01

    In this paper, we first briefly overview the update of the self-mobile space manipulator (SMSM) configuration and testbed. The new robot is capable of projecting cameras anywhere interior or exterior of the Space Station Freedom (SSF), and will be an ideal tool for inspecting connectors, structures, and other facilities on SSF. Experiments have been performed under two gravity compensation systems and a full-scale model of a segment of SSF. This paper presents a real-time shared control architecture that enables the robot to coordinate autonomous locomotion and teleoperation input for reliable walking on SSF. Autonomous locomotion can be executed based on a CAD model and off-line trajectory planning, or can be guided by a vision system with neural network identification. Teleoperation control can be specified by a real-time graphical interface and a free-flying hand controller. SMSM will be a valuable assistant for astronauts in inspection and other EVA missions.

  20. Towards autonomous locomotion: CPG-based control of smooth 3D slithering gait transition of a snake-like robot.

    PubMed

    Bing, Zhenshan; Cheng, Long; Chen, Guang; Röhrbein, Florian; Huang, Kai; Knoll, Alois

    2017-04-04

    Snake-like robots with 3D locomotion ability have significant advantages over traditional legged or wheeled mobile robots for adaptive travel in diverse complex terrain. Despite numerous developed gaits, these snake-like robots suffer from unsmooth gait transitions when changing locomotion speed, direction, and body shape, which can cause undesired movement and abnormal torque. Hence, there exists a knowledge gap for snake-like robots to achieve autonomous locomotion. To address this problem, this paper presents smooth slithering gait transition control based on a lightweight central pattern generator (CPG) model for snake-like robots. First, based on the convergence behavior of the gradient system, a lightweight CPG model with fast computing time was designed and compared with other widely adopted CPG models. Then, by reshaping the body into a more stable geometry, the slithering gait was modified and studied based on the proposed CPG model, including gait transitions of locomotion speed, moving direction, and body shape. In contrast to the sinusoid-based method, extensive simulations and prototype experiments demonstrated that smooth slithering gait transitions can be effectively achieved using the proposed CPG-based control method without generating undesired locomotion or abnormal torque.
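
    The paper's lightweight gradient-system CPG is not reproduced here; as a stand-in, the sketch below uses amplitude-controlled phase oscillators (a common CPG formulation) to show how first-order amplitude dynamics turn a step change in the commanded amplitude into a smooth gait transition. All gains, frequencies and the phase lag are assumed values.

```python
import numpy as np

def phase_oscillator_cpg(n_joints=8, steps=6000, dt=0.005,
                         amp_cmd=lambda t: 0.3 if t < 15.0 else 0.6,
                         freq_cmd=lambda t: 0.5, phase_lag=np.pi / 4, a=2.0):
    """Stand-in CPG: joint angle theta_i = r_i * sin(phi_i), where the
    amplitudes r_i follow first-order dynamics toward the commanded value,
    so a step change in amp_cmd produces a smooth transition instead of a
    discontinuous jump in the joint trajectories."""
    phi = -phase_lag * np.arange(n_joints)   # travelling-wave phase offsets
    r = np.full(n_joints, 0.3)
    angles = np.zeros((steps, n_joints))
    for k in range(steps):
        t = k * dt
        phi = phi + dt * (2.0 * np.pi * freq_cmd(t))
        r = r + dt * a * (amp_cmd(t) - r)    # smooth convergence to the command
        angles[k] = r * np.sin(phi)
    return angles

joint_trajectories = phase_oscillator_cpg()
```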

  1. Physiological and Behavioral Responses of Dairy Cattle to the Introduction of Robot Scrapers.

    PubMed

    Doerfler, Renate L; Lehermeier, Christina; Kliem, Heike; Möstl, Erich; Bernhardt, Heinz

    2016-01-01

    Autonomous mobile robot scrapers are increasingly used in order to clean the floors on dairy farms. Given the complexity of robot scraper operation, stress may occur in cows due to unpredictability of the situation. Experiencing stress can impair animal welfare and, in the long term, the health and milk production of the cows. Therefore, this study addressed potential stress responses of dairy cattle to the robot scraper after introducing the autonomous mobile machine. Thirty-six cows in total were studied on three different farms to explore possible modifications in cardiac function, behavior, and adrenocortical activity. The research protocol on each farm consisted of four experimental periods including one baseline measurement without robot scraper operation followed by three test measurements, in which cows interacted with the robotic cleaning system. Interbeat intervals were recorded in order to calculate the heart rate variability (HRV) parameter RMSSD; behavior was observed to determine time budgets; and fecal samples were collected for analysis of the cortisol metabolites concentration. A statistical analysis was carried out using linear mixed-effects models. HRV decline immediately after the introduction of the robot scraper and modified behavior in the subsequent experimental periods indicated a stress response. The cortisol metabolites concentration remained constant. It is hypothesized that after the initial phase of decrease, HRV stabilized through the behavioral adjustments of the cows in the second part of the study. Persistent alterations in behavior gave rise to the assumption that the animals' habituation process to the robot scraper was not yet completed. In summary, the present study illustrated that the cows showed minor signs of disturbance toward the robotic cleaning system. Thus, our findings suggest that dairy cattle can largely adjust their behavior to avoid aversive effects on animal welfare. Additional research can provide further insight into the development of the animal-machine interaction beyond the initial phase of robot scraper operation considered in this study.
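
    RMSSD is a standard time-domain heart-rate-variability measure; a quick sketch of its computation from interbeat (RR) intervals is given below. The interval values in the example are invented for illustration and are not data from the study.

```python
import numpy as np

def rmssd(interbeat_intervals_ms):
    """Root mean square of successive differences of interbeat (RR) intervals,
    a time-domain HRV measure; a drop in RMSSD is commonly read as reduced
    vagal activity, i.e. a possible stress response."""
    ibi = np.asarray(interbeat_intervals_ms, dtype=float)
    diffs = np.diff(ibi)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Example with fabricated intervals: a lower RMSSD after an intervention
# would suggest a stress response.
baseline = [812, 795, 830, 805, 820, 798, 815]
after = [760, 758, 762, 759, 761, 757, 760]
print(rmssd(baseline), rmssd(after))
```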

  2. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot.

    PubMed

    Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-02-24

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
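
    The paper's crop-row extraction works under uncontrolled lighting; as a simplified illustration only, the sketch below uses the common excess-green index and a column-wise vegetation histogram to estimate the lateral offset of the central row, which a fuzzy steering controller could then consume as its error input. The threshold and the image layout are assumptions, not the authors' method.

```python
import numpy as np

def central_row_offset(rgb):
    """Estimate the lateral offset of the dominant crop row in a
    front-facing image using excess-green segmentation (2G - R - B)
    followed by a column-wise vegetation histogram."""
    img = rgb.astype(float)
    exg = 2.0 * img[..., 1] - img[..., 0] - img[..., 2]
    mask = exg > exg.mean() + exg.std()      # crude vegetation threshold
    column_votes = mask.sum(axis=0)          # vegetation count per column
    row_col = int(np.argmax(column_votes))   # column of the dominant row
    centre = rgb.shape[1] / 2.0
    return (row_col - centre) / centre       # normalised offset in [-1, 1]

# The offset would feed a fuzzy steering controller as its error input.
frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
err = central_row_offset(frame)
```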

  3. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot

    PubMed Central

    Bengochea-Guevara, José M.; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-01-01

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them. PMID:26927102

  4. Robot flow, clogging and jamming in confined spaces

    NASA Astrophysics Data System (ADS)

    Monaenkova, Daria; Linevich, Vadim; Goodisman, Michael A. D.; Goldman, Daniel I.

    We hypothesized that when a collection of robots operates in a confined space, maximization of individual effort could negatively affect collective performance by impeding the mobility of the individuals. To test our hypothesis, we built and programmed groups of 1-4 autonomous robotic diggers to construct a tunnel in a model cohesive soil. The robots' mobility, defined in terms of the residence time (T) required for a robot to move one body-length within the tunnel, was compared between groups of maximally active robots (mode 1), groups with different levels of activity between individuals (mode 2), and maximally active robots with a "giving up" behavior (mode 3), in which a robot ceased its attempt to excavate in a crowded tunnel. In small groups of two robots, T was ~3 sec and did not depend on the mode of operation. However, an increase in the number of robots caused an increase in T which depended upon mode. The residence time in groups of four robots in mode 1 (~9 sec) significantly exceeded the residence time in modes 2 and 3 (~4 sec), indicating that crowding was causing slower movement of individuals, particularly under maximum effort (mode 1). We will use our robophysical studies to discover principles of collective construction in subterranean social animals.

  5. Mobility-Dependent Motion Planning for High Speed Robotic Vehicles

    DTIC Science & Technology

    2008-07-25

    of the vehicle's mobility in such type of terrain. Moreover, autonomous driving of wheeled vehicles at high speeds adds a new level of complexity due...dynamic effects such as wheel slip, skidding, ballistic behavior, rollover, and vehicle-terrain interaction phenomena. Navigation algorithms must also...description of mobility was defined as the probability that for a given initial velocity at an initial position the robot will have a non-negative velocity

  6. Learning for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral and derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. A creative control process that goes "beyond the adaptive critic" is then used. A mathematical model of the creative control process is presented that illustrates its use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
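
    As a toy illustration of learning PID parameters online (a stand-in for the neural-network and adaptive-critic schemes discussed above, not the authors' method), the sketch below nudges the three gains with a crude gradient-flavoured update driven by the tracking error, under the assumption that increasing the control action reduces the error.

```python
class AdaptivePID:
    """Minimal sketch of an online-tuned PID controller for, e.g., a wheel
    motor whose load changes with the terrain.  The gains are nudged in
    proportion to the error and the corresponding error term, a simplified
    MIT-rule-style heuristic rather than a full adaptive-critic scheme."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, eta=1e-4, dt=0.02):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.eta, self.dt = eta, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Crude online adaptation: persistent errors grow the gains.
        self.kp += self.eta * err * err
        self.ki += self.eta * err * self.integral
        self.kd += self.eta * err * deriv
        self.prev_err = err
        return u

# Usage: u = controller.step(desired_speed, measured_speed) each control cycle.
```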

  7. Three-dimensional sensor system using multistripe laser and stereo camera for environment recognition of mobile robots

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.

    2002-10-01

    In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots for serving humans or as industrial robots for replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. Structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting 3D information is based on the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints that the set of cameras provides, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
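
    The depth computation behind optical triangulation can be written down compactly. The sketch below gives the classic two-ray formula for a single laser/camera pair with an assumed baseline and focal length; the multi-stripe correspondence via epipolar constraints described above is outside its scope.

```python
import math

def triangulate_depth(alpha_rad, beta_rad, baseline_m):
    """Classic active-triangulation depth.  Camera and laser projector are
    separated by baseline_m along the x-axis; alpha_rad and beta_rad are the
    angles that the laser ray and the camera viewing ray make with the
    baseline.  Depth perpendicular to the baseline:
        Z = b * tan(alpha) * tan(beta) / (tan(alpha) + tan(beta))."""
    ta, tb = math.tan(alpha_rad), math.tan(beta_rad)
    return baseline_m * ta * tb / (ta + tb)

def pixel_to_camera_angle(u_px, cx_px, f_px):
    """Viewing-ray angle with the baseline for image column u, assuming the
    optical axis is perpendicular to the baseline (pinhole model)."""
    return math.pi / 2.0 - math.atan2(u_px - cx_px, f_px)

# Example with hypothetical sensor parameters.
beta = pixel_to_camera_angle(u_px=412.0, cx_px=320.0, f_px=600.0)
z = triangulate_depth(alpha_rad=math.radians(60.0), beta_rad=beta, baseline_m=0.12)
```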

  8. Recognition of flow in everyday life using sensor agent robot with laser range finder

    NASA Astrophysics Data System (ADS)

    Goshima, Misa; Mita, Akira

    2011-04-01

    In the present paper, we suggest an algorithm for a sensor agent robot with a laser range finder to recognize the flows of residents in living spaces, in order to achieve flow recognition, recognition of the number of people in a space, and classification of the flows. House renovation is or will be demanded to prolong the lifetime of homes, and adaptation to individuals is needed for our rapidly aging society. Home autonomous mobile robots will become popular in the future for assisting aged people in various situations. Therefore, we have to collect various types of information about humans and living spaces. However, intrusions into personal privacy must be avoided. It is essential to recognize flows in everyday life in order to assist house renovation and aging societies in terms of adaptation to individuals. With background subtraction, extra noise removal, and k-means-based clustering, we obtained an average accuracy of more than 90% for the behavior of one to three persons, and also confirmed the reliability of our system regardless of the position of the sensor. Our system can take advantage of autonomous mobile robots while protecting personal privacy. It hints at a generalization of flow recognition methods in living spaces.
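
    A rough sketch of the processing chain described above, assuming a scikit-learn dependency and a known (or separately estimated) number of people: background subtraction on the range scan, conversion of the foreground beams to Cartesian points, and k-means clustering of those points. Parameter values are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def foreground_points(scan, background, angle_min=-np.pi / 2,
                      angle_inc=np.pi / 360, threshold=0.3):
    """Background subtraction on a laser-range-finder scan: keep beams that
    are at least `threshold` metres shorter than the static background scan,
    then convert them to Cartesian points in the sensor frame."""
    scan, background = np.asarray(scan), np.asarray(background)
    idx = np.where(background - scan > threshold)[0]
    angles = angle_min + idx * angle_inc
    return np.column_stack((scan[idx] * np.cos(angles),
                            scan[idx] * np.sin(angles)))

def cluster_people(points, n_people):
    """Cluster foreground points with k-means; each cluster centre is taken
    as one person's position (n_people assumed known or estimated)."""
    km = KMeans(n_clusters=n_people, n_init=10).fit(points)
    return km.cluster_centers_

# Usage: centres = cluster_people(foreground_points(scan, background), 2)
```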

  9. Probabilistic self-localisation on a qualitative map based on occlusions

    NASA Astrophysics Data System (ADS)

    Santos, Paulo E.; Martins, Murilo F.; Fenelon, Valquiria; Cozman, Fabio G.; Dee, Hannah M.

    2016-09-01

    Spatial knowledge plays an essential role in human reasoning, permitting tasks such as locating objects in the world (including oneself), reasoning about everyday actions and describing perceptual information. This is also the case in the field of mobile robotics, where one of the most basic (and essential) tasks is the autonomous determination of the pose of a robot with respect to a map, given its perception of the environment. This is the problem of robot self-localisation (or simply the localisation problem). This paper presents a probabilistic algorithm for robot self-localisation that is based on a topological map constructed from the observation of spatial occlusion. Distinct locations on the map are defined by means of a classical formalism for qualitative spatial reasoning, whose base definitions are closer to the human categorisation of space than traditional, numerical, localisation procedures. The approach herein proposed was systematically evaluated through experiments using a mobile robot equipped with a RGB-D sensor. The results obtained show that the localisation algorithm is successful in locating the robot in qualitatively distinct regions.

  10. SWARMS: Scalable sWarms of Autonomous Robots and Mobile Sensors

    DTIC Science & Technology

    2013-03-18

    Pasqualetti, Antonio Franchi, Francesco Bullo. On optimal cooperative patrolling, 2010 49th IEEE Conference on Decision and Control (CDC). 2010/12/15 00...exhibits "global stability". Provided a complete convergence proof for the adaptive version of the range-only station keeping problem. Graph Theoretic

  11. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks

    PubMed Central

    González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina

    2017-01-01

    Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage. PMID:28075364

  12. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks.

    PubMed

    González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina

    2017-01-09

    Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage.
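
    The social potential fields method referenced in both records sums pairwise forces that are repulsive at short range and attractive at long range. The sketch below shows that kind of force law with assumed constants; integrating it every control cycle would yield both the initial deployment and the re-covering of gaps after a node failure.

```python
import numpy as np

def social_potential_force(node, neighbours, c_rep=1.0, c_att=0.05,
                           p_rep=2.0, p_att=1.0):
    """Resultant social-potential-fields force on one node:
    f(r) = -c_rep / r**p_rep + c_att * r**p_att along each neighbour
    direction (repulsive at short range, attractive at long range).
    The constants and exponents are illustrative assumptions."""
    total = np.zeros(2)
    for nb in neighbours:
        diff = np.asarray(nb, dtype=float) - np.asarray(node, dtype=float)
        r = np.linalg.norm(diff)
        if r < 1e-9:
            continue
        magnitude = -c_rep / r ** p_rep + c_att * r ** p_att
        total += magnitude * diff / r
    return total

# Each node would integrate this force (node += dt * force) every cycle.
force = social_potential_force((0.0, 0.0), [(1.5, 0.2), (-0.4, 0.9)])
```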

  13. Control of an automated mobile manipulator using artificial immune system

    NASA Astrophysics Data System (ADS)

    Deepak, B. B. V. L.; Parhi, Dayal R.

    2016-03-01

    This paper addresses the coordination and control of a wheeled mobile manipulator (WMM) using an artificial immune system. The aim of the developed methodology is to navigate the system autonomously and transport jobs and tools in manufacturing environments. This study integrates the kinematic structures of a four-axis manipulator and a differential wheeled mobile platform. The motion of the developed WMM is controlled by the complete system of parametric equations in terms of joint velocities, enabling the manipulator and platform to follow desired trajectories within the workspace. The developed robot system performs its actions intelligently according to the sensed environmental criteria within its search space. To verify the effectiveness of the proposed immune-based motion planner for the WMM, simulations as well as experimental results are presented for various unknown environments.

  14. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    PubMed Central

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  15. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  16. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-06-28

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.

  17. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.

    1994-01-01

    An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.
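
    The resurvey-then-alarm behaviour described in these patent abstracts can be summarised as a three-state machine. The sketch below paraphrases that decision logic; the speed values are hypothetical placeholders, not figures from the patent.

```python
from enum import Enum, auto

class SurveyState(Enum):
    SURVEY = auto()     # follow the preprogrammed path at the preselected rate
    RESURVEY = auto()   # contamination suspected: re-scan the spot at reduced speed
    ALARM = auto()      # contamination confirmed: stop and sound an alarm

def next_state(state, contamination_detected):
    """Decision logic paraphrased from the abstract: a first detection triggers
    a reduced-speed resurvey; a second detection during the resurvey confirms it."""
    if state is SurveyState.SURVEY:
        return SurveyState.RESURVEY if contamination_detected else SurveyState.SURVEY
    if state is SurveyState.RESURVEY:
        return SurveyState.ALARM if contamination_detected else SurveyState.SURVEY
    return SurveyState.ALARM   # alarm latches until an operator intervenes

def speed_for(state, nominal=0.5, reduced=0.1):
    """Commanded speed (m/s, hypothetical values) for each state."""
    return {SurveyState.SURVEY: nominal,
            SurveyState.RESURVEY: reduced,
            SurveyState.ALARM: 0.0}[state]

state = next_state(SurveyState.SURVEY, contamination_detected=True)
print(state, speed_for(state))
```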

  18. Positional estimation techniques for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Aggarwal, J. K.

    1990-01-01

    Techniques for the positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

  19. Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.

    PubMed

    Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco

    2009-01-01

    This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.

  20. A power-autonomous self-rolling wheel using ionic and capacitive actuators

    NASA Astrophysics Data System (ADS)

    Must, Indrek; Kaasik, Toomas; Baranova, Inna; Johanson, Urmas; Punning, Andres; Aabloo, Alvo

    2015-04-01

    Ionic electroactive polymer (IEAP) laminates are often considered a prospective actuator technology for mobile robotic appliances; however, only a few real proof-of-concept-stage robots have been built previously, the majority of which depend on an off-board power supply. In this work, a power-autonomous robot, propelled by four IEAP actuators having carbonaceous electrodes, is constructed. The robot consists of a light outer section in the form of a hollow cylinder and a heavy inner section, referred to as the rim and the hub, respectively. The hub is connected to the rim using IEAP actuators, which form 'spokes' of variable length. The effective length of the spokes is changed via charging and discharging of the capacitive IEAP actuators, and a change in the effective lengths of the spokes results in a rolling motion of the robot. The constructed IEAP robot takes advantage of the distinctive properties of the IEAP actuators. The IEAP actuators transform the geometry of the whole robot while being soft and compliant. The low-voltage IEAP actuators in the robot are powered directly from an embedded single-cell lithium-ion battery, with no voltage regulation required; instead, only the input current is regulated. The charging of the actuators is switched according to the robot's instantaneous position using on-board control electronics. The constructed robot is able to roll for an extended period on a smooth surface. The locomotion of the IEAP robot is analyzed using video recognition.

  1. Navigation system for autonomous mapper robots

    NASA Astrophysics Data System (ADS)

    Halbach, Marc; Baudoin, Yvan

    1993-05-01

    This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, other sensors are expected to be integrated, and a priori knowledge is incorporated. The ultrasonic sensors are controlled by the path-planning module. The third part concerns path planning, and a simulation of a wheeled robot is also presented.
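
    The obstacle/free-space probabilistic classification grid mentioned above is commonly maintained with log-odds updates. The sketch below shows such an update along a single ultrasonic beam with an assumed sensor model; it is a generic illustration rather than the paper's exact fusion scheme.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed sensor model)

def update_beam(grid_logodds, robot_cell, hit_cell):
    """Log-odds occupancy update for one ultrasonic beam on a 2-D grid:
    cells along the beam up to the echo are marked freer, and the echo
    cell is marked more occupied (simple integer ray walk)."""
    r0, c0 = robot_cell
    r1, c1 = hit_cell
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    for k in range(n):
        r = int(round(r0 + (r1 - r0) * k / n))
        c = int(round(c0 + (c1 - c0) * k / n))
        grid_logodds[r, c] += L_FREE
    grid_logodds[r1, c1] += L_OCC
    return grid_logodds

def occupancy_probability(grid_logodds):
    """Convert the log-odds grid back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-grid_logodds))

grid = np.zeros((100, 100))
grid = update_beam(grid, robot_cell=(50, 50), hit_cell=(50, 80))
```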

  2. Gyro and Accelerometer Based Navigation System for a Mobile Autonomous Robot.

    DTIC Science & Technology

    1985-12-02

    special thanks goes to our thesis advisor Dr. Matthew Kabrisky for having the confidence to turn us loose on this project. Additionally, we would...Wordmaster Word Processor 1 Wordstar Word Processor 1 Virtual Devices Robo A 6802 Cross Assembler 1 Modem 720 Communication Program 1 CP/M Operating

  3. Implementation of a piezoelectrically actuated self-contained quadruped robot

    NASA Astrophysics Data System (ADS)

    Ho, Thanhtam; Lee, Sangyoon

    2009-05-01

    In this paper we present the development of a mesoscale self-contained quadruped mobile robot that employs two pieces of piezoelectric actuators for bounding gait locomotion, i.e., the two rear legs have the same movement and the two front legs do too. The actuator, named LIPCA (LIghtweight Piezoceramic Composite curved Actuator), is a piezocomposite actuator that uses a PZT layer sandwiched between composite materials of carbon/epoxy and glass/epoxy layers to amplify the displacement. A biomimetic concept is applied to the design of the robot in a simplified way, such that each leg of the robot has only one degree of freedom. Considering that LIPCA requires a high input voltage and possesses capacitive characteristics, a small power supply circuit using PICO chips was designed for the implementation of the self-contained mobile robot. The prototype, with a weight of 125 grams and a length of 120 mm, can locomote with the bounding gait. Experiments showed that the robot can locomote at about 50 mm/sec with the circuit on board and that the operation time is about 5 minutes, which can be considered a meaningful step toward the goal of building an autonomous legged robot actuated by piezoelectric actuators.

  4. Mobile Robot for Exploring Cold Liquid/Solid Environments

    NASA Technical Reports Server (NTRS)

    Bergh, Charles; Zimmerman, Wayne

    2006-01-01

    The Planetary Autonomous Amphibious Robotic Vehicle (PAARV), now at the prototype stage of development, was originally intended for use in acquiring and analyzing samples of solid, liquid, and gaseous materials in cold environments on the shores and surfaces, and at shallow depths below the surfaces, of lakes and oceans on remote planets. The PAARV also could be adapted for use on Earth in similar exploration of cold environments in and near Arctic and Antarctic oceans and glacial and sub-glacial lakes.

  5. A task control architecture for autonomous robots

    NASA Technical Reports Server (NTRS)

    Simmons, Reid; Mitchell, Tom

    1990-01-01

    An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.

  6. Learning Preference Models for Autonomous Mobile Robots in Complex Domains

    DTIC Science & Technology

    2010-12-01

    van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney, "Stanley: The robot that won the...Learning, vol. 24, pp. 123-140, 1996. [277] L. Murphy and P. Newman, "Planning most-likely paths from overhead imagery," in International Conference

  7. Fuzzy logic control of an AGV

    NASA Astrophysics Data System (ADS)

    Kelkar, Nikhal; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic approach for steering and speed control, a neuro-fuzzy approach for ultrasound sensing (not discussed in this paper) and an overall expert system. The advantages of a modular system are related to portability and transportability, i.e. any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The speed and steering fuzzy logic controller is supervised by a 486 computer through a multi-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system in which high speed computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected by a vision tracking device that transmits the X, Y coordinates of the lane marker to the control computer. Simulation and testing of these systems yielded promising results. This design, in its modularity, creates a portable autonomous fuzzy logic controller applicable to any mobile vehicle with only minor adaptations.
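
    To make the fuzzy steering idea concrete, the sketch below implements a tiny rule base with triangular memberships over the normalised lane-offset error and singleton consequents for defuzzification. The membership breakpoints and output angles are invented for illustration and are not the actual rule base of the controller described above.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(error):
    """Steering command (degrees) from the lateral lane-offset error,
    normalised to [-1, 1].  Three rules: negative error -> steer right,
    zero error -> straight, positive error -> steer left.  Defuzzified as
    the weighted average of singleton rule outputs (hypothetical angles)."""
    memberships = {
        "neg": tri(error, -1.5, -1.0, 0.0),
        "zero": tri(error, -0.5, 0.0, 0.5),
        "pos": tri(error, 0.0, 1.0, 1.5),
    }
    rule_outputs = {"neg": 20.0, "zero": 0.0, "pos": -20.0}
    num = sum(memberships[k] * rule_outputs[k] for k in memberships)
    den = sum(memberships.values()) or 1.0
    return num / den

print(fuzzy_steering(0.3))   # small positive offset -> small left correction
```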

  8. Autonomous biomorphic robots as platforms for sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tilden, M.; Hasslacher, B.; Mainieri, R.

    1996-10-01

    The idea of building autonomous robots that can carry out complex and nonrepetitive tasks is an old one, so far unrealized in any meaningful hardware. Tilden has shown recently that there are simple, processor-free solutions to building autonomous mobile machines that continuously adapt to unknown and hostile environments, are designed primarily to survive, and are extremely resistant to damage. These devices use smart mechanics and simple (low component count) electronic neuron control structures having the functionality of biological organisms from simple invertebrates to sophisticated members of the insect and crab family. These devices are paradigms for the development of autonomous machines that can carry out directed goals. The machine then becomes a robust survivalist platform that can carry sensors or instruments. These autonomous roving machines, now in an early stage of development (several proof-of-concept prototype walkers have been built), can be developed so that they are inexpensive, robust, and versatile carriers for a variety of instrument packages. Applications are immediate and many, in areas as diverse as prosthetics, medicine, space, construction, nanoscience, defense, remote sensing, environmental cleanup, and biotechnology.

  9. Resource allocation and supervisory control architecture for intelligent behavior generation

    NASA Astrophysics Data System (ADS)

    Shah, Hitesh K.; Bahl, Vikas; Moore, Kevin L.; Flann, Nicholas S.; Martin, Jason

    2003-09-01

    In earlier research the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of our research, we presented the use of a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. With the growth of the number of available resources on the robot, the variety of the generated behaviors and the need for parallel execution of multiple behaviors to achieve reaction also grew. As continuation of our past efforts, in this paper, we discuss the parallel execution of behaviors and the management of utilized resources. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. The controlling agents (called behavior generating agents) generate behaviors to be executed via these services. The agents are prioritized and then, based on their priority and the availability of requested services, the Control Supervisor decides on a winner for the grant of access to services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.

  10. Steering of an automated vehicle in an unstructured environment

    NASA Astrophysics Data System (ADS)

    Kanakaraju, Sampath; Shanmugasundaram, Sathish K.; Thyagarajan, Ramesh; Hall, Ernest L.

    1999-08-01

    The purpose of this paper is to describe a high-level path planning logic, which processes the data from a vision system and an ultrasonic obstacle avoidance system and steers an autonomous mobile robot between obstacles. The test bed was an autonomous robot built at the University of Cincinnati, and this logic was tested and debugged on this machine. Attempts have already been made to incorporate a fuzzy system on a similar robot, and this paper extends them to take advantage of the robot's ZTR capability. Using the integrated vision system, the vehicle senses its location and orientation. A rotating ultrasonic sensor is used to map the location and size of possible obstacles. With these inputs the fuzzy logic controls the speed and the steering decisions of the robot. With the incorporation of this logic, Bearcat II has been very successful in avoiding obstacles. This was demonstrated in the Ground Robotics Competition conducted by the AUVS in June 1999, where it travelled a distance of 154 feet along a 10 ft wide path riddled with obstacles. This logic proved to be a significant contributing factor in this feat of Bearcat II.

  11. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots.

    PubMed

    Sherwin, Tyrone; Easte, Mikala; Chen, Andrew Tzer-Yeu; Wang, Kevin I-Kai; Dai, Wenbin

    2018-02-14

    Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.
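
    The core ideas of the approach, comparing RSS across directional antennae to pick a bearing and smoothing a path-loss distance estimate with a particle filter, can be sketched as follows. The antenna layout, path-loss constants, and noise levels are illustrative assumptions, not the parameters used by the authors; the SLAM and A* motion control stages are omitted.

    import math
    import random

    # Hypothetical sketch: bearing from directional RSS comparison plus a tiny
    # particle filter over distance, using a log-distance path-loss model.

    def most_probable_bearing(rss_by_antenna):
        """rss_by_antenna: {bearing_deg: RSS_dBm}; return the bearing with the strongest RSS."""
        return max(rss_by_antenna, key=rss_by_antenna.get)

    def rss_to_distance(rss_dbm, p0=-40.0, n=2.0):
        """Invert the log-distance model RSS = P0 - 10*n*log10(d). P0 and n are assumed."""
        return 10 ** ((p0 - rss_dbm) / (10 * n))

    def particle_filter_distance(rss_stream, n_particles=500, sigma_rss=4.0):
        """Filter noisy RSS readings into a distance estimate (metres)."""
        particles = [random.uniform(0.5, 30.0) for _ in range(n_particles)]
        for rss in rss_stream:
            # Weight particles by how well they explain the measurement.
            weights = []
            for d in particles:
                pred = -40.0 - 10 * 2.0 * math.log10(d)
                weights.append(math.exp(-0.5 * ((rss - pred) / sigma_rss) ** 2))
            total = sum(weights) or 1e-12
            weights = [w / total for w in weights]
            # Resample and jitter (diffusion keeps the particle set from collapsing).
            particles = random.choices(particles, weights=weights, k=n_particles)
            particles = [max(0.1, d + random.gauss(0, 0.3)) for d in particles]
        return sum(particles) / len(particles)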

  12. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots

    PubMed Central

    Sherwin, Tyrone; Easte, Mikala; Wang, Kevin I-Kai; Dai, Wenbin

    2018-01-01

    Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system. PMID:29443906

  13. Scene Segmentation For Autonomous Robotic Navigation Using Sequential Laser Projected Structured Light

    NASA Astrophysics Data System (ADS)

    Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.

    1987-01-01

    Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.

  14. Robonaut Mobile Autonomy: Initial Experiments

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.

    2006-01-01

    A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  15. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis

    PubMed Central

    Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang

    2016-01-01

    In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. Up to date, stereopsis represents the state-of-the art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625

  16. A positional estimation technique for an autonomous land vehicle in an unstructured environment

    NASA Technical Reports Server (NTRS)

    Talluri, Raj; Aggarwal, J. K.

    1990-01-01

    This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in an unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information, and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for the worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
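
    A toy version of the search idea, scoring each candidate DEM cell by how well its predicted horizon profile matches the horizon extracted from the image, is sketched below. The exhaustive scan, the elevation-angle scoring, and the fixed look-ahead are illustrative simplifications; the paper prunes the search with geometric heuristics instead.

    import math

    # Toy DEM-based localisation sketch (illustrative only): the robot's cell is
    # taken to be the DEM cell whose predicted horizon profile best matches the
    # horizon profile observed in the image.

    def predicted_horizon(dem, x, y, bearings, observer_height=1.5):
        """Maximum elevation angle (rad) seen from cell (x, y) along each bearing."""
        eye = dem[y][x] + observer_height
        profile = []
        for dx, dy in bearings:                      # unit grid steps, e.g. (1, 0)
            best = -math.inf
            for step in range(1, 50):                # bounded look-ahead in cells
                xi, yi = x + dx * step, y + dy * step
                if not (0 <= xi < len(dem[0]) and 0 <= yi < len(dem)):
                    break
                best = max(best, math.atan2(dem[yi][xi] - eye, step))
            profile.append(best)
        return profile

    def locate(dem, observed_profile, bearings):
        """Return the cell minimising the squared horizon-profile mismatch."""
        best_cell, best_cost = None, math.inf
        for y in range(len(dem)):
            for x in range(len(dem[0])):
                pred = predicted_horizon(dem, x, y, bearings)
                cost = sum((p - o) ** 2 for p, o in zip(pred, observed_profile))
                if cost < best_cost:
                    best_cell, best_cost = (x, y), cost
        return best_cell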

  17. Human-machine interfaces based on EMG and EEG applied to robotic systems.

    PubMed

    Ferreira, Andre; Celeste, Wanderley C; Cheein, Fernando A; Bastos-Filho, Teodiano F; Sarcinelli-Filho, Mario; Carelli, Ricardo

    2008-03-26

    Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals. One is based on the EMG signal and the other is based on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which need just a few hardware and software resources, so that they are, computationally and financially speaking, low cost solutions. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested in a mobile robot, while the EEG-based HMI was tested in a mobile robot and a robotic manipulator as well. Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to accomplish ten eye blinks with each eye, in order to test the eye blink detection algorithm. An average success rate of about 95%, reached by individuals with the ability to blink both eyes, allowed the conclusion that the system could be used to command devices. Experiments with EEG consisted of inviting 25 people (some of them had suffered cases of meningitis and epilepsy) to test the system. All of them managed to deal with the HMI in only one training session. Most of them learnt how to use such an HMI in less than 15 minutes. The minimum and maximum training times observed were 3 and 50 minutes, respectively. Such works are the initial parts of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the autonomous wheelchair thus obtained to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, aiming to use it to help individuals with severe motor dysfunctions.

  18. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper presents advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. Because the cost of the measurement system is extremely high, a simulation tool is designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. An Axis-Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
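
    For readers unfamiliar with the AABB idea, each simulated LRF beam is typically tested against axis-aligned boxes with the standard slab method; a minimal CPU sketch is shown below (the paper's implementation, including the CUDA variant, is not reproduced here).

    # Slab-method ray/AABB intersection, the test typically run once per
    # simulated LRF beam per bounding box. Minimal CPU sketch.

    def ray_aabb(origin, direction, box_min, box_max):
        """Return the distance along the ray to the box, or None if it is missed."""
        t_near, t_far = float("-inf"), float("inf")
        for o, d, lo, hi in zip(origin, direction, box_min, box_max):
            if d == 0.0:
                if o < lo or o > hi:
                    return None              # parallel to this slab and outside it
                continue
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far or t_far < 0:
                return None
        return max(t_near, 0.0)

    # A beam fired along +x hits a unit-cube obstacle two metres ahead:
    print(ray_aabb((0, 0, 0), (1, 0, 0), (2, -0.5, -0.5), (3, 0.5, 0.5)))  # 2.0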

  19. A Mobile Sensor Network System for Monitoring of Unfriendly Environments.

    PubMed

    Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo

    2008-11-14

    Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are often too dangerous or inaccessible to humans or large robots and there are many challenges for deploying and maintaining wireless sensor networks in those unfriendly environments. This paper presents a mobile sensor network system for solving this problem. The system architecture, the mobile node design, the basic behaviors and advanced network capabilities have been investigated respectively. A wheel-based robotic node architecture is proposed here that can add controlled mobility to wireless sensor networks. A testbed including some prototype nodes has also been created for validating the basic functions of the proposed mobile sensor network system. Motion performance tests have been done to get the positioning errors and power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can be distributed evenly into the previously unknown environments. It provides powerful support for network deployment and maintenance and can ensure that the sensor network will work properly in unfriendly environments.

  20. Autonomy in Materials Research: A Case Study in Carbon Nanotube Growth (Postprint)

    DTIC Science & Technology

    2016-10-21

    built an Autonomous Research System (ARES)—an autonomous research robot capable of first-of-its-kind closed-loop iterative materials experimentation...ARES exploits advances in autonomous robotics , artificial intelligence, data sciences, and high-throughput and in situ techniques, and is able to...roles of humans and autonomous research robots , and for human-machine partnering. We believe autonomous research robots like ARES constitute a

  1. CMMAD Usability Case Study in Support of Countermine and Hazard Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor G. Walker; David I. Gertman

    2010-04-01

    During field trials, operator usability data were collected in support of lane clearing missions and hazard sensing for two robot platforms with Robot Intelligence Kernel (RIK) software and sensor scanning payloads onboard. The tests featured autonomous and shared robot autonomy levels, where tasking of the robot used a graphical interface featuring mine location and sensor readings. The goal of this work was to provide insights that could be used to further technology development. The efficacy of countermine systems in terms of mobility, search, path planning, detection, and localization was assessed. Findings from objective and subjective operator interaction measures are reviewed along with commentary from soldiers having taken part in the study, who strongly endorse the system.

  2. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  3. Development of autonomous eating mechanism for biomimetic robots

    NASA Astrophysics Data System (ADS)

    Jeong, Kil-Woong; Cho, Ik-Jin; Lee, Yun-Jung

    2005-12-01

    Most recently developed robots are human-friendly robots which imitate animals or humans, such as entertainment robots, biomimetic robots and humanoid robots. Interest in these robots is increasing because the social trend is focused on health, welfare, and an aging society. Autonomous eating is a unique and inherent behavior of pets and animals. Most entertainment robots and pet robots use an internal battery, and robots with an internal battery are not able to operate while the battery is charging. Therefore, if a robot has an autonomous function for eating batteries as its feed, the robot is not only able to operate while recharging but also becomes more human-friendly, like a pet. Here, a new autonomous eating mechanism is introduced for a biomimetic robot called ELIRO-II (Eating LIzard RObot version 2). The ELIRO-II is able to find food (a small battery), eat, and evacuate by itself. This work describes the sub-parts of the developed mechanism, such as the head-part, mouth-part, and stomach-part. In addition, the control system of the autonomous eating mechanism is described.

  4. Extending the Capability of Mars Umbilical Technology Demonstrator

    NASA Technical Reports Server (NTRS)

    Houshangi, Nasser

    2001-01-01

    The objective of this project is to expand the capabilities of the Mars Umbilical Technology Demonstrator (MUTD). The MUTD shall provide electrical power and fiber optic data cable connections between two simulated Mars vehicles, 1000 in apart. The wheeled mobile robot Omnibot is used to provide the mobile base for the system. The mate-to umbilical plate is mounted on a Cartesian robot, which is installed on the Omnibot mobile base. It is desirable to provide the operator controlling the Omnibot with the distance and direction to the target. In this report, an approach for finding the position and orientation of the mobile robot using inertial sensors and beacons is investigated. The first phase of the project considered the Omnibot operating on a flat surface. To deal with the uneven Mars environment, the orientation as well as the position needs to be controlled. During local positioning, the information received from four ultrasonic sensors installed at the four corners of the mate-to plate is used to identify the position of the mate-to plate and mate the umbilical plates autonomously. The work proposed is a continuation of the principal investigator's research effort as a participant in the 1999 NASA/ASEE Summer Faculty Fellowship Program.

  5. Situationally driven local navigation for mobile robots. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Slack, Marc Glenn

    1990-01-01

    For mobile robots to autonomously accommodate dynamically changing navigation tasks in a goal-directed fashion, they must employ navigation plans. Any such plan must provide for the robot's immediate and continuous need for guidance while remaining highly flexible in order to avoid costly computation each time the robot's perception of the world changes. Due to the world's uncertainties, creation and maintenance of navigation plans cannot involve arbitrarily complex processes, as the robot's perception of the world will be in constant flux, requiring modifications to be made quickly if they are to be of any use. This work introduces navigation templates (NaT's) which are building blocks for the construction and maintenance of rough navigation plans which capture the relationship that objects in the world have to the current navigation task. By encoding only the critical relationship between the objects in the world and the navigation task, a NaT-based navigation plan is highly flexible; allowing new constraints to be quickly incorporated into the plan and existing constraints to be updated or deleted from the plan. To satisfy the robot's need for immediate local guidance, the NaT's forming the current navigation plan are passed to a transformation function. The transformation function analyzes the plan with respect to the robot's current location to quickly determine (a few times a second) the locally preferred direction of travel. This dissertation presents NaT's and the transformation function as well as the needed support systems to demonstrate the usefulness of the technique for controlling the actions of a mobile robot operating in an uncertain world.

  6. Lidar-based door and stair detection from a mobile robot

    NASA Astrophysics Data System (ADS)

    Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet

    2010-04-01

    We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, which minimize the memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.

  7. Artificial evolution: a new path for artificial intelligence?

    PubMed

    Husbands, P; Harvey, I; Cliff, D; Miller, G

    1997-06-01

    Recently there have been a number of proposals for the use of artificial evolution as a radically new approach to the development of control systems for autonomous robots. This paper explains the artificial evolution approach, using work at Sussex to illustrate it. The paper revolves around a case study on the concurrent evolution of control networks and visual sensor morphologies for a mobile robot. Wider intellectual issues surrounding the work are discussed, as is the use of more abstract evolutionary simulations as a new potentially useful tool in theoretical biology.

  8. Terrain classification in navigation of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1991-03-01

    In this paper we describe a method of path planning that integrates terrain classification (by means of fractals), the certainty-grid method of spatial representation, Kehtarnavaz-Griswold collision zones, Dubois-Prade fuzzy temporal and spatial knowledge, and non-point-sized qualitative navigational planning. An initially planned ("end-to-end") path is piece-wise modified to accommodate known and inferred moving obstacles, and includes attention to time-varying multiple subgoals which may influence a section of the path at a time after the robot has begun traversing that planned path.

  9. Exception handling for sensor fusion

    NASA Astrophysics Data System (ADS)

    Chavez, G. T.; Murphy, Robin R.

    1993-08-01

    This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
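
    The two-module structure described above can be summarised as a small control loop: classify the failure by generate-and-test, then either apply a cached recovery scheme or alert the planner. The names, hypothesis tests, and recovery table below are hypothetical placeholders, not the CSM implementation.

    # Hypothetical sketch of the exception-handling flow: generate-and-test
    # classification followed by a cache lookup of recovery schemes.

    RECOVERY_SCHEMES = {                       # illustrative cache of schemes
        "sensor_malfunction":  "replace: switch to a redundant sensor",
        "environment_change":  "repair: retune the sensing parameters",
    }

    def classify_failure(observations, hypotheses):
        """Generate-and-test: return the first hypothesis whose test explains the data."""
        for name, test in hypotheses:
            if test(observations):
                return name
        return None                            # source could not be isolated

    def handle_sensing_failure(observations, hypotheses, alert_planner):
        cause = classify_failure(observations, hypotheses)
        scheme = RECOVERY_SCHEMES.get(cause)
        if scheme is None:                     # errant expectation or unknown cause
            alert_planner(cause)
            return None
        return scheme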

  10. SMARBot: a modular miniature mobile robot platform

    NASA Astrophysics Data System (ADS)

    Meng, Yan; Johnson, Kerry; Simms, Brian; Conforth, Matthew

    2008-04-01

    Miniature robots have many advantages over their larger counterparts, such as low cost, low power, and the ease of building a large-scale team for complex tasks. Heterogeneous teams of miniature robots could provide powerful situational awareness capability due to their different locomotion capabilities and sensor information. However, it would be expensive and time consuming to develop a specific embedded system for each type of robot. In this paper, we propose a generic modular embedded system architecture called SMARbot (Stevens Modular Autonomous Robot), which consists of a set of hardware and software modules that can be configured to construct various types of robot systems. These modules include a high performance microprocessor, a reconfigurable hardware component, wireless communication, and diverse sensor and actuator interfaces. The design of all the modules in the electrical subsystem, the selection criteria for module components, and the real-time operating system are described. Some proof-of-concept experimental results are also presented.

  11. Cleaning Robot for Solar Panels in Solar Power Station

    NASA Astrophysics Data System (ADS)

    Hang, Lu-Bin; Shen, Cheng-Wei; Bian, Huai-Qiang; Wang, Yan

    2016-05-01

    Dust particles on solar panel surfaces have become a serious problem for the photovoltaic industry; a new monorail-tracked robot for the automatic cleaning of solar panels is presented in this paper. To meet the requirement of comprehensive and stable cleaning of a PV array, the monorail-tracked pattern of the robot is introduced, based on the monorail structure technique. Running and striding mechanisms are designed for the mobility of the robot on the solar panels. According to the carrying capacity and the water circulation mechanism, a self-cleaning device with a filtering system is developed. Combined with computer software and communications technology, a control system is built into the robot, which realizes the functions of autonomous operation, positioning and monitoring. The application of this cleaning robot can industrialize the automatic cleaning of PV components and has wide market prospects.

  12. Robot Trajectories Comparison: A Statistical Approach

    PubMed Central

    Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.

    2014-01-01

    The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618

  13. Optimization design of wireless charging system for autonomous robots based on magnetic resonance coupling

    NASA Astrophysics Data System (ADS)

    Wang, Junhua; Hu, Meilin; Cai, Changsong; Lin, Zhongzheng; Li, Liang; Fang, Zhijian

    2018-05-01

    Wireless charging is the key technology for realizing true autonomy of mobile robots. As the core part of the wireless power transfer system, the coupling mechanism, including the coupling coils and compensation topology, is analyzed and optimized through simulations to achieve stable and practical wireless charging suitable for ordinary robots. A multi-layer coil structure, in particular a double-layer coil, is explored and selected to greatly enhance coupling performance, while the shape of the ferrite shielding undergoes distributed optimization to guarantee coil fault tolerance and cost effectiveness. On the basis of the optimized coils, the primary compensation topology is analyzed and a composite LCL compensation is adopted to stabilize operation of the primary side under variations of mutual inductance. Experimental results show that the optimized system is practical for wireless charging of robots based on magnetic resonance coupling, enabling long-term robot autonomy.

  14. Event detection and localization for small mobile robots using reservoir computing.

    PubMed

    Antonelo, E A; Schrauwen, B; Stroobandt, D

    2008-08-01

    Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system, which operates at the edge of stability, where only a linear static readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks which are solely based on a few low-range, high-noise sensory data. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated in both a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
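
    A compact echo-state-style sketch of the RC recipe (fixed random recurrent reservoir, linear readout fitted by ridge regression) is given below; the reservoir size, input dimension, and scaling constants are illustrative choices, not those of the cited experiments.

    import numpy as np

    # Echo-state-network sketch: a fixed random reservoir driven by the sensor
    # stream, with only a linear readout trained (ridge regression). Sizes and
    # scaling constants are illustrative assumptions.

    rng = np.random.default_rng(0)
    n_in, n_res = 8, 200                              # e.g. 8 distance sensors
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

    def run_reservoir(U):
        """U: (T, n_in) input sequence -> (T, n_res) reservoir state sequence."""
        x = np.zeros(n_res)
        states = []
        for u in U:
            x = np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    def train_readout(states, targets, ridge=1e-2):
        """Linear readout by ridge regression: solve (X^T X + rI) W_out = X^T Y."""
        X, Y = states, np.asarray(targets)
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

    # Usage: W_out = train_readout(run_reservoir(sensor_log), event_labels)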

  15. A mobile robot therapist for under-supervised training with robot/computer assisted motivating systems.

    PubMed

    Shakya, Yuniya; Johnson, Michelle J

    2008-01-01

    Robot-assisted therapy is a new and promising area in stroke rehabilitation and has been shown to be effective in reducing motor impairment, but it is a costly solution for home rehabilitation. High medical costs could be reduced if we could improve rehabilitation exercise in unsupervised environments such as the home. Hence, there is a growing need for a cost-effective rehabilitation system that can be used outside the clinic. This paper presents the design concept for an autonomous robotic assistant that is low-cost and effective in engaging users while assisting them with therapy in any under-supervised area. We investigated how the robot assistant can support TheraDrive, our low-cost therapy system. We present the design methods and a case study demonstrating the arm and video collection system.

  16. A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring

    PubMed Central

    Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio

    2016-01-01

    This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State of the art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regards to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505

  17. Neuromorphic vision sensors and preprocessors in system applications

    NASA Astrophysics Data System (ADS)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  18. Detecting submerged features in water: modeling, sensors, and measurements

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.; Bassetti, Luce

    2004-11-01

    It is becoming more important to understand the remote sensing systems and associated autonomous or semi-autonomous methodologies (robotics and mechatronics) that may be utilized in freshwater and marine aquatic environments. This need comes not only from advances in our scientific understanding and technological capabilities, but also from the desire to ensure that the risks associated with UXO (unexploded ordnance) and related submerged mines, as well as submerged targets (such as submerged aquatic vegetation) and debris left from previous human activities, are remotely sensed and identified, with the risks then reduced through detection and removal. This paper describes (a) remote sensing systems and (b) platforms (fixed and mobile), and demonstrates (c) the value of thinking in terms of scalability as well as modularity in the design and application of new systems now being constructed within our laboratory and other laboratories, as well as future systems. New remote sensing systems - moving or fixed, as well as autonomous or semi-autonomous robotic and mechatronic systems - will be essential to secure domestic preparedness for humanitarian reasons. These remote sensing systems hold tremendous value if thoughtfully designed for other applications, which include environmental monitoring in ambient environments.

  19. Situational awareness for unmanned ground vehicles in semi-structured environments

    NASA Astrophysics Data System (ADS)

    Goodsell, Thomas G.; Snorrason, Magnus; Stevens, Mark R.

    2002-07-01

    Situational Awareness (SA) is a critical component of effective autonomous vehicles, reducing operator workload and allowing an operator to command multiple vehicles or simultaneously perform other tasks. Our Scene Estimation & Situational Awareness Mapping Engine (SESAME) provides SA for mobile robots in semi-structured scenes, such as parking lots and city streets. SESAME autonomously builds volumetric models for scene analysis. For example, a SESAME-equipped robot can build a low-resolution 3-D model of a row of cars, then approach a specific car and build a high-resolution model from a few stereo snapshots. The model can be used onboard to determine the type of car and locate its license plate, or the model can be segmented out and sent back to an operator who can view it from different viewpoints. As new views of the scene are obtained, the model is updated and changes are tracked (such as cars arriving or departing). Since the robot's position must be accurately known, SESAME also has automated techniques for determining the position and orientation of the camera (and hence, robot) with respect to existing maps. This paper presents an overview of the SESAME architecture and algorithms, including our model generation algorithm.

  20. Towards Autonomous Operation of Robonaut 2

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hart, Stephen W.; Yamokoski, J. D.

    2011-01-01

    The Robonaut 2 (R2) platform, as shown in Figure 1, was designed through a collaboration between NASA and General Motors to be a capable robotic assistant with the dexterity similar to a suited astronaut [1]. An R2 robot was sent to the International Space Station (ISS) in February 2011 and, in doing so, became the first humanoid robot in space. Its capabilities are presently being tested and expanded to increase its usefulness to the crew. Current work on R2 includes the addition of a mobility platform to allow the robot to complete tasks (such as cleaning, maintenance, or simple construction activities) both inside and outside of the ISS. To support these new activities, R2's software architecture is being developed to provide efficient ways of programming robust and autonomous behavior. In particular, a multi-tiered software architecture is proposed that combines principles of low-level feedback control with higher-level planners that accomplish behavioral goals at the task level given the run-time context, user constraints, the health of the system, and so on. The proposed architecture is shown in Figure 2. At the lowest-level, the resource level, there exists the various sensory and motor signals available to the system. The sensory signals for a robot such as R2 include multiple channels of force/torque data, joint or Cartesian positions calculated through the robot's proprioception, and signals derived from objects observable by its cameras.

  1. Vision robot with rotational camera for searching ID tags

    NASA Astrophysics Data System (ADS)

    Kimura, Nobutaka; Moriya, Toshio

    2008-02-01

    We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
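
    The compensation described above amounts to counter-panning at the apparent angular rate of the target; for a box directly abeam of the robot this is approximately the translational speed divided by the distance to the box face. The small helper below illustrates that relation (the perpendicular-viewing approximation and the numbers are assumptions, not the paper's derivation).

    import math

    def counter_pan_rate_deg_s(robot_speed_mps, target_distance_m):
        """Pan rate that keeps a target directly abeam roughly fixed in the image.
        Perpendicular-viewing approximation: omega = v / d (rad/s)."""
        return math.degrees(robot_speed_mps / target_distance_m)

    # Moving at 0.5 m/s past barcoded boxes 2 m away -> about 14.3 deg/s of pan.
    print(round(counter_pan_rate_deg_s(0.5, 2.0), 1))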

  2. Towards the automatic scanning of indoors with robots.

    PubMed

    Adán, Antonio; Quintana, Blanca; Vázquez, Andres S; Olivares, Alberto; Parra, Eduardo; Prieto, Samuel

    2015-05-19

    This paper is framed in both 3D digitization and 3D data intelligent processing research fields. Our objective is focused on developing a set of techniques for the automatic creation of simple three-dimensional indoor models with mobile robots. The document presents the principal steps of the process, the experimental setup and the results achieved. We distinguish between the stages concerning intelligent data acquisition and 3D data processing. This paper is focused on the first stage. We show how the mobile robot, which carries a 3D scanner, is able to, on the one hand, make decisions about the next best scanner position and, on the other hand, navigate autonomously in the scene with the help of the data collected from earlier scans. After this stage, millions of 3D data are converted into a simplified 3D indoor model. The robot imposes a stopping criterion when the whole point cloud covers the essential parts of the scene. This system has been tested under real conditions indoors with promising results. The future is addressed to extend the method in much more complex and larger scenarios.

  3. Towards the Automatic Scanning of Indoors with Robots

    PubMed Central

    Adán, Antonio; Quintana, Blanca; Vázquez, Andres S.; Olivares, Alberto; Parra, Eduardo; Prieto, Samuel

    2015-01-01

    This paper is framed in both 3D digitization and 3D data intelligent processing research fields. Our objective is focused on developing a set of techniques for the automatic creation of simple three-dimensional indoor models with mobile robots. The document presents the principal steps of the process, the experimental setup and the results achieved. We distinguish between the stages concerning intelligent data acquisition and 3D data processing. This paper is focused on the first stage. We show how the mobile robot, which carries a 3D scanner, is able to, on the one hand, make decisions about the next best scanner position and, on the other hand, navigate autonomously in the scene with the help of the data collected from earlier scans. After this stage, millions of 3D data are converted into a simplified 3D indoor model. The robot imposes a stopping criterion when the whole point cloud covers the essential parts of the scene. This system has been tested under real conditions indoors with promising results. The future is addressed to extend the method in much more complex and larger scenarios. PMID:25996513

  4. Lunar exploration rover program developments

    NASA Technical Reports Server (NTRS)

    Klarer, P. R.

    1994-01-01

    The Robotic All Terrain Lunar Exploration Rover (RATLER) design concept began at Sandia National Laboratories in late 1991 with a series of small, proof-of-principle, working scale models. The models proved the viability of the concept for high mobility through mechanical simplicity, and eventually received internal funding at Sandia National Laboratories for full scale, proof-of-concept prototype development. Whereas the proof-of-principle models demonstrated the mechanical design's capabilities for mobility, the full scale proof-of-concept design currently under development is intended to support field operations for experiments in telerobotics, autonomous robotic operations, telerobotic field geology, and advanced man-machine interface concepts. The development program's current status is described, including an outline of the program's work over the past year, recent accomplishments, and plans for follow-on development work.

  5. Machine vision and appearance based learning

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-03-01

    Smart algorithms are used in machine vision to organize or extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, can only be a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered here in a machine learning framework via appearance-space analysis. This problem is reduced to a regression problem on an appearance manifold, and new regression-on-manifolds methods are used for its solution.

  6. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot has been designed for the 1996 Automated Unmanned Vehicle Society competition which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft. path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line following algorithm images two windows and locates their centroids and, with the knowledge that the points are on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and the minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot with a camera on each side. One camera guides the robot and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
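
    One way to realise the image-to-ground relationship for points known to lie on the ground plane is a fixed homography obtained from a one-time calibration; the line's heading and the perpendicular distance from the robot then follow directly. The matrix H below is a placeholder, not the calibration of the competition vehicle.

    import numpy as np

    # Sketch: project two image points on the ground plane to ground coordinates
    # with a calibrated homography, then compute the line's heading and the
    # robot's lateral offset from it. H is a placeholder, not a real calibration.

    H = np.array([[0.01, 0.00, -1.6],
                  [0.00, 0.02, -2.4],
                  [0.00, 0.001, 1.0]])

    def image_to_ground(u, v):
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w

    def line_heading_and_offset(p_img1, p_img2):
        """Heading (deg) of the line and its perpendicular distance from the robot origin.
        The two image points are assumed distinct and on the ground plane."""
        (x1, y1), (x2, y2) = image_to_ground(*p_img1), image_to_ground(*p_img2)
        heading = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        offset = abs((x2 - x1) * y1 - (y2 - y1) * x1) / np.hypot(x2 - x1, y2 - y1)
        return heading, offset

    # Usage: heading, offset = line_heading_and_offset((320, 400), (340, 300))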

  7. A tesselated probabilistic representation for spatial robot perception and navigation

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto

    1989-01-01

    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation are illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
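
    In practice the per-cell Bayesian update is usually carried out in log-odds form, which turns the update into a simple addition; the inverse-sensor-model values below are illustrative rather than a calibrated sonar model.

    import math

    # Log-odds occupancy update sketch. The inverse sensor model values
    # (L_OCC, L_FREE) are illustrative, not a calibrated range-sensor model.

    L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)

    def update_cell(logodds, measured_occupied):
        """Bayesian update of one cell: add the log-odds of the inverse sensor model."""
        return logodds + (L_OCC if measured_occupied else L_FREE)

    def probability(logodds):
        return 1.0 - 1.0 / (1.0 + math.exp(logodds))

    # Two 'occupied' readings followed by one 'free' reading of the same cell:
    l = 0.0                          # prior P(occupied) = 0.5
    for z in (True, True, False):
        l = update_cell(l, z)
    print(round(probability(l), 3))  # ~0.7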

  8. External force/velocity control for an autonomous rehabilitation robot

    NASA Astrophysics Data System (ADS)

    Saekow, Peerayuth; Neranon, Paramin; Smithmaitrie, Pruittikorn

    2018-01-01

    Stroke is a primary cause of death and the leading cause of permanent disability in adults. Many stroke survivors live with varying levels of disability and need rehabilitation activities on a daily basis. Several studies have reported that the use of rehabilitation robotic devices yields better improvement outcomes in upper-limb stroke patients than conventional therapy, in which nurses or therapists actively help patients with exercise-based rehabilitation. This research focuses on the development of an autonomous robotic trainer designed to guide a stroke patient through an upper-limb rehabilitation task. The robotic device was designed and developed to automate the reaching exercise mentioned above. The robotic system is made up of a four-wheel omni-directional mobile robot, an ATI Gamma multi-axis force/torque sensor used to measure contact force, and a microcontroller running a real-time operating system. Proportional-plus-integral control was adopted to govern the overall performance and stability of the autonomous assistive robot, and external force control was implemented to establish the behavioral control strategy for the robot force and velocity control scheme. In summary, the experimental results indicated that the robot force and velocity control achieved satisfactorily stable performance. The gains for the proportional-integral (PI) velocity control algorithm were estimated using the Ziegler-Nichols method, yielding optimized proportional and integral gains of 0.45 and 0.11, respectively. Additionally, the PI external force control gains were tuned experimentally by trial and error, based on a set of experiments in which a human participant moves the robot along a constrained circular path whilst attempting to minimize the radial force. The performance was analyzed using the root mean square error (E_RMS) of the radial forces, where lower variation in the radial forces indicates better performance. The best performance, as measured by the E_RMS of the radial force, was observed with proportional and integral gains of Kp = 0.7 and Ki = 0.75, respectively.
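
    A minimal discrete PI velocity loop of the kind described is sketched below; the gains are the values reported above (Kp = 0.45, Ki = 0.11) used purely as example numbers, and the outer external-force loop, the sensor interface, and the omni-directional kinematics are omitted.

    # Discrete PI velocity controller sketch with simple anti-windup. The gains
    # are the abstract's reported values used only as example numbers.

    class PIController:
        def __init__(self, kp=0.45, ki=0.11, dt=0.01, out_limit=1.0):
            self.kp, self.ki, self.dt, self.out_limit = kp, ki, dt, out_limit
            self.integral = 0.0

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            u = self.kp * error + self.ki * self.integral
            # Saturate the output and clamp the integrator (anti-windup).
            if abs(u) > self.out_limit:
                self.integral -= error * self.dt
                u = max(-self.out_limit, min(self.out_limit, u))
            return u

    # Usage: command = PIController().step(target_velocity, measured_velocity)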

  9. Autonomous robot software development using simple software components

    NASA Astrophysics Data System (ADS)

    Burke, Thomas M.; Chung, Chan-Jin

    2004-10-01

    Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.

  10. Material handling robot system for flow-through storage applications

    NASA Astrophysics Data System (ADS)

    Dill, James F.; Candiloro, Brian; Downer, James; Wiesman, Richard; Fallin, Larry; Smith, Ron

    1999-01-01

    This paper describes the design, development and planned implementation of a system of mobile robots for use in flow through storage applications. The robots are being designed with on-board embedded controls so that they can perform their tasks as semi-autonomous workers distributed within a centrally controlled network. On the storage input side, boxes will be identified by bar-codes and placed into preassigned flow through bins. On the shipping side, orders will be forwarded to the robots from a central order processing station and boxes will be picked from designated storage bins following proper sequencing to permit direct loading into trucks for shipping. Because of the need to maintain high system availability, a distributed control strategy has been selected. When completed, the system will permit robots to be dynamically reassigned responsibilities if an individual unit fails. On-board health diagnostics and condition monitoring will be used to maintain high reliability of the units.

  11. Fuzzy Sets in Dynamic Adaptation of Parameters of a Bee Colony Optimization for Controlling the Trajectory of an Autonomous Mobile Robot

    PubMed Central

    Amador-Angulo, Leticia; Mendoza, Olivia; Castro, Juan R.; Rodríguez-Díaz, Antonio; Melin, Patricia; Castillo, Oscar

    2016-01-01

    A hybrid approach composed by different types of fuzzy systems, such as the Type-1 Fuzzy Logic System (T1FLS), Interval Type-2 Fuzzy Logic System (IT2FLS) and Generalized Type-2 Fuzzy Logic System (GT2FLS) for the dynamic adaptation of the alpha and beta parameters of a Bee Colony Optimization (BCO) algorithm is presented. The objective of the work is to focus on the BCO technique to find the optimal distribution of the membership functions in the design of fuzzy controllers. We use BCO specifically for tuning membership functions of the fuzzy controller for trajectory stability in an autonomous mobile robot. We add two types of perturbations in the model for the Generalized Type-2 Fuzzy Logic System to better analyze its behavior under uncertainty and this shows better results when compared to the original BCO. We implemented various performance indices; ITAE, IAE, ISE, ITSE, RMSE and MSE to measure the performance of the controller. The experimental results show better performances using GT2FLS then by IT2FLS and T1FLS in the dynamic adaptation the parameters for the BCO algorithm. PMID:27618062

  12. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    PubMed

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
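
    The record above estimates an access point's position from a small set of non-uniformly distributed RSS samples without a calibrated propagation model. One simple estimator in that spirit (a generic illustration, not the authors' algorithm) is an RSS-weighted centroid of the robots' sample positions; the positions and readings below are hypothetical.

        import numpy as np

        def weighted_centroid_estimate(positions, rss_dbm):
            """Estimate an access-point location from RSS samples.

            positions : (N, 2) array of robot sample positions in metres
            rss_dbm   : (N,) array of received signal strength samples in dBm
            Stronger (less negative) RSS readings get larger weights, so the
            estimate is pulled toward the region where the signal is strongest.
            """
            rss = np.asarray(rss_dbm, dtype=float)
            weights = 10.0 ** (rss / 10.0)            # convert dBm to linear power as the weight
            weights /= weights.sum()
            return weights @ np.asarray(positions, dtype=float)

        # Hypothetical samples around an AP actually located near (5, 5)
        pos = np.array([[1.0, 1.0], [4.0, 6.0], [6.0, 4.0], [9.0, 9.0]])
        rss = np.array([-70.0, -48.0, -50.0, -72.0])
        print(weighted_centroid_estimate(pos, rss))   # expected to land close to (5, 5)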

  13. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    PubMed Central

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-01

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042

  14. PRIMUS: autonomous navigation in open terrain with a tracked vehicle

    NASA Astrophysics Data System (ADS)

    Schaub, Guenter W.; Pfaendner, Alfred H.; Schaefer, Christoph

    2004-09-01

    The German experimental robotics program PRIMUS (PRogram for Intelligent Mobile Unmanned Systems) has focused on solutions for autonomous driving in unknown open terrain for more than 12 years, over several project phases, each with specific realization aspects. The main task of the program is to develop algorithms for a high degree of autonomous navigation skill with off-the-shelf hardware/sensor technology and to integrate this into military vehicles. For obstacle detection, a Dornier-3D-LADAR is integrated on the tracked vehicle "Digitized WIESEL 2". For road-following, a digital video camera and a visual perception module from the Universitaet der Bundeswehr Munchen (UBM) have been integrated. This paper gives an overview of the PRIMUS program with a focus on the last program phase D (2001 - 2003). This includes the system architecture, the description of the modes of operation, and the technology development with a focus on obstacle avoidance and obstacle classification using a 3-D LADAR. A collection of experimental results and a short look at the next steps in the German robotics program will conclude the paper.

  15. Target Trailing With Safe Navigation for Maritime Autonomous Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Kuwata, Yoshiaki; Zarzhitsky, Dimitri V.

    2013-01-01

    This software implements a motion-planning module for a maritime autonomous surface vehicle (ASV). The module trails a given target while also avoiding static and dynamic surface hazards. When surface hazards are other moving boats, the motion planner must apply International Regulations for Avoiding Collisions at Sea (COLREGS). A key subset of these rules has been implemented in the software. In case contact with the target is lost, the software can receive and follow a "reacquisition route," provided by a complementary system, until the target is reacquired. The programmatic intention is that the trailed target is a submarine, although any mobile naval platform could serve as the target. The algorithmic approach to combining motion with a (possibly moving) goal location, while avoiding local hazards, may be applicable to robotic rovers, automated landing systems, and autonomous airships. The software operates in JPL's CARACaS (Control Architecture for Robotic Agent Command and Sensing) software architecture and relies on other modules for environmental perception data and information on the predicted detectability of the target, as well as the low-level interface to the boat controls.
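
    To make the COLREGS subset mentioned above concrete, the sketch below classifies a two-vessel encounter (head-on, crossing, or overtaking) from the relative bearing and heading difference and returns the conventional give-way action. This is a generic textbook-style illustration, not the CARACaS implementation; the angular thresholds and function interface are assumptions.

        def colregs_action(own_heading_deg, bearing_to_contact_deg, contact_heading_deg):
            """Classify a two-vessel encounter and suggest the give-way action.

            All angles are in degrees, headings measured clockwise from north.
            Returns (encounter_type, suggested_action) for the own ship.
            """
            rel_bearing = (bearing_to_contact_deg - own_heading_deg) % 360.0
            heading_diff = abs((contact_heading_deg - own_heading_deg + 180.0) % 360.0 - 180.0)

            if heading_diff > 170.0 and (rel_bearing < 10.0 or rel_bearing > 350.0):
                return "head-on", "alter course to starboard"                    # Rule 14
            if 112.5 < rel_bearing < 247.5:
                return "overtaking contact astern", "stand on"                   # contact is overtaking us
            if rel_bearing < 112.5:
                return "crossing, contact to starboard", "give way (alter to starboard or slow)"  # Rule 15
            return "crossing, contact to port", "stand on"

        print(colregs_action(0.0, 5.0, 185.0))    # nearly reciprocal courses -> head-on
        print(colregs_action(0.0, 90.0, 270.0))   # contact crossing from starboard -> give way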

  16. M.I.N.G., Mars Investment for a New Generation: Robotic construction of a permanently manned Mars base

    NASA Technical Reports Server (NTRS)

    Amos, Jeff; Beeman, Randy; Brown, Susan; Calhoun, John; Hill, John; Howorth, Lark; Mcfaden, Clay; Nguyen, Paul; Reid, Philip; Rexrode, Stuart

    1989-01-01

    A basic procedure for robotically constructing a manned Mars base is outlined. The research procedure was divided into three areas: environment, robotics, and habitat. The base as designed will consist of these components: two power plants, communication facilities, a habitat complex, and a hangar, a garage, recreation and manufacturing facilities. The power plants will be self-contained nuclear fission reactors placed approx. 1 km from the base for safety considerations. The base communication system will use a combination of orbiting satellites and surface relay stations. This system is necessary for robotic contact with Phobos and any future communication requirements. The habitat complex will consist of six self-contained modules: core, biosphere, science, living quarters, galley/storage, and a sick bay which will be brought from Phobos. The complex will be set into an excavated hole and covered with approximately 0.5 m of sandbags to provide radiation protection for the astronauts. The recreation, hangar, garage, and manufacturing facilities will each be transformed from the four one-way landers. The complete complex will be built by autonomous, artificially intelligent robots. Robots incorporated into the design are as follows: Large Modular Construction Robots with detachable arms capable of large scale construction activities; Small Maneuverable Robotic Servicers capable of performing delicate tasks normally requiring a suited astronaut; and a trailer vehicle with modular type attachments to complete specific tasks; and finally, Mobile Autonomous Rechargeable Transporters capable of transferring air and water from the manufacturing facility to the habitat complex.

  17. M.I.N.G., Mars Investment for a New Generation: Robotic construction of a permanently manned Mars base

    NASA Astrophysics Data System (ADS)

    Amos, Jeff; Beeman, Randy; Brown, Susan; Calhoun, John; Hill, John; Howorth, Lark; McFaden, Clay; Nguyen, Paul; Reid, Philip; Rexrode, Stuart

    1989-05-01

    A basic procedure for robotically constructing a manned Mars base is outlined. The research procedure was divided into three areas: environment, robotics, and habitat. The base as designed will consist of these components: two power plants, communication facilities, a habitat complex, and a hangar, a garage, recreation and manufacturing facilities. The power plants will be self-contained nuclear fission reactors placed approx. 1 km from the base for safety considerations. The base communication system will use a combination of orbiting satellites and surface relay stations. This system is necessary for robotic contact with Phobos and any future communication requirements. The habitat complex will consist of six self-contained modules: core, biosphere, science, living quarters, galley/storage, and a sick bay which will be brought from Phobos. The complex will be set into an excavated hole and covered with approximately 0.5 m of sandbags to provide radiation protection for the astronauts. The recreation, hangar, garage, and manufacturing facilities will each be transformed from the four one-way landers. The complete complex will be built by autonomous, artificially intelligent robots. Robots incorporated into the design are as follows: Large Modular Construction Robots with detachable arms capable of large scale construction activities; Small Maneuverable Robotic Servicers capable of performing delicate tasks normally requiring a suited astronaut; and a trailer vehicle with modular type attachments to complete specific tasks; and finally, Mobile Autonomous Rechargeable Transporters capable of transferring air and water from the manufacturing facility to the habitat complex.

  18. Distributed Planning and Control for Teams of Cooperating Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, L.E.

    2004-06-15

    This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of the control approaches for distributed planning and cooperation in multi-robot teams.

  19. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Han, Baoling

    2016-11-01

    The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance that integrates a binocular stereo vision sensor and a self-controlled 3D Lidar, combined with modified ant colony optimization path planning, to realize the reconstruction of the environmental map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unsatisfactory when feature points are few and lighting is poor. Therefore, the system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles. This paper proposes sensor information fusion to rebuild the environment map: obstacles are first detected from the Lidar data and the visual data separately, and the two resulting obstacle distributions are then fused to obtain a more complete and more accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyzes the advantages and disadvantages of ant colony optimization and their causes in depth, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed, and poor search results. The experiment processes images and drives the motors in the Matlab and Visual Studio development environments and establishes a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a simulation platform under Linux.
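
    The abstract refers to ant colony optimization over a grid map built from fused sensor data. A compact, generic ACO-on-a-grid sketch is shown below to illustrate the pheromone evaporation/deposit cycle and the probabilistic step selection it mentions; it is not the authors' modified algorithm, and the toy occupancy grid and parameters are invented.

        import random

        # Toy occupancy grid: 0 = free, 1 = obstacle
        GRID = [[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 1, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 0, 0, 0]]
        START, GOAL = (0, 0), (4, 4)
        ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.3, 1.0   # pheromone weight, heuristic weight, evaporation, deposit

        def neighbours(cell):
            r, c = cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                    yield (nr, nc)

        def heuristic(cell):
            return 1.0 / (abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1]) + 1e-6)

        def build_path(tau):
            """One ant walks from START toward GOAL, biased by pheromone and heuristic."""
            path, current, visited = [START], START, {START}
            while current != GOAL and len(path) < 50:
                options = [n for n in neighbours(current) if n not in visited]
                if not options:
                    return None                               # dead end: discard this ant
                weights = [(tau[(current, n)] ** ALPHA) * (heuristic(n) ** BETA) for n in options]
                current = random.choices(options, weights=weights)[0]
                visited.add(current)
                path.append(current)
            return path if current == GOAL else None

        def aco(iterations=60, ants=20):
            cells = [(r, c) for r in range(5) for c in range(5)]
            tau = {(a, b): 1.0 for a in cells for b in neighbours(a)}
            best = None
            for _ in range(iterations):
                paths = [p for p in (build_path(tau) for _ in range(ants)) if p]
                for k in tau:                                 # evaporation
                    tau[k] *= (1.0 - RHO)
                for p in paths:                               # deposit: shorter paths deposit more
                    for a, b in zip(p, p[1:]):
                        tau[(a, b)] += Q / len(p)
                    if best is None or len(p) < len(best):
                        best = p
            return best

        random.seed(0)
        print(aco())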

  20. Efforts toward an autonomous wheelchair - biomed 2011.

    PubMed

    Barrett, Steven; Streeter, Robert

    2011-01-01

    An autonomous wheelchair is in development to provide mobility to those with significant physical challenges. The overall goal of the project is to develop a wheelchair that is fully autonomous with the ability to navigate about an environment and negotiate obstacles. As a starting point for the project, we have reverse-engineered the joystick control system of an off-the-shelf commercially available wheelchair. The joystick control has been replaced with a microcontroller-based system. The microcontroller has the capability to interface with a number of subsystems currently under development including wheel odometers, obstacle avoidance sensors, and ultrasonic-based wall sensors. This paper will discuss the microcontroller-based system and provide a detailed system description. Results of this study may be adapted to commercial or military robot control.

  1. Habituation: a non-associative learning rule design for spiking neurons and an autonomous mobile robots implementation.

    PubMed

    Cyr, André; Boukadoum, Mounir

    2013-03-01

    This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neurosciences research, the originality of this rule includes modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data from the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextual irrelevant information.
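
    A classical scalar habituation model captures the behaviour described above: repeated stimulation attenuates a synaptic efficacy, and the efficacy recovers once the stimulus stops. The sketch below is such a model in plain Python, not the authors' spiking-neuron rule; the rate constants are illustrative.

        def habituation_series(stimulus, alpha=0.2, tau=0.05, y0=1.0):
            """Simple habituation dynamics for a synaptic efficacy y in [0, y0].

            Each time step:  y += tau * (y0 - y) - alpha * y * s
            where s is 1 when the stimulus is present and 0 otherwise.  Repeated
            stimulation drives y down (attenuated response); in the absence of the
            stimulus y recovers toward its resting value y0.
            """
            y, history = y0, []
            for s in stimulus:
                y += tau * (y0 - y) - alpha * y * s
                y = max(0.0, min(y0, y))
                history.append(round(y, 3))
            return history

        # 20 steps of persistent stimulation followed by 20 quiet steps
        trace = habituation_series([1] * 20 + [0] * 20)
        print("after stimulation:", trace[19])   # strongly attenuated
        print("after recovery:   ", trace[-1])   # drifting back toward 1.0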

  2. Mobile transporter path planning

    NASA Technical Reports Server (NTRS)

    Baffes, Paul; Wang, Lui

    1990-01-01

    The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA is also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
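
    As a concrete illustration of the tournament selection and crossover operators examined in the paper, the sketch below evolves a fixed-length list of grid moves toward a goal cell. It is a generic GA example, not the mobile transporter planner; the move encoding, fitness function, and parameters are invented.

        import random

        MOVES = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}   # down, up, right, left
        START, GOAL, LENGTH = (0, 0), (5, 7), 14

        def fitness(chromosome):
            """Lower is better: Manhattan distance to the goal after executing the moves."""
            r, c = START
            for gene in chromosome:
                dr, dc = MOVES[gene]
                r, c = r + dr, c + dc
            return abs(r - GOAL[0]) + abs(c - GOAL[1])

        def tournament_select(population, k=3):
            """Pick the fittest of k randomly chosen individuals."""
            return min(random.sample(population, k), key=fitness)

        def one_point_crossover(a, b):
            """Swap tails of two equal-length chromosomes at a random cut point."""
            cut = random.randint(1, LENGTH - 1)
            return a[:cut] + b[cut:], b[:cut] + a[cut:]

        random.seed(2)
        population = [[random.randrange(4) for _ in range(LENGTH)] for _ in range(40)]
        for _ in range(60):
            next_gen = []
            while len(next_gen) < len(population):
                child1, child2 = one_point_crossover(tournament_select(population),
                                                     tournament_select(population))
                if random.random() < 0.1:                    # small mutation rate
                    child1[random.randrange(LENGTH)] = random.randrange(4)
                next_gen.extend([child1, child2])
            population = next_gen
        best = min(population, key=fitness)
        print(best, "residual distance:", fitness(best))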

  3. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The resulting information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons, and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  4. Ego-location and situational awareness in semistructured environments

    NASA Astrophysics Data System (ADS)

    Goodsell, Thomas G.; Snorrason, Magnus S.; Stevens, Mark R.; Stube, Brian; McBride, Jonah

    2003-09-01

    The success of any potential application for mobile robots depends largely on the specific environment where the application takes place. Practical applications are rarely found in highly structured environments, but unstructured environments (such as natural terrain) pose major challenges to any mobile robot. We believe that semi-structured environments-such as parking lots-provide a good opportunity for successful mobile robot applications. Parking lots tend to be flat and smooth, and cars can be uniquely identified by their license plates. Our scenario is a parking lot where only known vehicles are supposed to park. The robot looks for vehicles that do not belong in the parking lot. It checks both license plates and vehicle types, in case the plate is stolen from an approved vehicle. It operates autonomously, but reports back to a guard who verifies its performance. Our interest is in developing the robot's vision system, which we call Scene Estimation & Situational Awareness Mapping Engine (SESAME). In this paper, we present initial results from the development of two SESAME subsystems, the ego-location and license plate detection systems. While their ultimate goals are obviously quite different, our design demonstrates that by sharing intermediate results, both tasks can be significantly simplified. The inspiration for this design approach comes from the basic tenets of Situational Awareness (SA), where the benefits of holistic perception are clearly demonstrated over the more typical designs that attempt to solve each sensing/perception problem in isolation.

  5. Intelligence Level Performance Standards Research for Autonomous Vehicles

    PubMed Central

    Bostelman, Roger B.; Hong, Tsai H.; Messina, Elena

    2017-01-01

    United States and European safety standards have evolved to protect workers near Automatic Guided Vehicles (AGV’s). However, performance standards for AGV’s and mobile robots have only recently begun development. Lessons can be learned from research and standards efforts for mobile robots applied to emergency response and military applications. Research challenges, tests and evaluations, and programs to develop higher intelligence levels for vehicles can also be used to guide industrial AGV developments towards more adaptable and intelligent systems. These other efforts also provide useful standards development criteria for AGV performance test methods. Current standards areas being considered for AGVs are for docking, navigation, obstacle avoidance, and the ground truth systems that measure performance. This paper provides a look to the future with standards developments in both the performance of vehicles and the dynamic perception systems that measure intelligent vehicle performance. PMID:28649189

  6. Intelligence Level Performance Standards Research for Autonomous Vehicles.

    PubMed

    Bostelman, Roger B; Hong, Tsai H; Messina, Elena

    2015-01-01

    United States and European safety standards have evolved to protect workers near Automatic Guided Vehicles (AGV's). However, performance standards for AGV's and mobile robots have only recently begun development. Lessons can be learned from research and standards efforts for mobile robots applied to emergency response and military applications. Research challenges, tests and evaluations, and programs to develop higher intelligence levels for vehicles can also be used to guide industrial AGV developments towards more adaptable and intelligent systems. These other efforts also provide useful standards development criteria for AGV performance test methods. Current standards areas being considered for AGVs are for docking, navigation, obstacle avoidance, and the ground truth systems that measure performance. This paper provides a look to the future with standards developments in both the performance of vehicles and the dynamic perception systems that measure intelligent vehicle performance.

  7. Hardware Design and Testing of SUPERball, A Modular Tensegrity Robot

    NASA Technical Reports Server (NTRS)

    Sabelhaus, Andrew P.; Bruce, Jonathan; Caluwaerts, Ken; Chen, Yangxin; Lu, Dizhou; Liu, Yuejia; Agogino, Adrian K.; SunSpiral, Vytas; Agogino, Alice M.

    2014-01-01

    We are developing a system of modular, autonomous "tensegrity end-caps" to enable the rapid exploration of untethered tensegrity robot morphologies and functions. By adopting a self-contained modular approach, different end-caps with various capabilities (such as peak torques, or motor speeds), can be easily combined into new tensegrity robots composed of rods, cables, and actuators of different scale (such as in length, mass, peak loads, etc). As a first step in developing this concept, we are in the process of designing and testing the end-caps for SUPERball (Spherical Underactuated Planetary Exploration Robot), a project at the Dynamic Tensegrity Robotics Lab (DTRL) within NASA Ames's Intelligent Robotics Group. This work discusses the evolving design concepts and test results that have gone into the structural, mechanical, and sensing aspects of SUPERball. This representative tensegrity end-cap design supports robust and repeatable untethered mobility tests of the SUPERball, while providing high force, high displacement actuation, with a low-friction, compliant cabling system.

  8. Autonomous Mobile Robots.

    DTIC Science & Technology

    1986-01-30

  9. DEMONSTRATION OF AUTONOMOUS AIR MONITORING THROUGH ROBOTICS

    EPA Science Inventory

    This project included modifying an existing teleoperated robot to include autonomous navigation, large object avoidance, and air monitoring and demonstrating that prototype robot system in indoor and outdoor environments. An existing teleoperated "Surveyor" robot developed by ARD...

  10. Collective search by mobile robots using alpha-beta coordination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsmith, S.Y.; Robinett, R. III

    1998-04-01

    One important application of mobile robots is searching a geographical region to locate the origin of a specific sensible phenomenon. Mapping mine fields, extraterrestrial and undersea exploration, the location of chemical and biological weapons, and the location of explosive devices are just a few potential applications. Teams of robotic bloodhounds have a simple common goal: to converge on the location of the source phenomenon, confirm its intensity, and to remain aggregated around it until directed to take some other action. In cases where human intervention through teleoperation is not possible, the robot team must be deployed in a territory without supervision, requiring an autonomous decentralized coordination strategy. This paper presents the alpha-beta coordination strategy, a family of collective search algorithms that are based on dynamic partitioning of the robotic team into two complementary social roles according to a sensor-based status measure. Robots in the alpha role are risk takers, motivated to improve their status by exploring new regions of the search space. Robots in the beta role are motivated to improve but are conservative, and tend to remain aggregated and stationary until the alpha robots have identified better regions of the search space. Roles are determined dynamically by each member of the team based on the status of the individual robot relative to the current state of the collective. Partitioning the robot team into alpha and beta roles results in a balance between exploration and exploitation, and can yield collective energy savings and improved resistance to sensor noise and defectors. Alpha robots waste energy exploring new territory, and are more sensitive to the effects of ambient noise and to defectors reporting inflated status. Beta robots conserve energy by moving in a direct path to regions of confirmed high status.
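
    A toy reading of the alpha-beta role assignment described above is sketched below: each robot compares its sensed status with the best status in the collective, alphas make exploratory moves that they keep only if their status improves, and betas hold position. The intensity field, threshold, and update rule are assumptions made for illustration, not the original algorithm.

        import random

        def source_intensity(pos, source=(8.0, 3.0)):
            """Hypothetical sensed intensity: stronger closer to the source."""
            return 1.0 / (1.0 + (pos[0] - source[0]) ** 2 + (pos[1] - source[1]) ** 2)

        def step(robots):
            """One alpha-beta coordination update for a list of (x, y) robot positions."""
            status = [source_intensity(p) for p in robots]
            best = max(status)
            new_positions = []
            for pos, s in zip(robots, status):
                if s < 0.9 * best:                   # alpha role: explore, keep moves that improve status
                    trial = (pos[0] + random.uniform(-1, 1), pos[1] + random.uniform(-1, 1))
                    new_positions.append(trial if source_intensity(trial) > s else pos)
                else:                                # beta role: conservative, hold position near the best
                    new_positions.append(pos)
            return new_positions

        random.seed(3)
        robots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(6)]
        for _ in range(200):
            robots = step(robots)
        print([(round(x, 1), round(y, 1)) for x, y in robots])   # positions should drift toward (8, 3)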

  11. Controlling Herds of Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2006-01-01

    A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.
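
    The potential-fields idea proposed in the document can be illustrated with a generic sketch: each herd member follows an attractive gradient toward a leader-assigned goal plus pairwise repulsion that maintains spacing. The gains and positions below are invented; this is not the JPL simulation code.

        import numpy as np

        def potential_field_step(positions, goal, k_att=0.1, k_rep=0.5, spacing=1.5):
            """Move each herd member one step along the negative gradient of a potential.

            positions : (N, 2) array of member positions
            goal      : (2,) goal position broadcast by the leader
            Attraction pulls members toward the goal; repulsion keeps them `spacing` apart.
            """
            positions = np.asarray(positions, dtype=float)
            new_positions = positions.copy()
            for i, p in enumerate(positions):
                force = k_att * (goal - p)                             # attractive term
                for j, q in enumerate(positions):
                    if i == j:
                        continue
                    d = np.linalg.norm(p - q)
                    if 1e-9 < d < spacing:
                        force += k_rep * (p - q) / d * (spacing - d)   # push apart when too close
                new_positions[i] = p + force
            return new_positions

        goal = np.array([10.0, 10.0])
        herd = np.array([[0.0, 0.0], [0.5, 0.2], [0.2, 0.8], [1.0, 1.0]])
        for _ in range(100):
            herd = potential_field_step(herd, goal)
        print(np.round(herd, 2))    # members settle spread out around the goal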

  12. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, Aed M.; Ward, Clyde R.; Jones, Joel D.; Mallet, William R.; Harpring, Larry J.; Collins, Montenius X.; Anderson, Erin K.

    1999-01-01

    A mobile robotic system that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high-performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot according to pre-programmed paths. The operators may revise the preselected movements of the robotic system by Ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds from 1/2-inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console.

  13. Mobile autonomous robotic apparatus for radiologic characterization

    DOEpatents

    Dudar, A.M.; Ward, C.R.; Jones, J.D.; Mallet, W.R.; Harpring, L.J.; Collins, M.X.; Anderson, E.K.

    1999-08-10

    A mobile robotic system is described that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers or hallways, equipment, walls and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, the use of laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in limited space areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high-performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot according to pre-programmed paths. The operators may revise the preselected movements of the robotic system by Ethernet communications to remonitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds from 1/2-inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and the locations of the radiation for real-time display on computer monitors at a central command console. 4 figs.

  14. Navigating a Mobile Robot Across Terrain Using Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Howard, Ayanna; Bon, Bruce

    2003-01-01

    A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
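
    A minimal Mamdani-style example of the IF-THEN structure described above is sketched below: a single traversability input is fuzzified with triangular membership functions, three rules map it to a speed command, and the output is recovered by centroid defuzzification. The membership breakpoints and rule base are illustrative assumptions, not the strategy's actual rules.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

        def speed_from_traversability(index):
            """Mamdani-style inference: traversability index in [0, 1] -> speed in m/s.

            Rules: IF traversability is LOW  THEN speed is SLOW
                   IF traversability is MED  THEN speed is MODERATE
                   IF traversability is HIGH THEN speed is FAST
            """
            low, med, high = tri(index, -0.1, 0.0, 0.5), tri(index, 0.0, 0.5, 1.0), tri(index, 0.5, 1.0, 1.1)
            speeds = np.linspace(0.0, 1.0, 101)                    # output universe of discourse
            slow = tri(speeds, -0.1, 0.0, 0.4)
            moderate = tri(speeds, 0.2, 0.5, 0.8)
            fast = tri(speeds, 0.6, 1.0, 1.1)
            aggregated = np.maximum.reduce([np.minimum(low, slow),  # clip each consequent by its rule strength
                                            np.minimum(med, moderate),
                                            np.minimum(high, fast)])
            return float(np.sum(speeds * aggregated) / (np.sum(aggregated) + 1e-12))  # centroid defuzzification

        for idx in (0.1, 0.5, 0.9):
            print(f"traversability {idx:.1f} -> speed {speed_from_traversability(idx):.2f} m/s")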

  15. Self mobile space manipulator project

    NASA Technical Reports Server (NTRS)

    Brown, H. Ben; Friedman, Mark; Xu, Yangsheng; Kanade, Takeo

    1992-01-01

    A relatively simple, modular, low mass, low cost robot is being developed for space EVA that is large enough to be independently mobile on a space station or platform exterior, yet versatile enough to accomplish many vital tasks. The robot comprises two long flexible links connected by a rotary joint, with 2-DOF 'wrist' joints and grippers at each end. It walks by gripping pre-positioned attachment points, such as trusswork nodes, and alternately shifting its base of support from one foot (gripper) to the other. The robot can perform useful tasks such as visual inspection, material transport, and light assembly by manipulating objects with one gripper, while stabilizing itself with the other. At SOAR '90, we reported development of 1/3 scale robot hardware, modular trusswork to serve as a locomotion substrate, and a gravity compensation system to allow laboratory tests of locomotion strategies on the horizontal face of the trusswork. In this paper, we report on project progress including the development of: (1) adaptive control for automatic adjustment to loads; (2) enhanced manipulation capabilities; (3) machine vision, including the use of neural nets, to guide autonomous locomotion; (4) locomotion between orthogonal trusswork faces; and (5) improved facilities for gravity compensation and telerobotic control.

  16. Real-time adaptive off-road vehicle navigation and terrain classification

    NASA Astrophysics Data System (ADS)

    Muller, Urs A.; Jackel, Lawrence D.; LeCun, Yann; Flepp, Beat

    2013-05-01

    We are developing a complete, self-contained autonomous navigation system for mobile robots that learns quickly, uses commodity components, and has the added benefit of emitting no radiation signature. It builds on the autonomous navigation technology developed by Net-Scale and New York University during the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program and takes advantage of recent scientific advancements achieved during the DARPA Deep Learning program. In this paper we will present our approach and algorithms, show results from our vision system, discuss lessons learned from the past, and present our plans for further advancing vehicle autonomy.

  17. Quantifying Traversability of Terrain for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Seraji, Homayoun; Werger, Barry

    2005-01-01

    A document presents an updated discussion on a method of autonomous navigation for a robotic vehicle navigating across rough terrain. The method involves, among other things, the use of a measure of traversability, denoted the fuzzy traversability index, which embodies the information about the slope and roughness of terrain obtained from analysis of images acquired by cameras mounted on the robot. The improvements presented in the report focus on the use of the fuzzy traversability index to generate a traversability map and a grid map for planning the safest path for the robot. Once grid traversability values have been computed, they are utilized for rejecting unsafe path segments and for computing a traversal-cost function for ranking candidate paths, selected by a search algorithm, from a specified initial position to a specified final position. The output of the algorithm is a set of waypoints designating a path having a minimal traversal cost.
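
    The ranking step described above (reject unsafe segments, then order candidate paths by traversal cost) can be sketched as follows. The grid of traversability values, the unsafe threshold, and the reciprocal-traversability cost are assumptions made for illustration, not the report's actual cost function.

        # Grid of fuzzy traversability values in [0, 1]; higher = easier to traverse
        GRID = [[0.9, 0.8, 0.2, 0.9],
                [0.9, 0.1, 0.3, 0.9],
                [0.8, 0.7, 0.9, 0.9],
                [0.9, 0.9, 0.9, 0.9]]
        UNSAFE = 0.3          # segments through cells at or below this value are rejected

        def traversal_cost(path):
            """Cost of a candidate path given as a list of (row, col) waypoints.

            Returns None if any cell on the path is unsafe; otherwise sums the
            reciprocal traversability so hard cells cost more than easy ones.
            """
            cost = 0.0
            for r, c in path:
                t = GRID[r][c]
                if t <= UNSAFE:
                    return None                     # reject unsafe path segment
                cost += 1.0 / t
            return cost

        candidates = [
            [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)],   # long way around
            [(0, 0), (1, 1), (2, 2), (3, 3)],                           # short but crosses a 0.1 cell
            [(0, 0), (1, 0), (2, 1), (2, 2), (3, 3)],                   # compromise
        ]
        ranked = sorted((c for c in candidates if traversal_cost(c) is not None), key=traversal_cost)
        print("best path:", ranked[0], "cost:", round(traversal_cost(ranked[0]), 2))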

  18. An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry

    NASA Astrophysics Data System (ADS)

    Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro

    This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning only by dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive by utilizing a particle filter on a 2D grid map generated by projecting 3D points on to the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
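
    Gyro-assisted odometry of the kind described here takes heading from the integrated gyro yaw rate, which drifts far less than heading derived from wheel odometry under slip, while travelled distance still comes from the encoders. A minimal dead-reckoning sketch in that spirit follows; it is not the Tsukuba Challenge code, and the sample rate and input data are assumptions.

        import math

        def gyro_assisted_odometry(encoder_distances, gyro_yaw_rates, dt=0.02,
                                   x=0.0, y=0.0, theta=0.0):
            """Dead reckoning with encoder distance and gyro heading.

            encoder_distances : per-step travelled distance from wheel encoders (m)
            gyro_yaw_rates    : per-step yaw rate from the gyro (rad/s)
            dt                : sample period (s)
            Returns the final pose (x, y, theta).
            """
            for d, omega in zip(encoder_distances, gyro_yaw_rates):
                theta += omega * dt                  # heading from the gyro, not from wheel odometry
                x += d * math.cos(theta)
                y += d * math.sin(theta)
            return x, y, theta

        # 500 steps of driving roughly a quarter circle at 1 m/s with a constant yaw rate
        steps = 500
        dist = [1.0 * 0.02] * steps                  # 1 m/s at a 20 ms sample period
        yaw = [math.pi / 2 / (steps * 0.02)] * steps  # turn 90 degrees over 10 s
        x, y, theta = gyro_assisted_odometry(dist, yaw)
        print(round(x, 2), round(y, 2), round(math.degrees(theta), 1))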

  19. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    A judge for the NASA-WPI Sample Return Robot Centennial Challenge follows a robot on the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  20. A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims.

    PubMed

    Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie

    2014-12-01

    Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous robotic control of rescue robots in disaster environments by allowing a human operator to cooperate with a rescue robot and share tasks such as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A direction-based exploration technique is integrated into the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes with different sizes and varying types of configurations.

  1. Utah State University's T2 ODV mobility analysis

    NASA Astrophysics Data System (ADS)

    Davidson, Morgan E.; Bahl, Vikas; Wood, Carl G.

    2000-07-01

    In response to ultra-high maneuverability vehicle requirements, Utah State University (USU) has developed an autonomous vehicle with unique mobility and maneuverability capabilities. This paper describes a study of the mobility of the USU T2 Omni-Directional Vehicle (ODV). The T2 vehicle is a mid-scale (625 kg), second-generation ODV mobile robot with six independently driven and steered wheel assemblies. The six-wheel, independent steering system is capable of unlimited steering rotation, presenting a unique solution to enhanced vehicle mobility requirements. This mobility study focuses on energy consumption in three basic experiments, comparing two modes of steering: Ackerman and ODV. The experiments are all performed on the same vehicle without any physical changes to the vehicle itself, providing a direct comparison of these two steering methodologies. A computer simulation of the T2 mechanical and control system dynamics is described.

  2. Magician Simulator. A Realistic Simulator for Heterogenous Teams of Autonomous Robots

    DTIC Science & Technology

    2011-01-18

    IMU, and LIDAR systems for identifying and tracking mobile OOI at long range (>20m), providing early warnings and allowing neutralization from a... LIDAR and Computer Vision template-based feature tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains two maps, a physical map and an influence map (location of hostile OOI, explored and unexplored

  3. Autonomous detection of indoor and outdoor signs

    NASA Astrophysics Data System (ADS)

    Holden, Steven; Snorrason, Magnus; Goodsell, Thomas; Stevens, Mark R.

    2005-05-01

    Most goal-oriented mobile robot tasks involve navigation to one or more known locations. This is generally done using GPS coordinates and landmarks outdoors, or wall-following and fiducial marks indoors. Such approaches ignore the rich source of navigation information that is already in place for human navigation in all man-made environments: signs. A mobile robot capable of detecting and reading arbitrary signs could be tasked using directions that are intuitive to humans, and it could report its location relative to intuitive landmarks (a street corner, a person's office, etc.). Such ability would not require active marking of the environment and would be functional in the absence of GPS. In this paper we present an updated version of a system we call Sign Understanding in Support of Autonomous Navigation (SUSAN). This system relies on cues common to most signs: the presence of text, vivid color, and compact shape. By not relying on templates, SUSAN can detect a wide variety of signs: traffic signs, street signs, store-name signs, building directories, room signs, etc. In this paper we focus on the text detection capability. We present results summarizing probability of detection and false alarm rate across many scenes containing signs of very different designs and in a variety of lighting conditions.

  4. Compliant-linkage kinematic design for multi-degree-of-freedom mobile robots

    NASA Astrophysics Data System (ADS)

    Borenstein, Johann

    1993-05-01

    Multi-degree-of-freedom (MDOF) vehicles have many potential advantages over conventional (i.e., 2-DOF) vehicles. For example, MDOF vehicles can travel sideways and they can negotiate tight turns more easily. In addition, some MDOF designs provide better payload capability, better traction, and improved static and dynamic stability. However, MDOF vehicles with more than three degrees-of-freedom are difficult to control because of their overconstrained nature. These difficulties translate into severe wheel slippage or jerky motion under certain driving conditions. In the past, these problems limited the use of MDOF vehicles to applications where the vehicle would follow a guide-wire, which would correct wheel slippage and control errors. By contrast, autonomous or semi-autonomous mobile robots usually rely on dead-reckoning between periodic absolute position updates and their performance is diminished by excessive wheel slippage. This paper introduces a new concept in the kinematic design of MDOF vehicles. This concept is based on the provision of a compliant linkage between drive wheels or drive axles. Simulation results indicate that compliant linkage makes it possible to overcome the control problems found in conventional MDOF vehicles and reduces the amount of wheel slippage to the same level as (or less than) that found on a comparable 2-DOF vehicle.

  5. The development of a lightweight modular compliant surface bio-inspired robot

    NASA Astrophysics Data System (ADS)

    Stone, David L.; Cranney, John

    2004-09-01

    The DARPA-sponsored Compliant Surface Robotics (CSR) program pursues development of a high mobility, lightweight, modular, morphable robot for military forces in the field and for other industrial uses. The USTLAB effort builds on proof of concept feasibility studies and demonstration of a 4, 6, or 8 wheeled modular vehicle with articulated leg-wheel assemblies. In Phase I, basic open plant stability was proven for climbing over obstacles ~18 inches high and traversing ~75 degree inclines (up, down, or sideways) in a platform of approximately 15 kilograms. At the completion of Phase II, we have completed mechanical and electronics engineering design and achieved changes which currently enable future work in active articulation, enabling autonomous reconfiguration for a wide variety of terrains, including upside down operations (in case of flip over), and we have reduced platform weight by one third. Currently the vehicle weighs 10 kilograms and will grow marginally as additional actuation, MEMS-based organic sensing, payload, and autonomous processing is added. The CSR vehicle's modular, spider-like configuration facilitates adaptation to many uses and compliance over rugged terrain. The developmental process and the vehicle characteristics will be discussed.

  6. Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve

    NASA Technical Reports Server (NTRS)

    Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.

    2013-01-01

    This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.

  7. Immunology-directed methods for distributed robotics: a novel immunity-based architecture for robust control and coordination

    NASA Astrophysics Data System (ADS)

    Singh, Surya P. N.; Thayer, Scott M.

    2002-02-01

    This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to address the system's perceived need(s) in a timely manner. This method improves on initial developments in this area by including often overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.

  8. Control of a free-flying robot manipulator system

    NASA Technical Reports Server (NTRS)

    Alexander, H.; Cannon, R. H., Jr.

    1985-01-01

    The goal of the research is to develop and test control strategies for a self-contained, free-flying space robot. Such a robot would perform operations in space similar to those currently handled by astronauts during extravehicular activity (EVA). The focus of the work is to develop and carry out a program of research with a series of physical Satellite Robot Simulator Vehicles (SRSV's), two-dimensionally freely mobile laboratory models of autonomous free-flying space robots such as might perform extravehicular functions associated with operation of a space station or repair of orbiting satellites. The development of the SRSV and of some of the controller subsystems is described. The two-link arm was fitted to the SRSV base, and researchers explored the open-loop characteristics of the arm and thruster actuators. Work began on building the software foundation necessary for use of the on-board computer, as well as hardware and software for a local vision system for target identification and tracking.

  9. Hardware platform for multiple mobile robots

    NASA Astrophysics Data System (ADS)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

    This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link will carry control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link will carry commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, different sensors, a stereo camera, and a Wireless Local Area Network (WLAN) for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows a straightforward integration with the TCP/IP Internet protocols. For inspection of the environment, the robots are equipped with a wide variety of sensors such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable the detection of obstacles, measurement of distance, and creation of a map of the room.

  10. Workshop on Critical ORI Issues Held in Bordeaux, France on October 27 - 29, 1992. Program and Abstracts.

    DTIC Science & Technology

    1992-10-29

    These people try to make their robotic vehicle as intelligent and autonomous as possible with the current state of technology. The robot only interacts... Robotics Peter J. Burt David Sarnoff Research Center Princeton, NJ 08543-5300 U.S.A. The ability of an operator to drive a remotely piloted vehicle depends...RESUPPLY - System which can rapidly and autonomously load and unload palletized ammunition. (18) AUTONOMOUS COMBAT EVACUATION VEHICLE - Robotic arms

  11. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    DTIC Science & Technology

    2017-06-01

    A NEW TECHNIQUE FOR ROBOT VISION IN AUTONOMOUS UNDERWATER VEHICLES USING THE COLOR SHIFT IN UNDERWATER IMAGING, by Jake A. Jones, June 2017, Master's thesis. Developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles. A new technique is developed and...

  12. Close-range sensors for small unmanned bottom vehicles: update

    NASA Astrophysics Data System (ADS)

    Bernstein, Charles L.

    2000-07-01

    The Surf Zone Reconnaissance Project is developing sensors for small, autonomous, Underwater Bottom-crawling Vehicles. The objective is to enable small, crawling robots to autonomously detect and classify mines and obstacles on the ocean bottom in depths between 0 and 10 feet. We have identified a promising set of techniques that will exploit the electromagnetic, shape, texture, image, and vibratory-modal features of these images. During FY99 and FY00 we have worked toward refining these techniques. Signature data sets have been collected for a standard target set to facilitate the development of sensor fusion and target detection and classification algorithms. Specific behaviors, termed microbehaviors, are developed to utilize the robot's mobility to position and operate the sensors. A first generation, close-range sensor suite, composed of 5 sensors, will be completed and tested on a crawling platform in FY00, and will be further refined and demonstrated in FY01 as part of the Mine Countermeasures 6.3 core program sponsored by the Office of Naval Research.

  13. Who's Got the Bridge? - Towards Safe, Robust Autonomous Operations at NASA Langley's Autonomy Incubator

    NASA Technical Reports Server (NTRS)

    Allen, B. Danette; Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Crisp, Vicki K.

    2015-01-01

    NASA aeronautics research has made decades of contributions to aviation. Both aircraft and air traffic management (ATM) systems in use today contain NASA-developed and NASA sponsored technologies that improve safety and efficiency. Recent innovations in robotics and autonomy for automobiles and unmanned systems point to a future with increased personal mobility and access to transportation, including aviation. Automation and autonomous operations will transform the way we move people and goods. Achieving this mobility will require safe, robust, reliable operations for both the vehicle and the airspace and challenges to this inevitable future are being addressed now in government labs, universities, and industry. These challenges are the focus of NASA Langley Research Center's Autonomy Incubator whose R&D portfolio includes mission planning, trajectory and path planning, object detection and avoidance, object classification, sensor fusion, controls, machine learning, computer vision, human-machine teaming, geo-containment, open architecture design and development, as well as the test and evaluation environment that will be critical to prove system reliability and support certification. Safe autonomous operations will be enabled via onboard sensing and perception systems in both data-rich and data-deprived environments. Applied autonomy will enable safety, efficiency and unprecedented mobility as people and goods take to the skies tomorrow just as we do on the road today.

  14. Future of robotic space exploration: visions and prospects

    NASA Astrophysics Data System (ADS)

    Haidegger, Tamas

    Autonomous and remote-controlled mobile robots and manipulators have already proved their utility throughout several successful national and international space missions. NASA and ESA both sent robots and probes to Mars and beyond in the past years, and the Space Shuttle and Space Station Remote Manipulator Systems brought recognition to CSA. These achievements gained public attention and acknowledgement; however, all are based on technologies developed decades ago. Even the Canadian Dexter robotic arm, to be delivered to the International Space Station this year, had been completed many years ago. In the past decade robotics has become ubiquitous, and the speed of development has increased significantly, opening space for grandiose future plans of autonomous exploration missions. In the meantime, space agencies throughout the world insist on running their own costly human space flight programs. A recent workshop at NASA dealing with the issue stated that the primary reason behind US human space exploration is not science; rather the USA wants to maintain its international leadership in this field. A second space-race may fall upon us, fueled by the desire of the developing space powers to prove their capabilities, mainly driven by national pride. The aim of the paper is to introduce the upcoming unmanned space exploration scenarios that are already feasible with present-day robotic technology and to show their human-driven alternatives. Astronauts are to conquer Mars in the foreseeable future, but robots could go a lot further already. Serious engineering constraints and possibilities are to be discussed, along with issues beyond research and development. Future mission design planning must deal with both the technological and political aspects of space. Compromising on the scientific outcome may pay well by taking advantage of public awareness and national and international interests.

  15. Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot

    NASA Technical Reports Server (NTRS)

    Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory

    2002-01-01

    Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure, so having a dialogue interface to the plan construction tools can aid rapid completion of the process. But this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration describes work in progress toward building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface, in situations in which the user needs very targeted information about the plan, in situations where natural language can express complex ideas more concisely than GUI actions, or in situations in which a graphical user interface is not appropriate.

  16. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    University of Waterloo (Canada) Robotics Team members test their robot on the practice field one day prior to the NASA-WPI Sample Return Robot Centennial Challenge, Friday, June 15, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  17. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-14

    A University of Waterloo Robotics Team member tests their robot on the practice field two days prior to the NASA-WPI Sample Return Robot Centennial Challenge, Thursday, June 14, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  18. Autonomous surgical robotics using 3-D ultrasound guidance: feasibility study.

    PubMed

    Whitman, John; Fronheiser, Matthew P; Ivancevich, Nikolas M; Smith, Stephen W

    2007-10-01

    The goal of this study was to test the feasibility of using a real-time 3D (RT3D) ultrasound scanner with a transthoracic matrix array transducer probe to guide an autonomous surgical robot. Employing a fiducial alignment mark on the transducer to orient the robot's frame of reference and using simple thresholding algorithms to segment the 3D images, we tested the accuracy of using the scanner to automatically direct a robot arm that touched two needle tips together within a water tank. RMS measurement error was 3.8% or 1.58 mm for an average path length of 41 mm. Using these same techniques, the autonomous robot also performed simulated needle biopsies of a cyst-like lesion in a tissue phantom. This feasibility study shows the potential for 3D ultrasound guidance of an autonomous surgical robot for simple interventional tasks, including lesion biopsy and foreign body removal.

  19. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Intrepid Systems robot, foreground, and the University of Waterloo (Canada) robot, take to the practice field on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Robot teams will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  20. Real-Time 3D Sonar Modeling And Visualization

    DTIC Science & Technology

    1998-06-01

    Only fragments of this record survive extraction: figure captions describing views of the Manta sonar beam and Manta plus sonar from 1000 m off track; project personnel including NUWC sponsor Erik Chaum (Code 22), principal investigator Don Brutzman, Sonar Officer LT Kevin Byrne (USN), and Intelligence Officer CPT Russell Storms (USA); and a citation to "The Phoenix Autonomous Underwater Vehicle," chapter 13 of AI-Based Mobile Robots, edited by David Kortenkamp, Pete Bonasso and Robin Murphy.

  1. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver, left, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  2. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver, right, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  3. Neuromodulation as a Robot Controller: A Brain Inspired Strategy for Controlling Autonomous Robots

    DTIC Science & Technology

    2009-09-01

    To appear in IEEE Robotics and Automation Magazine (preprint). We present a strategy for controlling autonomous robots that is based on principles of neuromodulation in the mammalian brain. The surviving fragments of the abstract note that such neuromodulation lets an organism attend to a relevant object, ignore irrelevant distractions, and respond quickly and appropriately to the event, and that separate neuromodulators alter responses in different ways.

  4. Intelligent robots for planetary exploration and construction

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1992-01-01

    Robots capable of practical applications in planetary exploration and construction will require real-time, sensory-interactive, goal-directed control systems. A reference model architecture based on the NIST Real-time Control System (RCS) for real-time intelligent control systems is suggested. RCS partitions the control problem into four basic elements: behavior generation (or task decomposition), world modeling, sensory processing, and value judgment. It clusters these elements into computational nodes that have responsibility for specific subsystems, and arranges these nodes in hierarchical layers such that each layer has characteristic functionality and timing. Planetary exploration robots should have mobility systems that can safely maneuver over rough surfaces at high speeds. Walking machines and wheeled vehicles with dynamic suspensions are candidates. The technology of sensing and sensory processing has progressed to the point where real-time autonomous path planning and obstacle avoidance behavior is feasible. Map-based navigation systems will support long-range mobility goals and plans. Planetary construction robots must have high strength-to-weight ratios for lifting and positioning tools and materials in six degrees of freedom over large working volumes. A new generation of cable-suspended Stewart platform devices and inflatable structures is suggested for lifting and positioning materials and structures, as well as for excavation, grading, and manipulating a variety of tools and construction machinery.
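
    As a rough illustration of the RCS partitioning described above, the short Python sketch below shows one hypothetical control node combining the four elements (sensory processing, world modeling, value judgment, and behavior generation). The task names, cycle time, and scoring rule are invented for illustration and are not part of the NIST reference model.

        # Minimal sketch of one RCS-style control node (hypothetical interfaces).
        # Nodes like this would be stacked into hierarchical layers, each with its
        # own characteristic functionality and timing.

        class RCSNode:
            def __init__(self, name, cycle_time_s):
                self.name = name
                self.cycle_time_s = cycle_time_s   # characteristic timing of this layer
                self.world_model = {}              # node-local estimate of relevant state

            def sensory_processing(self, raw_observations):
                # Filter raw data down to the quantities this layer cares about.
                return {k: v for k, v in raw_observations.items() if v is not None}

            def world_modeling(self, processed):
                # Fold new observations into the node's world model.
                self.world_model.update(processed)
                return self.world_model

            def value_judgment(self, candidate_plans):
                # Score alternative plans; here, shorter plans are preferred.
                return min(candidate_plans, key=len)

            def behavior_generation(self, task):
                # Decompose the commanded task into subtasks for the layer below.
                candidates = [[f"{task}:step{i}" for i in range(n)] for n in (2, 3)]
                return self.value_judgment(candidates)

        node = RCSNode("mobility", cycle_time_s=0.1)
        node.world_modeling(node.sensory_processing({"range_m": 4.2, "slope_deg": None}))
        print(node.behavior_generation("traverse_terrain"))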

  5. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Intrepid Systems robot "MXR - Mark's Exploration Robot" takes to the practice field and tries to capture the white object in the foreground on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Intrepid Systems' robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  6. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Children visiting the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event try to catch basketballs being thrown by a robot from FIRST Robotics at Burncoat High School (Mass.) on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  7. Spatial abstraction for autonomous robot navigation.

    PubMed

    Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon

    2015-09-01

    Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.

  8. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    PubMed

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of an inertial sensor with six degrees of freedom (6-DoF), comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular vision-based object detection algorithm combining speeded-up robust features (SURF) and random sample consensus (RANSAC) was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. With SURF and RANSAC, improved accuracy can be expected because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data and root mean square errors (RMSEs).
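
    As a rough sketch of the fusion step described above (not the authors' implementation), the Python fragment below runs one predict/update cycle of a Kalman filter in which an IMU-style constant-velocity model propagates the state and a vision-derived position fix corrects it. The paper's EKF additionally linearizes a nonlinear model; that step is omitted here, and the state layout, time step, and noise covariances are illustrative assumptions.

        import numpy as np

        # Illustrative 2-D constant-velocity Kalman filter: IMU-style propagation
        # (predict) fused with a vision-derived position fix (update).
        dt = 0.1
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]])            # state: [x, y, vx, vy]
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]])             # vision measures position only
        Q = np.eye(4) * 0.01                     # process noise (IMU drift), made up
        R = np.eye(2) * 0.05                     # measurement noise (vision), made up

        def kf_step(x, P, z):
            # Predict with the motion model.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the vision position measurement z = [[x_meas], [y_meas]].
            y = z - H @ x                        # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x = x + K @ y
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = kf_step(np.zeros((4, 1)), np.eye(4), np.array([[0.5], [0.2]]))
        print(x.ravel())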

  9. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter

    PubMed Central

    Hancke, Gerhard P.

    2017-01-01

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of an inertial sensor with six degrees of freedom (6-DoF), comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular vision-based object detection algorithm combining speeded-up robust features (SURF) and random sample consensus (RANSAC) was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. With SURF and RANSAC, improved accuracy can be expected because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data and root mean square errors (RMSEs). PMID:28934102

  10. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    "Harry" a Goldendoodle is seen wearing a NASA backpack during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  11. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Team members of "Survey" drive their robot around the campus on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Survey team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  12. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Wunderkammer Laboratory Team leader Jim Rothrock, left, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "Cerberus" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Rothrock's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  13. Neuro Inspired Adaptive Perception and Control for Agile Mobility of Autonomous Vehicles in Uncertain and Hostile Environments

    DTIC Science & Technology

    2017-02-08

    Final report from the Georgia Tech Research Corporation (Atlanta, GA) for the MURI on Neuro-Inspired Adaptive Perception and Control. Only front-matter fragments survive in this record, including a citation to a paper on an optimal trajectory planning strategy via mixed-integer programming (IEEE Transactions on Robotics, 2015, doi: 10.1109/TRO...) and a list of project presentations.

  14. Resolved motion rate and resolved acceleration servo-control of wheeled mobile robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, P.F.; Neuman, C.P.; Carnegie-Mellon Univ., Pittsburgh, PA

    1989-01-01

    Accurate motion control of wheeled mobile robots (WMRs) is required for their application to autonomous, semi-autonomous and teleoperated tasks. The similarities between WMRs and stationary manipulators suggest that current, successful, model-based manipulator control algorithms may be applied to WMRs. Special characteristics of WMRs including higher-pairs, closed-chains, friction and unactuated and unsensed joints require innovative modeling methodologies. The WMR modeling challenge has been recently overcome, thus enabling the application of manipulator control algorithms to WMRs. This realization lays the foundation for significant technology transfer from manipulator control to WMR control. We apply two Cartesian-space manipulator control algorithms: resolved motion rate (kinematics-based) and resolved acceleration (dynamics-based) control to WMR servo-control. We evaluate simulation studies of two exemplary WMRs: Uranus (a three degree-of-freedom WMR constructed at Carnegie Mellon University), and Bicsun-Bicas (a two degree-of-freedom WMR being constructed at Sandia National Laboratories) under the control of these algorithms. Although resolved motion rate servo-control is adequate for the control of Uranus, resolved acceleration servo-control is required for the control of the mechanically simpler Bicsun-Bicas because it exhibits more dynamic coupling and nonlinearities. Successful accurate motion control of these WMRs in simulation is driving current experimental research studies. 18 refs., 7 figs., 5 tabs.
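
    For the kinematics-based half of the comparison, a resolved motion rate step can be sketched as below: a desired Cartesian body velocity (feedforward plus a proportional pose correction) is mapped to wheel rates through a wheel Jacobian. The three-omniwheel Jacobian, gains, and dimensions are generic illustrative assumptions, not the Uranus or Bicsun-Bicas models.

        import numpy as np

        # Resolved motion rate control (kinematics-based): command the wheel rates
        # that realize a desired Cartesian body velocity [vx, vy, wz].
        r = 0.05                                  # wheel radius [m] (assumed)
        L = 0.20                                  # wheel offset from body centre [m]
        angles = np.deg2rad([0.0, 120.0, 240.0])  # generic 3-omniwheel layout

        # Each row maps body velocity [vx, vy, wz] to one wheel's angular rate.
        J = np.array([[-np.sin(a), np.cos(a), L] for a in angles]) / r

        def resolved_rate_step(x_des, x_act, v_ff, kp=1.5):
            # Feedforward velocity plus proportional correction on the pose error.
            v_cmd = v_ff + kp * (x_des - x_act)   # desired [vx, vy, wz]
            return J @ v_cmd                      # wheel angular rates [rad/s]

        wheel_rates = resolved_rate_step(
            x_des=np.array([1.0, 0.0, 0.0]),
            x_act=np.array([0.9, 0.05, 0.0]),
            v_ff=np.array([0.2, 0.0, 0.0]))
        print(wheel_rates)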

  15. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
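
    As an illustration of one of the coverage strategies mentioned above, the sketch below generates outward-spiral waypoints and computes a yaw-rate command toward the next waypoint. The spacing, gain, and the assumed (x, y, heading) pose interface are hypothetical and are not taken from the LABRADOR software.

        import math

        # Outward-spiral coverage waypoints plus simple waypoint steering.
        # A fused GPS/IMU/odometry estimate is assumed to supply (x, y, yaw).

        def spiral_waypoints(cx, cy, spacing=2.0, turns=3, pts_per_turn=12):
            wps = []
            for k in range(turns * pts_per_turn + 1):
                theta = 2 * math.pi * k / pts_per_turn
                radius = spacing * theta / (2 * math.pi)   # Archimedean spiral
                wps.append((cx + radius * math.cos(theta),
                            cy + radius * math.sin(theta)))
            return wps

        def heading_command(pose, waypoint, k_heading=1.0):
            x, y, yaw = pose
            desired = math.atan2(waypoint[1] - y, waypoint[0] - x)
            err = math.atan2(math.sin(desired - yaw), math.cos(desired - yaw))
            return k_heading * err                          # yaw-rate command [rad/s]

        wps = spiral_waypoints(0.0, 0.0)
        print(len(wps), round(heading_command((0.0, 0.0, 0.0), wps[1]), 3))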

  16. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Intrepid Systems Team member Mark Curry, left, talks with NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck, right, about his robot named "MXR - Mark's Exploration Robot" on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  17. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Intrepid Systems Team member Mark Curry, right, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "MXR - Mark's Exploration Robot" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  18. Feasibility of Synergy-Based Exoskeleton Robot Control in Hemiplegia.

    PubMed

    Hassan, Modar; Kadone, Hideki; Ueno, Tomoyuki; Hada, Yasushi; Sankai, Yoshiyuki; Suzuki, Kenji

    2018-06-01

    Here, we present a study of exoskeleton robot control for hemiparesis based on inter-limb locomotor synergies. The robot control uses inter-limb locomotor synergies and kinesiological information from the non-paretic leg and a walking-aid cane to generate motion patterns for the assisted leg. The developed synergy-based system was tested against an autonomous robot control system in five patients with hemiparesis and varying locomotor abilities. Three of the participants were able to walk using the robot. Results from these participants showed an improved spatial symmetry ratio and more consistent step length with the synergy-based method compared with the autonomous method, while the increase in the range of motion for the assisted joints was larger with the autonomous system. The kinematic synergy distribution of the participants walking without the robot suggests a relationship between each participant's synergy distribution and his/her ability to control the robot: participants with two independent synergies accounting for approximately 80% of the data variability were able to walk with the robot. This observation was not consistently apparent with conventional clinical measures such as the Brunnstrom stages. This paper contributes to the field of robot-assisted locomotion therapy by introducing the concept of inter-limb synergies, demonstrating performance differences between synergy-based and autonomous robot control, and investigating the range of disability in which the system is usable.

  19. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  20. Situational reaction and planning

    NASA Technical Reports Server (NTRS)

    Yen, John; Pfluger, Nathan

    1994-01-01

    One problem faced in designing an autonomous mobile robot system is that there are many system parameters to define and optimize. While these parameters can be obtained for any given situation, determining what the parameters should be in all situations is difficult. The usual solution is to give the system general parameters that work in all situations, but this does not help the robot perform its best in a dynamic environment. Our approach is to develop a higher-level situation analysis module that adjusts the parameters by analyzing the goals and the history of sensor readings. By allowing the robot to change the system parameters based on its judgement of the situation, the robot will be able to better adapt to a wider set of possible situations. We use fuzzy logic in our implementation to reduce the number of basic situations the controller has to recognize. For example, a situation may be 60 percent open and 40 percent corridor, causing the optimal parameters to be somewhere between the optimal settings for the two extreme situations.
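
    A minimal sketch of that blending idea (60 percent open, 40 percent corridor) might look like the following; the situation names, membership values, and parameter values are invented for illustration.

        # Blend controller parameters by fuzzy situation membership so that a scene
        # that is 60% "open" and 40% "corridor" gets parameters between the two
        # extreme settings. All numbers below are illustrative.
        PARAMS = {
            "open":     {"max_speed": 1.0, "obstacle_gain": 0.5},
            "corridor": {"max_speed": 0.4, "obstacle_gain": 1.5},
        }

        def blend_parameters(memberships):
            total = sum(memberships.values())
            blended = {}
            for name in next(iter(PARAMS.values())):
                blended[name] = sum(
                    memberships[sit] * PARAMS[sit][name] for sit in memberships) / total
            return blended

        print(blend_parameters({"open": 0.6, "corridor": 0.4}))
        # -> {'max_speed': 0.76, 'obstacle_gain': 0.9}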

  1. Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.

    2001-01-01

    The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.

  2. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    NASA Astrophysics Data System (ADS)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Research results worldwide demonstrate the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. This paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
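
    For reference, the grayscale morphological operations themselves reduce to neighbourhood minima and maxima, as in the small NumPy sketch below; this only illustrates the operations and says nothing about the paper's FPGA-based reconfigurable computing environment.

        import numpy as np

        # Grayscale erosion/dilation over a 3x3 neighbourhood: the minimum/maximum
        # of each pixel's neighbourhood, computed here with shifted array views.

        def _neighborhood_stack(img):
            padded = np.pad(img, 1, mode="edge")
            shifts = [padded[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)]
            return np.stack(shifts)

        def erode(img):
            return _neighborhood_stack(img).min(axis=0)

        def dilate(img):
            return _neighborhood_stack(img).max(axis=0)

        img = np.random.randint(0, 256, (6, 6), dtype=np.uint8)
        print(erode(img).shape, dilate(img).dtype)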

  3. Women Warriors: Why the Robotics Revolution Changes the Combat Equation

    DTIC Science & Technology

    2016-03-01

    Only fragments of this article survive extraction (U.S. Army RDECOM, PRISM 6, no. 1, by Linell A. Letendre). They argue that an underappreciated factor is poised to alter the women in combat debate: the revolution in robotics and autonomous systems, the technology leap it affords, and the potential impact of robotic and autonomous systems on the future of combat.

  4. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Posters for the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event are seen posted around the campus on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  5. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Panoramic of some of the exhibits available on the campus of the Worcester Polytechnic Institute (WPI) during their "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Anthony Shrout)

  6. Object recognition for autonomous robot utilizing distributed knowledge database

    NASA Astrophysics Data System (ADS)

    Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji

    2003-10-01

    In this paper we present a novel method of object recognition for an autonomous robot utilizing a remote knowledge database. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects to understand them, including their physical characteristics.

  7. Estimation and Control for Autonomous Coring from a Rover Manipulator

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Backes, Paul; DiCicco, Matt; Bajracharya, Max

    2010-01-01

    A system consisting of a set of estimators and autonomous behaviors has been developed which allows robust coring from a low-mass rover platform while accommodating moderate rover slip. A redundant set of sensors, including a force-torque sensor, visual odometry, and accelerometers, is used to monitor discrete critical and operational modes, as well as to estimate continuous drill parameters during the coring process. A set of critical failure modes pertinent to shallow coring from a mobile platform is defined, and autonomous behaviors associated with each critical mode are used to maintain nominal coring conditions. Autonomous shallow coring is demonstrated from a low-mass rover using a rotary-percussive coring tool mounted on a 5 degree-of-freedom (DOF) arm. A new architecture, in which an arm-stabilized rotary-percussive tool is fed along the drill z-axis by the robotic arm, is validated. Particular attention is given to hole start with this architecture. An end-to-end coring sequence is demonstrated, in which the rover autonomously detects and then recovers from a series of slip events that exceeded 9 cm total displacement.
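
    A much-simplified sketch of the kind of slip monitoring described above is shown below: the commanded displacement is compared against the displacement reported by visual odometry, and a slip event is flagged past a threshold. The threshold, interfaces, and accumulation logic are hypothetical and are not the testbed code.

        import math

        # Hypothetical slip monitor: flag a slip event when commanded and
        # visual-odometry displacements disagree by more than a threshold.
        SLIP_THRESHOLD_M = 0.02   # per-check discrepancy treated as a slip event

        def check_slip(commanded_xy, vo_xy, accumulated=0.0):
            dx = commanded_xy[0] - vo_xy[0]
            dy = commanded_xy[1] - vo_xy[1]
            slip = math.hypot(dx, dy)
            return slip > SLIP_THRESHOLD_M, accumulated + slip

        event, total = check_slip(commanded_xy=(0.10, 0.00), vo_xy=(0.07, 0.01))
        print(event, round(total, 3))   # slip flag for this step plus running total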

  8. HERMIES-I: a mobile robot for navigation and manipulation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.; Barhen, J.; de Saussure, G.

    1985-01-01

    The purpose of this paper is to report the current status of investigations ongoing at the Center for Engineering Systems Advanced Research (CESAR) in the areas of navigation and manipulation in unstructured environments. The HERMIES-I mobile robot, a prototype of a series which contains many of the major features needed for remote work in hazardous environments, is discussed. Initial experimental work at CESAR has begun in the area of navigation. The paper briefly reviews some of the ongoing research in autonomous navigation and describes initial research with HERMIES-I and associated graphic simulation. Since the HERMIES robots will generally be composed of a variety of asynchronously controlled hardware components (such as manipulator arms, digital image sensors, sonars, etc.), it seems appropriate to consider future development of the HERMIES brain as a hypercube ensemble machine with concurrent computation and associated message passing. The basic properties of such a hypercube architecture are presented. Decision-making under uncertainty eventually permeates all of our work. Following a survey of existing analytical approaches, it was decided that a stronger theoretical basis is required. As such, this paper presents the framework for a recently developed hybrid uncertainty theory. 21 refs., 2 figs.

  9. Ascending Stairway Modeling: A First Step Toward Autonomous Multi-Floor Exploration

    DTIC Science & Technology

    2012-10-01

    Only fragments of this abstract survive extraction. They note that many robotics platforms are capable of ascending stairways and that all existing approaches for autonomous stair climbing rely on stairway detection, point to the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor building, and state that many ground robots are not capable of traversing tight spiral stairs, so those types are not the focus.

  10. Planning Flight Paths of Autonomous Aerobots

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric; Elfes, Alberto; Sharma, Shivanjli

    2009-01-01

    Algorithms for planning flight paths of autonomous aerobots (robotic blimps) to be deployed in scientific exploration of remote planets are undergoing development. These algorithms are also adaptable to terrestrial applications involving robotic submarines as well as aerobots and other autonomous aircraft used to acquire scientific data or to perform surveying or monitoring functions.

  11. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Visitors, some with their dogs, line up to make their photo inside a space suit exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  12. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    The bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" is seen wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  13. How to make an autonomous robot as a partner with humans: design approach versus emergent approach.

    PubMed

    Fujita, M

    2007-01-15

    In this paper, we discuss what factors are important to realize an autonomous robot as a partner with humans. We believe that it is important to interact with people without boring them, using verbal and non-verbal communication channels. We have already developed autonomous robots such as AIBO and QRIO, whose behaviours are manually programmed and designed. We realized, however, that this design approach has limitations; therefore we propose a new approach, intelligence dynamics, where interacting in a real-world environment using embodiment is considered very important. There are pioneering works related to this approach from brain science, cognitive science, robotics and artificial intelligence. We assert that it is important to study the emergence of entire sets of autonomous behaviours and present our approach towards this goal.

  14. Toward cognitive robotics

    NASA Astrophysics Data System (ADS)

    Laird, John E.

    2009-05-01

    Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should significantly improve Soar's abilities for control of robots. These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling of prior events and situations as well as facts about the world. Reinforcement learning provides the system with the ability to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic and visual representations that are critical to support spatial reasoning. We speculate on the future of unmanned systems and the need for cognitive robotics to support dynamic instruction and taskability.

  15. Robots could assist scientists working in Greenland

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2011-07-01

    GREENLAND—Tom Lane and Suk Joon Lee, recent graduates of Dartmouth University's Thayer School of Engineering, in Hanover, N. H., are standing outside in the frigid cold testing an autonomous robot that could help with scientific research and logistics in harsh polar environments. This summer, Lane, Lee, and others are at Summit Station, a U.S. National Science Foundation (NSF)-sponsored scientific research station in Greenland, fine-tuning a battery-powered Yeti robot as part of a team working on the NSF-funded Cool Robot project. The station, also known as Summit Camp, is located on the highest point of the Greenland Ice Sheet (72°N, 38°W, 3200 meters above sea level) near the middle of the island. It is a proving ground this season for putting the approximately 68-kilogram, 1-cubic-meter robot through its paces, including improving Yeti's mobility capabilities and field-testing the robot. (See the electronic supplement to this Eos issue for a video of Yeti in action (http://www.agu.org/eos_elec/).) During field-testing, plans call for the robot to collect data on elevation and snow surface characteristics, including accumulation. In addition, the robot will collect black carbon and elemental carbon particulate matter air samples around Summit Camp's power generator to help study carbon dispersion over snow.

  16. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    PubMed Central

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
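
    As a rough illustration of the particle-filter backbone of such an approach (not the authors' implementation), the sketch below weights robot-pose particles by a single range observation of a landmark and resamples them. The per-particle landmark filters of a full Rao-Blackwellized solution are omitted, and the measurement model and noise values are invented.

        import numpy as np

        # Minimal particle weighting + multinomial resampling step for the robot-path
        # part of a Rao-Blackwellized particle filter (landmark filters omitted).
        rng = np.random.default_rng(0)
        particles = rng.normal([0.0, 0.0], 0.5, size=(200, 2))   # (x, y) hypotheses

        def pf_update(particles, landmark_xy, measured_range, sigma=0.2):
            # Weight each particle by the likelihood of the observed range.
            pred = np.linalg.norm(particles - landmark_xy, axis=1)
            w = np.exp(-0.5 * ((measured_range - pred) / sigma) ** 2)
            w /= w.sum()
            # Multinomial resampling proportional to the weights.
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx]

        particles = pf_update(particles, landmark_xy=np.array([2.0, 1.0]),
                              measured_range=2.2)
        print(particles.mean(axis=0))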

  17. Estimation of visual maps with a robot network equipped with vision sensors.

    PubMed

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  18. Tactile surface classification for limbed robots using a pressure sensitive robot skin.

    PubMed

    Shill, Jacob J; Collins, Emmanuel G; Coyle, Eric; Clark, Jonathan

    2015-02-02

    This paper describes an approach to terrain identification based on pressure images generated through direct surface contact using a robot skin constructed around a high-resolution pressure sensing array. Terrain signatures for classification are formulated from the magnitude frequency responses of the pressure images. The initial experimental results for statically obtained images show that the approach yields classification accuracies [Formula: see text]. The methodology is extended to accommodate the dynamic pressure images anticipated when a robot is walking or running. Experiments with a one-legged hopping robot yield similar identification accuracies [Formula: see text]. In addition, the accuracies are independent of changing robot dynamics (i.e., when using different leg gaits). The paper further shows that the high-resolution capabilities of the sensor enable similarly textured surfaces to be distinguished. A correcting filter is developed to compensate for failures or faults that inevitably occur within the sensing array with continued use. Experimental results show that using the correcting filter can extend the effective operational lifespan of a high-resolution sensing array more than 6x in the presence of sensor damage. The results presented suggest this methodology can be extended to autonomous field robots, providing a robot with crucial information about the environment that can be used to aid stable and efficient mobility over rough and varying terrains.
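
    A toy sketch of the signature-and-classify idea, using synthetic stand-ins for the pressure images, could look like the following: the terrain signature is a normalized patch of the image's magnitude spectrum, and classification is by nearest stored signature. The patch size, classes, and random data are illustrative.

        import numpy as np

        # Terrain signature = normalized low-frequency patch of the 2-D magnitude
        # spectrum of a pressure image; classify by nearest stored signature.

        def signature(pressure_image, keep=8):
            mag = np.fft.fftshift(np.abs(np.fft.fft2(pressure_image)))
            c0, c1 = mag.shape[0] // 2, mag.shape[1] // 2
            patch = mag[c0 - keep // 2:c0 + keep // 2, c1 - keep // 2:c1 + keep // 2]
            return patch.ravel() / (np.linalg.norm(patch) + 1e-9)

        rng = np.random.default_rng(1)
        library = {name: signature(rng.random((32, 32))) for name in ("grass", "tile")}

        def classify(pressure_image):
            sig = signature(pressure_image)
            return min(library, key=lambda k: np.linalg.norm(sig - library[k]))

        print(classify(rng.random((32, 32))))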

  19. Automated exterior inspection of an aircraft with a pan-tilt-zoom camera mounted on a mobile robot

    NASA Astrophysics Data System (ADS)

    Jovančević, Igor; Larnier, Stanislas; Orteu, Jean-José; Sentenac, Thierry

    2015-11-01

    This paper deals with an automated preflight aircraft inspection using a pan-tilt-zoom camera mounted on a mobile robot moving autonomously around the aircraft. The general topic is an image processing framework for the detection and exterior inspection of different types of items, such as a closed or unlatched door, a mechanical defect on the engine, the integrity of the empennage, or damage caused by impacts or cracks. The detection step allows the system to focus on regions of interest and point the camera toward the item to be checked. It is based on the detection of regular shapes, such as rounded-corner rectangles, circles, and ellipses. The inspection task relies on clues such as the uniformity of isolated image regions, the convexity of segmented shapes, and the periodicity of the image intensity signal. The approach is applied to the inspection of four items of the Airbus A320: the oxygen bay handle, the air-inlet vent, the static ports, and the fan blades. The results are promising and demonstrate the feasibility of an automated exterior inspection.
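
    For the circular items, a detection step along these lines could be prototyped with OpenCV's Hough circle transform, as in the sketch below; the synthetic test image, blur, and Hough parameters are illustrative and are not the parameters used in the paper.

        import cv2
        import numpy as np

        # Detect circular items (e.g. a vent or static port) with the Hough circle
        # transform. The synthetic image stands in for a pan-tilt-zoom camera frame.
        img = np.zeros((240, 320), dtype=np.uint8)
        cv2.circle(img, (160, 120), 40, 255, thickness=3)       # synthetic "port"
        img = cv2.GaussianBlur(img, (5, 5), 1.5)

        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                   param1=100, param2=30, minRadius=20, maxRadius=80)
        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                print(f"circle at ({x}, {y}) with radius {r} px")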

  20. Decentralized reinforcement-learning control and emergence of motion patterns

    NASA Astrophysics Data System (ADS)

    Svinin, Mikhail; Yamada, Kazuyaki; Okhura, Kazuhiro; Ueda, Kanji

    1998-10-01

    In this paper we propose a system for studying the emergence of motion patterns in autonomous mobile robotic systems. The system implements instance-based reinforcement learning control. Three spaces are of importance in the formulation of the control scheme: the work space, the sensor space, and the action space. An important feature of our system is that all these spaces are assumed to be continuous. The core part of the system is a classifier system. Based on the sensory state space analysis, the control is decentralized and is specified at the lowest level of the control system. However, the local controllers are implicitly connected through the perceived environment information. Therefore, they constitute a dynamic environment with respect to each other. The proposed control scheme is tested under simulation for a mobile robot in a navigation task. It is shown that some patterns of global behavior, such as collision avoidance, wall-following, and light-seeking, can emerge from the local controllers.
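
    The controller above is instance-based and operates over continuous sensor and action spaces, which the toy sketch below does not reproduce; it only illustrates the underlying reinforcement-learning value update on a discretized navigation example with invented states, actions, and rewards.

        import random

        # Tabular Q-learning on a toy discretized navigation task; the states,
        # actions, and reward are invented for illustration only.
        ACTIONS = ["left", "forward", "right"]
        q = {}                      # (state, action) -> estimated value

        def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
            best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

        def choose_action(state, epsilon=0.2):
            if random.random() < epsilon or not any((state, a) in q for a in ACTIONS):
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

        a = choose_action("near_wall_left")
        q_update("near_wall_left", a, reward=1.0 if a == "right" else -0.1,
                 next_state="open_space")
        print(q)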

  1. Cooperating mobile robots

    DOEpatents

    Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.; Byrne, Raymond H.

    2004-02-03

    A miniature mobile robot provides a relatively inexpensive mobile robot. A mobile robot for searching an area provides a way for multiple mobile robots in cooperating teams. A robotic system with a team of mobile robots communicating information among each other provides a way to locate a source in cooperation. A mobile robot with a sensor, a communication system, and a processor, provides a way to execute a strategy for searching an area.

  2. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    NASA Astrophysics Data System (ADS)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform 3D reconstruction from monocular vision, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.

  3. Positioning challenges in reconfigurable semi-autonomous robotic NDE inspection

    NASA Astrophysics Data System (ADS)

    Pierce, S. Gareth; Dobie, Gordon; Summan, Rahul; Mackenzie, Liam; Hensman, James; Worden, Keith; Hayward, Gordon

    2010-03-01

    This paper describes work conducted into mobile, wireless, semi-autonomous NDE inspection robots developed at The University of Strathclyde as part of the UK Research Centre for Non-Destructive Evaluation (RCNDE). The inspection vehicles can incorporate a number of different NDE payloads, including ultrasonic, eddy-current, visual and magnetic-based payloads, and have been developed to try to improve NDE inspection techniques in challenging inspection areas (for example, oil, gas, and nuclear structures). A significant research challenge remains in the accurate positioning and guidance of such vehicles for real inspection tasks. Employing both relative and absolute position measurements, we discuss a number of approaches to position estimation, including Kalman and particle filtering. Using probabilistic approaches enables a common mathematical framework to be employed for both positioning and data fusion from different NDE sensors. In this fashion the uncertainties in both position and defect identification and classification can be dealt with using a consistent approach. A number of practical constraints and considerations for different precision positioning techniques are discussed, along with NDE applications and the potential for improved inspection capabilities by utilising the inherent reconfigurable capabilities of the inspection vehicles.

  4. A New Paradigm for Robotic Rovers

    NASA Astrophysics Data System (ADS)

    Clark, P. E.; Curtis, S. A.; Rilee, M. L.

    We are in the process of developing rovers with the extreme mobility needed to explore remote, rugged terrain. We call these systems Tetrahedral Explorer Technologies (TETs). The architecture is based on conformable tetrahedra, the simplest space-filling form, as building blocks, single or networked, where apices act as nodes from which struts reversibly deploy. The tetrahedral framework acts as a simple skeletal muscular structure. We have already prototyped a simple robotic walker from a single reconfigurable tetrahedron capable of tumbling, and a more evolved 12-Tetrahedral Walker, the Autonomous Landed Investigator (ALI), which has interior nodes for payload, moves more continuously, and is commandable through a user-friendly interface. ALI is an EMS-level mission concept which would allow autonomous in situ exploration of the lunar poles within the next decade. ALI would consist of one or more 12-tetrahedral walkers capable of rapid locomotion with many degrees of freedom and equipped for navigation in the unilluminated, inaccessible and thus largely unexplored rugged terrains where lunar resources are likely to be found: the Polar Regions. ALI walkers would act as roving reconnaissance teams for unexplored regions, analyzing samples along the way.

  5. Adaptive and mobile ground sensor array.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  6. Design of an autonomous teleoperated cargo transporting vehicle for lunar base operations

    NASA Technical Reports Server (NTRS)

    Holt, James; Lao, Tom; Monali, Nkoy

    1989-01-01

    At the turn of the century NASA plans to begin construction of a lunar base. The base will likely consist of developed areas (i.e., habitation, laboratory, landing and launching sites, power plant) separated from each other due to safety considerations. The Self-Repositioning Track Vehicle (SRTV) was designed to transport cargo between these base facilities. The SRTV operates by using two robotic arms to raise and position segments of track upon which the vehicle travels. The SRTV utilizes the semiautonomous mobility (SAM) method of teleoperation; actuator-controlled interlocking track sections; two robotic arms each with five degrees of freedom; and these materials: titanium for structural members and aluminum for shell members, with the possible use of light-weight, high-strength composites.

  7. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331

  8. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.
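
    As a rough illustration of the prediction step described above (a sketch only, not the authors' implementation; the state layout, class name and noise values are assumptions), a constant-acceleration Kalman filter can supply the predicted target position around which candidate patches for the compressive tracker are sampled:

```python
# Sketch of a constant-acceleration ("second-order") Kalman filter that predicts
# the target position so candidate patches can be sampled around it.
# Not the authors' implementation: state layout and noise values are assumptions.
import numpy as np

class ConstantAccelerationKF:
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=4.0):
        # State: [x, y, vx, vy, ax, ay]; measurement: pixel position [x, y].
        f = np.array([[1.0, dt, 0.5 * dt * dt],
                      [0.0, 1.0, dt],
                      [0.0, 0.0, 1.0]])
        self.F = np.kron(f, np.eye(2))                  # 6x6 transition matrix
        self.H = np.hstack([np.eye(2), np.zeros((2, 4))])
        self.Q = process_var * np.eye(6)                # process noise
        self.R = meas_var * np.eye(2)                   # measurement noise
        self.x = np.zeros(6)
        self.P = np.eye(6) * 100.0                      # large initial uncertainty

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                               # predicted (x, y) position

    def update(self, measured_xy):
        z = np.asarray(measured_xy, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

    In such a sketch, predict() would be called once per frame to centre the patch-sampling window, and update() would be fed the position returned by the CT classifier.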

  9. A New Simulation Framework for Autonomy in Robotic Missions

    NASA Technical Reports Server (NTRS)

    Flueckiger, Lorenzo; Neukom, Christian

    2003-01-01

    Autonomy is a key factor in remote robotic exploration and there is significant activity addressing the application of autonomy to remote robots. It has become increasingly important to have simulation tools available to test the autonomy algorithms. While industrial robotics benefits from a variety of high quality simulation tools, researchers developing autonomous software are still dependent primarily on block-world simulations. The Mission Simulation Facility (MSF) project addresses this shortcoming with a simulation toolkit that will enable developers of autonomous control systems to test their system's performance against a set of integrated, standardized simulations of NASA mission scenarios. MSF provides a distributed architecture that connects the autonomous system to a set of simulated components replacing the robot hardware and its environment.

  10. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    SpacePRIDE Team members Chris Williamson, right, and Rob Moore, second from right, answer questions from 8th grade Sullivan Middle School (Mass.) students about their robot on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. SpacePRIDE's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  11. A biologically inspired meta-control navigation system for the Psikharpax rat robot.

    PubMed

    Caluwaerts, K; Staffa, M; N'Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M

    2012-06-01

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations. But the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which was the optimal strategy in each situation, by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment (recognized as new contexts) and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics.
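
    A minimal sketch of the strategy-selection idea (not the Psikharpax code; strategy names, learning rate and exploration rate are assumptions) is a per-context table of learned strategy values updated from reward:

```python
# Minimal sketch of a strategy-selection meta-controller (not the Psikharpax
# code): one learned value per (context, strategy) pair, epsilon-greedy choice,
# and a running reward estimate. Strategy names and rates are assumptions.
import random
from collections import defaultdict

class StrategySelector:
    def __init__(self, strategies=("planning", "taxon"), alpha=0.1, epsilon=0.1):
        self.strategies = strategies
        self.alpha, self.epsilon = alpha, epsilon
        self.q = defaultdict(float)      # (context, strategy) -> estimated value

    def select(self, context):
        if random.random() < self.epsilon:               # occasional exploration
            return random.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.q[(context, s)])

    def update(self, context, strategy, reward):
        # Move the stored estimate toward the observed reward.
        key = (context, strategy)
        self.q[key] += self.alpha * (reward - self.q[key])
```

    When the context detector reports a previously experienced context, the stored values for that context immediately restore the earlier strategy preference.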

  12. Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory

    PubMed Central

    Vega, Julio; Perdices, Eduardo; Cañas, José M.

    2013-01-01

    Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333

  13. Examples of design and achievement of vision systems for mobile robotics applications

    NASA Astrophysics Data System (ADS)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and to achieve a multiple purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six, four (SONY's AIBO) legged robots, and humanoid), flying robots (to inspect bridges for example) in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software, or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to perform, results appear satisfactory. We proposed a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data flow programming model, the gathering of local processing when it is possible, the simplification of computations, and the use of fast access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
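
    For reference, a naive software version of a gradient-plus-threshold edge operator might look as follows (a sketch only; the Sobel kernel and threshold are assumptions and not the paper's exact operator chain, whose hardware version maps a comparable sequence onto CPLDs):

```python
# Naive software sketch of a gradient-plus-threshold edge operator; kernel
# choice and threshold are assumptions, not the paper's exact operator chain.
import numpy as np

def sobel_edges(gray, threshold=100.0):
    """gray: 2-D float array of pixel intensities; returns a boolean edge map."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    # Convolve over the image interior; border pixels are left at zero.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold
```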

  14. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Technology for the Creation of Virtual objects in the Real World

    DTIC Science & Technology

    2005-12-14

    Fragmentary excerpt describing system units: control of position/orientation of mobile TV cameras; a force interaction system (Unit 9); helmet-mounted displays (Unit 6), with two displays built into the helmet; a master arm (Unit 1) whose joint coordinates are tracked by the virtual manipulator; and a special device for simulating the tactile-kinaesthetic effect of immersion. When the virtual body is a manipulator, it comprises a master arm.

  15. Object classification for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Regensburger, Uwe; Graefe, Volker

    1991-03-01

    Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.

  16. Automated Cartography by an Autonomous Mobile Robot Using Ultrasonic Range Finders

    DTIC Science & Technology

    1993-09-01

  17. Development and Testing of a Hybrid Wheg (trademark)-Mobile Platform for Autonomous Surf-Zone Operations

    DTIC Science & Technology

    2011-12-01

    Fragmentary excerpt (list-of-figures entries and body text): force body diagrams of a wheel and of a person climbing stairs; a plot of the height of the center above ground versus rotation angle for a wheel and a Wheg. The body text notes that a robot with a tail was able to climb an obstacle six centimeters higher than a similar robot with six Whegs [6], the addition of the tail having shifted the center of mass.

  18. Equipment Proposal for the Autonomous Vehicle Systems Laboratory at UIW

    DTIC Science & Technology

    2015-04-29

    Fragmentary excerpt: the proposed equipment includes 38 Lego Mindstorm EV3 kits and HiTechnic sensors for use in feedback control and autonomous systems instruction for STEM undergraduate and high school students. A robotics workshop on building autonomous robots with the Lego Mindstorm EV3 will be used as a pilot study for next summer.

  19. Autonomous Robotic Weapons: US Army Innovation for Ground Combat in the Twenty-First Century

    DTIC Science & Technology

    2015-05-21

  20. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    A visitor to the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event helps demonstrate how a NASA rover design enables the rover to climb over obstacles higher than its own body on Saturday, June 16, 2012 at WPI in Worcester, Mass. The event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  1. Regolith Advanced Surface Systems Operations Robot (RASSOR) Phase 2 and Smart Autonomous Sand-Swimming Excavator

    NASA Technical Reports Server (NTRS)

    Sandy, Michael

    2015-01-01

    The Regolith Advanced Surface Systems Operations Robot (RASSOR) Phase 2 is an excavation robot for mining regolith on a planet like Mars. The robot is programmed using the Robotic Operating System (ROS) and it also uses a physical simulation program called Gazebo. This internship focused on various functions of the program in order to make it a more professional and efficient robot. During the internship another project called the Smart Autonomous Sand-Swimming Excavator was worked on. This is a robot that is designed to dig through sand and extract sample material. The intern worked on programming the Sand-Swimming robot, and designing the electrical system to power and control the robot.

  2. Explosive vapor detection payload for small robots

    NASA Astrophysics Data System (ADS)

    Stimac, Phil J.; Pettit, Michael; Wetzel, John P.; Haas, John W.

    2013-05-01

    Detection of explosive hazards is a critical component of enabling and improving operational mobility and protection of US Forces. The Autonomous Mine Detection System (AMDS) developed by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is addressing this challenge for dismounted soldiers. Under the AMDS program, ARA has developed a vapor sampling system that enhances the detection of explosive residues using commercial-off-the-shelf (COTS) sensors. The Explosives Hazard Trace Detection (EHTD) payload is designed for plug-and-play installation and operation on small robotic platforms, addressing critical Army needs for more safely detecting concealed or exposed explosives in areas such as culverts, walls and vehicles. In this paper, we describe the development, robotic integration and performance of the explosive vapor sampling system, which consists of a sampling "head," a vapor transport tube and an extendable "boom." The sampling head and transport tube are integrated with the boom, allowing samples to be collected from targeted surfaces up to 7-ft away from the robotic platform. During sample collection, an IR lamp in the sampling head is used to heat a suspected object/surface and the vapors are drawn through the heated vapor transport tube to an ion mobility spectrometer (IMS) for detection. The EHTD payload is capable of quickly (less than 30 seconds) detecting explosives such as TNT, PETN, and RDX at nanogram levels on common surfaces (brick, concrete, wood, glass, etc.).

  3. ANSO study: evaluation in an indoor environment of a mobile assistance robotic grasping arm.

    PubMed

    Coignard, P; Departe, J P; Remy Neris, O; Baillet, A; Bar, A; Drean, D; Verier, A; Leroux, C; Belletante, P; Le Guiet, J L

    2013-12-01

    To evaluate the reliability and functional acceptability of the "Synthetic Autonomous Majordomo" (SAM) robotic aid system (a mobile Neobotix base equipped with a semi-automatic vision interface and a Manus robotic arm). An open, multicentre, controlled study. We included 29 tetraplegic patients (23 patients with spinal cord injuries, 3 with locked-in syndrome and 4 with other disorders; mean ± SD age: 37.83 ± 13.3) and 34 control participants (mean ± SD age: 32.44 ± 11.2). The reliability of the user interface was evaluated in three multi-step scenarios: selection of the room in which the object to be retrieved was located (in the presence or absence of visual control by the user), selection of the object to be retrieved, the grasping of the object itself and the robot's return to the user with the object. A questionnaire was used to assess the robot's user acceptability. The SAM system was stable and reliable: both patients and control participants experienced few failures when completing the various stages of the scenarios. The graphic interface was effective for selecting and grasping the object, even in the absence of visual control. Users and carers were generally satisfied with SAM, although only a quarter of patients said that they would consider using the robot in their activities of daily living. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  4. Floor Covering and Surface Identification for Assistive Mobile Robotic Real-Time Room Localization Application

    PubMed Central

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-01-01

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification. PMID:24351647

  5. Floor covering and surface identification for assistive mobile robotic real-time room localization application.

    PubMed

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-12-17

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification.
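
    A hypothetical sketch of the classification stage is shown below; the feature construction, dimensions and the k-NN classifier are illustrative assumptions rather than the authors' pipeline:

```python
# Hypothetical sketch of the classification stage: texture statistics from the
# optical mouse sensor and the four photodiode readings are concatenated into
# one feature vector and fed to an off-the-shelf classifier. Feature choices,
# dimensions and the k-NN classifier are illustrative, not the paper's pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def make_feature(texture_patch, rgbc):
    """texture_patch: small grayscale array from the mouse sensor;
    rgbc: four raw readings from the colour photodiode."""
    t = np.asarray(texture_patch, dtype=float).ravel()
    texture_stats = [t.mean(), t.std(), np.abs(np.diff(t)).mean()]
    return np.concatenate([texture_stats, np.asarray(rgbc, dtype=float)])

# Training data would be gathered while driving through rooms with known labels;
# random placeholder values are used here purely for illustration.
X_train = np.random.rand(200, 7)
y_train = np.random.randint(0, 35, 200)          # 35 flooring classes
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
room_class = clf.predict([make_feature(np.random.rand(8, 8), [0.3, 0.4, 0.2, 0.9])])
```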

  6. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    PubMed

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In a future intelligent factory, a robotic manipulator must work efficiently and safely in a Human-Robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue which must be resolved first in the process of improving robotic manipulator intelligence. Among the path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm based on random sampling has been widely applied in dynamic path planning for a high-dimensional robotic manipulator, especially in a complex environment, because of its probabilistic completeness, perfect expansion, and fast exploring speed over other planning methods. However, the existing RRT algorithm has a limitation in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method biases node extension toward a target direction and can increase the sampling speed and efficiency of RRT dramatically. A path optimization strategy based on the maximum curvature constraint is presented to generate a smooth and curved continuous executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation and a Robot Operating System (ROS) dynamic simulation environment as well as a real autonomous obstacle avoidance experiment in a dynamic unstructured environment for a robotic manipulator. The proposed method not only provides great practical engineering significance for a robotic manipulator's obstacle avoidance in an intelligent factory, but also offers theoretical reference value for the path planning of other types of robots.
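
    To make the underlying mechanism concrete, the following is a minimal 2-D RRT sketch with goal bias; the paper's S-RRT plans in the manipulator's joint space and adds directional extension and curvature-constrained smoothing, which are not reproduced here, and the collision_free callback and parameters are assumptions:

```python
# Minimal 2-D RRT with goal bias, to illustrate the sampling-and-extension idea;
# the collision_free callback, step size and bounds are assumptions, and the
# joint-space extension and curvature-constrained smoothing of S-RRT are omitted.
import math
import random

def rrt(start, goal, collision_free, bounds, step=0.5, goal_bias=0.1, iters=5000):
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # With a small probability sample the goal itself (goal bias).
        if random.random() < goal_bias:
            sample = goal
        else:
            sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Find the nearest tree node and extend one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not collision_free(nodes[i], new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:              # close enough: back-track path
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Example: obstacle-free 10 x 10 workspace.
path = rrt((0.0, 0.0), (9.0, 9.0), lambda a, b: True, bounds=((0, 10), (0, 10)))
```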

  7. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  8. Development of a semi-autonomous service robot with telerobotic capabilities

    NASA Technical Reports Server (NTRS)

    Jones, J. E.; White, D. R.

    1987-01-01

    The importance to the United States of semi-autonomous systems for application to a large number of manufacturing and service processes is very clear. Two principal reasons emerge as the primary driving forces for development of such systems: enhanced national productivity and operation in environments which are hazardous to humans. Completely autonomous systems may not currently be economically feasible. However, autonomous systems that operate in a limited operation domain or that are supervised by humans are within the technology capability of this decade and will likely provide reasonable return on investment. The two research and development efforts of autonomy and telerobotics are distinctly different, yet interconnected. The first addresses the communication of an intelligent electronic system with a robot while the second requires human communication and ergonomic consideration. Discussed here is work in robotic control, human/robot team implementation, expert system robot operation, and sensor development by the American Welding Institute, MTS Systems Corporation, and the Colorado School of Mines--Center for Welding Research.

  9. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.

  10. Controlling the autonomy of a reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David

    2004-09-01

    In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environment. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built a software that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual controls, obstacle avoidance, wall and corridor following, way point and planned travelling. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environment and discuss our planned future improvements.
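
    The mode-swapping idea can be sketched as follows (hypothetical names, not HARPIC's API); mapping and localization are assumed to run independently of the selected mode:

```python
# Illustrative sketch of run-time mode swapping (names are hypothetical, not
# HARPIC's API); mapping and localisation are assumed to run regardless of mode.
class ModeController:
    MODES = ("manual", "safeguarded", "autonomous")

    def __init__(self, robot):
        self.robot = robot
        self.mode = "manual"

    def set_mode(self, mode):
        if mode in self.MODES:
            self.mode = mode

    def step(self, operator_cmd, obstacle_close, planned_cmd):
        if self.mode == "manual":
            cmd = operator_cmd                   # operator drives directly
        elif self.mode == "safeguarded":
            # Operator drives, but motion is vetoed near obstacles.
            cmd = (0.0, 0.0) if obstacle_close else operator_cmd
        else:
            cmd = planned_cmd                    # behaviour-based autonomy
        self.robot.send_velocity(cmd)
```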

  11. Spectrally Queued Feature Selection for Robotic Visual Odometery

    DTIC Science & Technology

    2010-11-23

    Fragmentary excerpt from the report's introduction (sections 1.1 "Uses of Autonomous Vehicles" and 1.2 "Biological Influence"): autonomous vehicles have a wide range of possible applications; in military situations, they are valued for their ability to keep Soldiers far away from danger, and a robot that can inspect and disarm is just a glimpse of what engineers are hoping for in the future.

  12. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B.

    1991-01-01

    Sensor based robot motion planning research has primarily focused on mobile robots. Consider, however, the case of a robot manipulator expected to operate autonomously in a dynamic environment where unexpected collisions can occur with many parts of the robot. Only a sensor based system capable of generating collision free paths would be acceptable in such situations. Recently, work in this area has been reported in which a deterministic solution for 2DOF systems has been generated. The arm was sensitized with 'skin' of infra-red sensors. We have proposed a heuristic (potential field based) methodology for redundant robots with large DOF's. The key concepts are solving the path planning problem by cooperating global and local planning modules, the use of complete information from the sensors and partial (but appropriate) information from a world model, representation of objects with hyper-ellipsoids in the world model, and the use of variational planning. We intend to sensitize the robot arm with a 'skin' of capacitive proximity sensors. These sensors were developed at NASA, and are exceptionally suited for the space application. In the first part of the report, we discuss the development and modeling of the capacitive proximity sensor. In the second part we discuss the motion planning algorithm.
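
    A minimal potential-field step of the kind referred to above might look as follows (a sketch under assumed gains and influence distance; the proposed system applies the same idea in configuration space using the capacitive skin readings):

```python
# Minimal potential-field step: an attractive gradient toward the goal plus
# repulsive gradients from sensed obstacles. Gains and the influence distance
# are assumptions; the proposed system applies the idea in configuration space.
import numpy as np

def potential_field_step(position, goal, obstacles,
                         k_att=1.0, k_rep=1.0, influence=1.0, step=0.05):
    position = np.asarray(position, dtype=float)
    goal = np.asarray(goal, dtype=float)
    force = k_att * (goal - position)                 # attractive term
    for obs in obstacles:
        diff = position - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                      # repel only when close
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    return position if norm < 1e-9 else position + step * force / norm
```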

  13. Advances in Robotic, Human, and Autonomous Systems for Missions of Space Exploration

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Briggs, Geoffrey A.; Glass, Brian J.; Pedersen, Liam; Kortenkamp, David M.; Wettergreen, David S.; Nourbakhsh, I.; Clancy, Daniel J.; Zornetzer, Steven (Technical Monitor)

    2002-01-01

    Space exploration missions are evolving toward more complex architectures involving more capable robotic systems, new levels of human and robotic interaction, and increasingly autonomous systems. How this evolving mix of advanced capabilities will be utilized in the design of new missions is a subject of much current interest. Cost and risk constraints also play a key role in the development of new missions, resulting in a complex interplay of a broad range of factors in the mission development and planning of new missions. This paper will discuss how human, robotic, and autonomous systems could be used in advanced space exploration missions. In particular, a recently completed survey of the state of the art and the potential future of robotic systems, as well as new experiments utilizing human and robotic approaches will be described. Finally, there will be a discussion of how best to utilize these various approaches for meeting space exploration goals.

  14. Development of autonomous grasping and navigating robot

    NASA Astrophysics Data System (ADS)

    Kudoh, Hiroyuki; Fujimoto, Keisuke; Nakayama, Yasuichi

    2015-01-01

    The ability to find and grasp target items in an unknown environment is important for working robots. We developed an autonomous navigating and grasping robot. The operations are locating a requested item, moving to where the item is placed, finding the item on a shelf or table, and picking the item up from the shelf or the table. To achieve these operations, we designed the robot with three functions: an autonomous navigating function that generates a map and a route in an unknown environment, an item position recognizing function, and a grasping function. We tested this robot in an unknown environment. It achieved a series of operations: moving to a destination, recognizing the positions of items on a shelf, picking up an item, placing it on a cart with its hand, and returning to the starting location. The results of this experiment show the potential of such robots to reduce the required workforce.

  15. Designing of routing algorithms in autonomous distributed data transmission system for mobile computing devices with ‘WiFi-Direct’ technology

    NASA Astrophysics Data System (ADS)

    Nikitin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.; Botygin, I. A.

    2017-02-01

    The results of research on existing routing protocols in wireless networks and their main features are discussed in the paper. Based on these protocol data, routing protocols for wireless networks, including search routing algorithms and phone directory exchange algorithms, are designed with the ‘WiFi-Direct’ technology. Algorithms that work without the IP protocol were designed, which increases their efficiency because the devices operate only with MAC addresses. The developed algorithms are expected to be used in mobile software engineering with the Android platform taken as the base. Simpler algorithms and message formats than those of the well-known routing protocols, together with the rejection of the IP protocol, make it possible to use the developed protocols on more primitive mobile devices. Implementing the protocols in industry makes it possible to create data transmission networks among workplaces and mobile robots without any access points.
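
    The IP-free routing idea can be sketched as a next-hop table keyed purely by MAC addresses (message formats and method names below are invented for illustration and are not the paper's protocol):

```python
# Sketch of IP-free, MAC-keyed routing state; message formats and method names
# are invented for illustration and are not the protocol defined in the paper.
class MacRouter:
    def __init__(self, own_mac):
        self.own_mac = own_mac
        self.routes = {}          # destination MAC -> (next-hop MAC, hop count)

    def learn(self, dest_mac, via_mac, hops):
        # Keep the route with the fewest hops for each destination.
        best = self.routes.get(dest_mac)
        if best is None or hops < best[1]:
            self.routes[dest_mac] = (via_mac, hops)

    def next_hop(self, dest_mac):
        entry = self.routes.get(dest_mac)
        return None if entry is None else entry[0]

    def merge_directory(self, neighbour_mac, neighbour_table):
        # Directory exchange: adopt a neighbour's routes with one extra hop.
        for dest, (_, hops) in neighbour_table.items():
            if dest != self.own_mac:
                self.learn(dest, neighbour_mac, hops + 1)
```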

  16. Autonomous planetary rover at Carnegie Mellon

    NASA Technical Reports Server (NTRS)

    Whittaker, William; Kanade, Takeo; Mitchell, Tom

    1990-01-01

    This report describes progress in research on an autonomous robot for planetary exploration. In 1989, the year covered by this report, a six-legged walking robot, the Ambler, was configured, designed, and constructed. This configuration was used to overcome shortcomings exhibited by existing wheeled and walking robot mechanisms. The fundamental advantage of the Ambler is that the actuators for body support are independent of those for propulsion; a subset of the planar joints propel the body, and the vertical actuators support and level the body over terrain. Models of the Ambler's dynamics were developed and the leveling control was studied. An integrated system capable of walking with a single leg over rugged terrain was implemented and tested. A prototype of an Ambler leg is suspended below a carriage that slides along rails. To walk, the system uses a laser scanner to find a clear, flat foothold, positions the leg above the foothold, contacts the terrain with the foot, and applies force enough to advance the carriage along the rails. Walking both forward and backward, the system has traversed hundreds of meters of rugged terrain including obstacles too tall to step over, trenches too deep to step in, closely spaced rocks, and sand hills. In addition, preliminary experiments were conducted with concurrent planning and execution, and a leg recovery planner that generates time and power efficient 3D trajectories using 2D search was developed. A Hero robot was used to demonstrate mobile manipulation. Indoor tasks include collecting cups from the lab floor, retrieving printer output, and recharging when its battery gets low. The robot monitors its environment, and handles exceptional conditions in a robust fashion, using vision to track the appearance and disappearance of cups, onboard sonars to detect imminent collisions, and monitors to detect the battery level.

  17. Autonomy in robots and other agents.

    PubMed

    Smithers, T

    1997-06-01

    The word "autonomous" has become widely used in artificial intelligence, robotics, and, more recently, artificial life and is typically used to qualify types of systems, agents, or robots: we see terms like "autonomous systems," "autonomous agents," and "autonomous robots." Its use in these fields is, however, both weak, with no distinctions being made that are not better and more precisely made with other existing terms, and varied, with no single underlying concept being involved. This ill-disciplined usage contrasts strongly with the use of the same term in other fields such as biology, philosophy, ethics, law, and human rights, for example. In all these quite different areas the concept of autonomy is essentially the same, though the language used and the aspects and issues of concern, of course, differ. In all these cases the underlying notion is one of self-law making and the closely related concept of self-identity. In this paper I argue that the loose and varied use of the term autonomous in artificial intelligence, robotics, and artificial life has effectively robbed these fields of an important concept. A concept essentially the same as we find it in biology, philosophy, ethics, and law, and one that is needed to distinguish a particular kind of agent or robot from those developed and built so far. I suggest that robots and other agents will have to be autonomous, i.e., self-law making, not just self-regulating, if they are to be able effectively to deal with the kinds of environments in which we live and work: environments which have significant large scale spatial and temporal invariant structure, but which also have large amounts of local spatial and temporal dynamic variation and unpredictability, and which lead to the frequent occurrence of previously unexperienced situations for the agents that interact with them.

  18. Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits

    NASA Astrophysics Data System (ADS)

    Snider, Ross K.; Arathorn, David W.

    2006-05-01

    A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.

  19. The effect of collision avoidance for autonomous robot team formation

    NASA Astrophysics Data System (ADS)

    Seidman, Mark H.; Yang, Shanchieh J.

    2007-04-01

    As technology and research advance to the era of cooperative robots, many autonomous robot team algorithms have emerged. Shape formation is a common and critical task in many cooperative robot applications. While theoretical studies of robot team formation have shown success, it is unclear whether such algorithms will perform well in a real-world environment. This work examines the effect of collision avoidance schemes on an ideal circle formation algorithm; the formation behaves similarly if robot-to-robot communications are in place. Our findings reveal that robots with basic collision avoidance capabilities are still able to form into a circle under most conditions. Moreover, the robot sizes, sensing ranges, and other critical physical parameters are examined to determine their effects on the algorithm's performance.
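
    A toy version of the behaviour studied here, with an assumed safety radius and gain, can be simulated as follows (a sketch, not the paper's algorithm):

```python
# Toy simulation of circle formation with a basic collision-avoidance rule:
# each robot moves toward its slot on the circle but is pushed away from any
# neighbour inside a safety radius. Gains and radii are assumptions.
import numpy as np

def step_formation(positions, centre, radius, safety=0.5, gain=0.1):
    positions = np.asarray(positions, dtype=float)
    centre = np.asarray(centre, dtype=float)
    n = len(positions)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    targets = centre + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    new_positions = positions.copy()
    for i in range(n):
        move = gain * (targets[i] - positions[i])        # head for assigned slot
        for j in range(n):
            if i == j:
                continue
            diff = positions[i] - positions[j]
            d = np.linalg.norm(diff)
            if 1e-9 < d < safety:
                move += gain * (safety - d) * diff / d   # back off from neighbour
        new_positions[i] = positions[i] + move
    return new_positions
```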

  20. An integrated design and fabrication strategy for entirely soft, autonomous robots.

    PubMed

    Wehner, Michael; Truby, Ryan L; Fitzgerald, Daniel J; Mosadegh, Bobak; Whitesides, George M; Lewis, Jennifer A; Wood, Robert J

    2016-08-25

    Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.

  1. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Program Manager for Centennial Challenges Sam Ortega helps show a young visitor how to drive a rover as part of the interactive NASA Mars rover exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  2. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck stop to look at the bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" that is wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  3. 3D Laser Scanner for Underwater Manipulation.

    PubMed

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  4. Autonomous Legged Hill and Stairwell Ascent

    DTIC Science & Technology

    2011-11-01

    Fragmentary excerpt (keywords and body text): autonomous robot, hill climbing, stair climbing, sequential composition, hexapod. The excerpt shows an X-RHex robot on a set of stairs carrying laser scanner, IMU, wireless repeater, and handle payloads, making it useful for climbing both hills and stairs, and notes that RHex robots have been climbing single-flight stairs.

  5. Semi autonomous mine detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, countermine sensors, and a number of integrated software packages which provide for real time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude, from an autonomous robotic perspective, the rapid development and deployment of fieldable systems.

  6. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Team KuuKulgur waits to begin the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  7. Mechatronic description of a laser autoguided vehicle for greenhouse operations.

    PubMed

    Sánchez-Hermosilla, Julián; González, Ramón; Rodríguez, Francisco; Donaire, Julián G

    2013-01-08

    This paper presents a novel approach for guiding mobile robots inside greenhouses demonstrated by promising preliminary physical experiments. It represents a comprehensive attempt to use the successful principles of AGVs (auto-guided vehicles) inside greenhouses, but avoiding the necessity of modifying the crop layout, and avoiding having to bury metallic pipes in the greenhouse floor. The designed vehicle can operate different tools, e.g., a spray system for applying plant-protection product, a lifting platform to reach the top part of the plants to perform pruning and harvesting tasks, and a trailer to transport fruits, plants, and crop waste. Regarding autonomous navigation, it follows the idea of AGVs, but now laser emitters are used to mark the desired route. The vehicle development is analyzed from a mechatronic standpoint (mechanics, electronics, and autonomous control).

  8. Two modular neuro-fuzzy system for mobile robot navigation

    NASA Astrophysics Data System (ADS)

    Bobyr, M. V.; Titov, V. S.; Kulabukhov, S. A.; Syryamkin, V. I.

    2018-05-01

    The article considers a fuzzy model for navigation of a mobile robot operating in two modes. In the first mode the mobile robot moves along a line. In the second mode, the mobile robot looks for a target in unknown space. The structure and schematic circuit of the four-wheeled mobile robot are presented. The article describes the movement of the mobile robot based on a two-modular neuro-fuzzy system, gives the algorithm of neuro-fuzzy inference used in the two-modular control system for the movement of the mobile robot, and presents the experimental model of the mobile robot and the simulation of the neuro-fuzzy algorithm used for its control.
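
    A compact sketch of fuzzy steering for the line-following mode is given below; the membership functions and crisp consequents are assumptions, and the weighted-average defuzzification is a zero-order Sugeno-style shortcut rather than the neuro-fuzzy model described in the article:

```python
# Compact fuzzy-steering sketch for the line-following mode. Membership
# functions and consequents are assumptions; the weighted average below is a
# zero-order Sugeno-style shortcut rather than the article's neuro-fuzzy model.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(offset):
    """offset: lateral distance from the line; negative means the line is left."""
    left = tri(offset, -2.0, -1.0, 0.0)       # line is to the left
    centred = tri(offset, -1.0, 0.0, 1.0)     # roughly on the line
    right = tri(offset, 0.0, 1.0, 2.0)        # line is to the right
    weights = [left, centred, right]
    outputs = [-1.0, 0.0, 1.0]                # steer left / straight / steer right
    total = sum(weights)
    return 0.0 if total == 0 else sum(w * o for w, o in zip(weights, outputs)) / total
```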

  9. Intuitive control of mobile robots: an architecture for autonomous adaptive dynamic behaviour integration.

    PubMed

    Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide

    2018-05-01

    In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism, with the ability to adapt both towards the user and the robotic morphology. The aim is for a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their way of preference, moving away from the case where the user has to read and understand an operation manual, or has to learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole. Inherent properties of the architecture are presented and explained. At the same time, emergent properties are presented and investigated. As a whole, this paradigm of control is found to highlight the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human-in-the-loop systems.

  10. BatSLAM: Simultaneous localization and mapping using biomimetic sonar.

    PubMed

    Steckel, Jan; Peremans, Herbert

    2013-01-01

    We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building.

  11. BatSLAM: Simultaneous Localization and Mapping Using Biomimetic Sonar

    PubMed Central

    Steckel, Jan; Peremans, Herbert

    2013-01-01

    We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building. PMID:23365647

  12. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  13. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  14. Real-time Detection of Moving Objects from Moving Vehicles Using Dense Stereo and Optical Flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time. dense stereo system to include realtime. dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop. computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  13. Decentralized control scheme for myriapod robot inspired by adaptive and resilient centipede locomotion.

    PubMed

    Yasui, Kotaro; Sakai, Kazuhiko; Kano, Takeshi; Owaki, Dai; Ishiguro, Akio

    2017-01-01

    Recently, myriapods have attracted the attention of engineers because mobile robots that mimic them potentially have the capability of producing highly stable, adaptive, and resilient behaviors. The major challenge here is to develop a control scheme that can coordinate their numerous legs in real time, and autonomous decentralized control could be the key to solving this problem. Therefore, we focus on real centipedes and aim to design a decentralized control scheme for myriapod robots by drawing inspiration from behavioral experiments on centipede locomotion under unusual conditions. In the behavioral experiments, we observed the response to the removal of a part of the terrain and to amputation of several legs. Further, we determined that the ground reaction force is significant for generating rhythmic leg movements; the motion of each leg is likely affected by sensory input from its neighboring legs. Thus, we constructed a two-dimensional model wherein a simple local reflexive mechanism was implemented in each leg. We performed simulations using this model and demonstrated that the myriapod robot could move adaptively to changes in the environment and body properties. Our findings will shed new light on designing adaptive and resilient myriapod robots that can function under various circumstances.
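    As one hedged illustration of the "simple local reflexive mechanism" idea (not the authors' equations), each leg can be driven by its own oscillator whose phase is slowed by the ground reaction force it senses locally; the gains and the feedback form below are assumptions.

        # Toy local reflex for one leg: a phase oscillator modulated by the locally
        # sensed ground reaction force (all parameters are hypothetical).
        import math

        def leg_phase_step(phase, grf, omega=2.0 * math.pi, sigma=1.5, dt=0.01):
            """Advance one leg's phase; a loaded leg (grf > 0) lingers in stance,
            so inter-leg coordination can emerge through the body and the ground."""
            dphase = omega - sigma * grf * math.cos(phase)
            return (phase + dphase * dt) % (2.0 * math.pi)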

  14. Thermal tracking in mobile robots for leak inspection activities.

    PubMed

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-10-09

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to compensate for the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.
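    A bare-bones sketch of the particle-filter tracking step described above (predict with noise, weight by a measurement likelihood, resample); the thermal-image likelihood used on the real robot is abstracted into a caller-supplied function, and all constants are illustrative.

        # Minimal particle filter for tracking a target position in the image (illustrative).
        import numpy as np

        def particle_filter_step(particles, weights, measurement_likelihood, motion_noise=5.0):
            """particles: (N, 2) candidate (u, v) image positions; weights: (N,) normalized."""
            # Predict: diffuse particles to account for robot and target motion.
            particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
            # Update: weight each particle by how well it explains the current image.
            weights = weights * np.array([measurement_likelihood(p) for p in particles])
            weights = weights / np.sum(weights)
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
                idx = np.random.choice(len(particles), size=len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights

    The weighted mean of the particles then gives the target estimate used to command the camera-carrying arm so that the target stays centred in the image.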

  15. Thermal Tracking in Mobile Robots for Leak Inspection Activities

    PubMed Central

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-01-01

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to compensate for the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system. PMID:24113684

  16. Dusty: an assistive mobile manipulator that retrieves dropped objects for people with motor impairments.

    PubMed

    King, Chih-Hung; Chen, Tiffany L; Fan, Zhengqin; Glass, Jonathan D; Kemp, Charles C

    2012-03-01

    People with physical disabilities have ranked object retrieval as a high-priority task for assistive robots. We have developed Dusty, a teleoperated mobile manipulator that fetches objects from the floor and delivers them to users at a comfortable height. In this paper, we first demonstrate the robot's high success rate (98.4%) when autonomously grasping 25 objects considered important by people with amyotrophic lateral sclerosis (ALS). We tested the robot with each object in five different configurations on five types of flooring. We then present the results of an experiment in which 20 people with ALS operated Dusty. Participants teleoperated Dusty to move around an obstacle, pick up an object and deliver the object to themselves. They successfully completed this task in 59 out of 60 trials (3 trials each) with a mean completion time of 61.4 seconds (SD = 20.5), and reported high overall satisfaction using Dusty (7-point Likert scale; mean 6.8, SD = 0.6). Participants rated Dusty to be significantly easier to use than their own hands, asking family members, and using mechanical reachers (p < 0.03, paired t-tests). Fourteen of the 20 participants reported that they would prefer using Dusty over their current methods.

  17. Dusty: an assistive mobile manipulator that retrieves dropped objects for people with motor impairments

    PubMed Central

    King, Chih-Hung; Chen, Tiffany L; Fan, Zhengqin; Glass, Jonathan D; Kemp, Charles C

    2012-01-01

    People with physical disabilities have ranked object retrieval as a high priority task for assistive robots. We have developed Dusty, a teleoperated mobile manipulator that fetches objects from the floor and delivers them to users at a comfortable height. In this paper, we first demonstrate the robot's high success rate (98.4%) when autonomously grasping 25 objects considered important by people with amyotrophic lateral sclerosis (ALS). We tested the robot with each object in five different configurations on five types of flooring. We then present the results of an experiment in which 20 people with ALS operated Dusty. Participants teleoperated Dusty to move around an obstacle, pick up an object, and deliver the object to themselves. They successfully completed this task in 59 out of 60 trials (3 trials each) with a mean completion time of 61.4 seconds (SD = 20.5 seconds), and reported high overall satisfaction using Dusty (7-point Likert scale; mean 6.8, SD = 0.6). Participants rated Dusty to be significantly easier to use than their own hands, asking family members, and using mechanical reachers (p < 0.03, paired t-tests). Fourteen of the 20 participants reported that they would prefer using Dusty over their current methods. PMID:22013888

  18. Animal-to-robot social attachment: initial requisites in a gallinaceous bird.

    PubMed

    Jolly, L; Pittet, F; Caudal, J-P; Mouret, J-B; Houdelier, C; Lumineau, S; de Margerie, E

    2016-02-04

    Animal-Robot Interaction experiments have demonstrated their usefulness to understand the social behaviour of a growing number of animal species. In order to study the mechanisms of social influences (from parents and peers) on behavioural development, we design an experimental setup where young quail chicks, after hatching, continuously live with autonomous mobile robots in mixed triadic groups of two chicks and one robot. As precocial birds are subject to imprinting, we compare groups where chicks meet the robot as their very first social partner, on their first day after hatching (R chicks), with groups where chicks meet a real conspecific first (C chicks), and the robot later (on the second day after hatching). We measured the behavioural synchronization between chicks and robot over three days. Afterwards, we directly tested the existence of a possible social bond between animal and robot, by performing separation-reunion behavioural tests. R chicks were more synchronized with the robot in their daily feeding-resting activities than C chicks. Moreover, R chicks emitted numerous distress calls when separated from the robot, even in the presence of another chick, whereas C chicks emitted calls only when separated from the other chick. Whether the observed chick-robot attachment bond reflects filial, or sibling-imprinting of chicks towards the robot remains unclear, as the latter process is not fully understood in natural familial groups. Still, these results reveal the necessary initial conditions for stable, cohesive mixed groups of chicks and robots, a promising tool to experiment on the long-term dynamics of social behaviour.

  1. ARV robotic technologies (ART): a risk reduction effort for future unmanned systems

    NASA Astrophysics Data System (ADS)

    Jaster, Jeffrey F.

    2006-05-01

    The Army's ARV (Armed Robotic Vehicle) Robotic Technologies (ART) program is working on the development of various technological thrusts for use in the robotic forces of the future. The ART program will develop, integrate and demonstrate the technology required to advance the maneuver technologies (i.e., perception, mobility, tactical behaviors) and increase the survivability of unmanned platforms for the future force, while focusing on reducing the soldiers' burden by providing an increase in vehicle autonomy coinciding with a decrease in the total number of user interventions required to control the unmanned assets. This program will advance the state of the art in perception technologies to provide the unmanned platform an increasingly accurate view of the terrain that surrounds it, while developing tactical/mission behavior technologies to provide the Unmanned Ground Vehicle (UGV) the capability to maneuver tactically, in conjunction with the manned systems, in an autonomous mode. The ART testbed will be integrated with the advanced technology software and associated hardware developed under this effort, and will incorporate appropriate mission modules (e.g. RSTA sensors, MILES, etc.) to support Warfighter experiments and evaluations (virtual and field) in a militarily significant environment (open/rolling and complex/urban terrain). The outcome of these experiments, as well as other lessons learned throughout the program life cycle, will be used to reduce the current risks identified for the future UGV systems that will be developed under the Future Combat Systems (FCS) program, including the early integration of an FCS-like autonomous navigation system onto a tracked skid-steer platform.

  2. Behavior-based multi-robot collaboration for autonomous construction tasks

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew

    2005-01-01

    The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.

  3. Two-dimensional laser servoing for precision motion control of an ODV robotic license plate recognition system

    NASA Astrophysics Data System (ADS)

    Song, Zhen; Moore, Kevin L.; Chen, YangQuan; Bahl, Vikas

    2003-09-01

    As an outgrowth of a series of projects focused on the mobility of unmanned ground vehicles (UGV), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a "marsupial mothership" for the ODIS vehicles and performs coarse resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D laser scanner-based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in the range of the laser during the scan, the data is matched to a "bumper box" corresponding to where a car bumper is expected, resulting in a point cloud of data corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms are used to determine a line for the data in each stall's "bumper box." The fitting technique uses Hough-based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with an acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real time.
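    A hedged re-creation of the line-fitting stage only: laser returns inside a stall's "bumper box" are rasterized into a small occupancy image and a dominant line is fitted with OpenCV's probabilistic Hough transform. The grid size, thresholds and gating box are assumptions, not the original implementation.

        # Illustrative Hough-based bumper line fit from 2D laser points (not the T4 code).
        import cv2
        import numpy as np

        def fit_bumper_line(points_m, box_min, box_max, cell=0.02):
            """points_m: (N, 2) laser returns in metres; box_min/box_max bound the bumper box."""
            box_min, box_max = np.asarray(box_min, float), np.asarray(box_max, float)
            pts = points_m[np.all((points_m >= box_min) & (points_m < box_max), axis=1)]
            if len(pts) < 10:
                return None
            nx, ny = np.ceil((box_max - box_min) / cell).astype(int)
            img = np.zeros((ny, nx), dtype=np.uint8)
            cx, cy = ((pts - box_min) / cell).astype(int).T
            img[cy, cx] = 255
            lines = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=20,
                                    minLineLength=15, maxLineGap=5)
            if lines is None:
                return None
            x1, y1, x2, y2 = lines[0][0]
            return np.array([x1, y1, x2, y2], float) * cell + np.tile(box_min, 2)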

  4. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1990-01-01

    A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.

  5. An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence

    NASA Technical Reports Server (NTRS)

    Lindley, Craig A.

    1993-01-01

    This paper describes a method for the design of autonomous spacecraft, based upon behavioral approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Agency technological center (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioral control framework, and different mappings from the logical control model to its physical implementation.

  6. Autonomous assistance navigation for robotic wheelchairs in confined spaces.

    PubMed

    Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F

    2010-01-01

    In this work, a visual interface for the assistance of a robotic wheelchair's navigation is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor-ends. The interface provides two navigation modes: non-autonomous and autonomous. The non-autonomous driving of the robotic wheelchair is performed by means of a hand joystick. The joystick directs the motion of the vehicle within the environment. The autonomous driving is performed when the user of the wheelchair has to turn (90, 90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by the SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the disposition of the environment and the pose (position and orientation) of the wheelchair within the environment. Experimental and statistical results of the interface are also shown in this work.

  7. Crew/Robot Coordinated Planetary EVA Operations at a Lunar Base Analog Site

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Bluethmann, W. J.; Delgado, F. J.; Herrera, E.; Kosmo, J. J.; Janoiko, B. A.; Wilcox, B. H.; Townsend, J. A.; Matthews, J. B.

    2007-01-01

    Under the direction of NASA's Exploration Technology Development Program, robots and space-suited subjects from several NASA centers recently completed a very successful demonstration of coordinated activities indicative of base camp operations on the lunar surface. For these activities, NASA chose a site near Meteor Crater, Arizona, close to where Apollo Astronauts previously trained. The main scenario demonstrated crew returning from a planetary EVA (extra-vehicular activity) to a temporary base camp and entering a pressurized rover compartment while robots performed tasks in preparation for the next EVA. Scenario tasks included: rover operations under direct human control and autonomous modes, crew ingress and egress activities, autonomous robotic payload removal and stowage operations under both local control and remote control from Houston, and autonomous robotic navigation and inspection. In addition to the main scenario, participants had an opportunity to explore additional robotic operations: hill climbing, maneuvering heavy loads, gathering geological samples, drilling, and tether operations. In this analog environment, the suited subjects and robots experienced high levels of dust, rough terrain, and harsh lighting.

  8. Comparative analysis of ROS-based monocular SLAM methods for indoor navigation

    NASA Astrophysics Data System (ADS)

    Buyval, Alexander; Afanasyev, Ilya; Magid, Evgeni

    2017-03-01

    This paper presents a comparison of four recent ROS-based monocular SLAM-related methods: ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM, and analyzes their feasibility for a mobile robot application in indoor environments. We tested these methods using video data that was recorded from a conventional wide-angle full HD webcam with a rolling shutter. The camera was mounted on a human-operated prototype of an unmanned ground vehicle, which followed a closed-loop trajectory. Both feature-based methods (ORB-SLAM, REMODE) and direct SLAM-related algorithms (LSD-SLAM, DPPTAM) demonstrated reasonably good results in detection of volumetric objects, corners, obstacles and other local features. However, we encountered difficulties in recovering the homogeneously colored walls typical of offices, since all of these methods created empty spaces in the reconstructed sparse 3D scene. This may cause collisions of an autonomously guided robot with unfeatured walls and thus limits the applicability of maps obtained by the considered monocular SLAM-related methods for indoor robot navigation.
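    The paper's own evaluation code is not given; as one hedged example of how a closed-loop run like this is often scored, the end-to-start drift of the estimated trajectory can be reported (in the arbitrary scale of a monocular estimate).

        # Illustrative closed-loop drift metric for a monocular SLAM trajectory estimate.
        import numpy as np

        def closed_loop_drift(positions):
            """positions: (N, 3) estimated camera positions for a run that starts and
            ends at the same physical spot; smaller drift means less accumulated error."""
            positions = np.asarray(positions, dtype=float)
            return float(np.linalg.norm(positions[-1] - positions[0]))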

  9. Guaranteeing safety in spatially situated agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohout, R.C.; Hendler, J.A.; Musliner, D.J.

    1996-12-31

    "Mission-critical" systems, which include such diverse applications as nuclear power plant controllers, "fly-by-wire" airplanes, medical care and monitoring systems, and autonomous mobile vehicles, are characterized by the fact that system failure is potentially catastrophic. The high cost of failure justifies the expenditure of considerable effort at design-time in order to guarantee the correctness of system behavior. This paper examines the problem of guaranteeing safety in a well-studied class of robot motion problems known as the "asteroid avoidance problem." We establish necessary and sufficient conditions for ensuring safety in the simple version of this problem which occurs most frequently in the literature, as well as sufficient conditions for a more general and realistic case. In doing so, we establish functional relationships between the number, size and speed of obstacles, the robot's maximum speed and the conditions which must be maintained in order to ensure safety.

  10. Bioinspired microrobots

    NASA Astrophysics Data System (ADS)

    Palagi, Stefano; Fischer, Peer

    2018-06-01

    Microorganisms can move in complex media, respond to the environment and self-organize. The field of microrobotics strives to achieve these functions in mobile robotic systems of sub-millimetre size. However, miniaturization of traditional robots and their control systems to the microscale is not a viable approach. A promising alternative strategy in developing microrobots is to implement sensing, actuation and control directly in the materials, thereby mimicking biological matter. In this Review, we discuss design principles and materials for the implementation of robotic functionalities in microrobots. We examine different biological locomotion strategies, and we discuss how they can be artificially recreated in magnetic microrobots and how soft materials improve control and performance. We show that smart, stimuli-responsive materials can act as on-board sensors and actuators and that `active matter' enables autonomous motion, navigation and collective behaviours. Finally, we provide a critical outlook for the field of microrobotics and highlight the challenges that need to be overcome to realize sophisticated microrobots, which one day might rival biological machines.

  11. Fully decentralized control of a soft-bodied robot inspired by true slime mold.

    PubMed

    Umedachi, Takuya; Takeda, Koichi; Nakagaki, Toshiyuki; Kobayashi, Ryo; Ishiguro, Akio

    2010-03-01

    Animals exhibit astoundingly adaptive and supple locomotion under real-world constraints. In order to endow robots with similar capabilities, we must implement many degrees of freedom, equivalent to animals, into the robots' bodies. For taming many degrees of freedom, the concept of autonomous decentralized control plays a pivotal role. However, a systematic way of designing such an autonomous decentralized control system is still missing. Aiming at understanding the principles that underlie animals' locomotion, we have focused on a true slime mold, a primitive living organism, and extracted a design scheme for an autonomous decentralized control system. In order to validate this design scheme, this article presents a soft-bodied amoeboid robot inspired by the true slime mold. Significant features of this robot are twofold: (1) the robot has a truly soft and deformable body stemming from real-time tunable springs and protoplasm, the former used for the outer skin of the body and the latter to satisfy the law of conservation of mass; and (2) fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism is realized by exploiting the long-distance physical interaction between the body parts stemming from the law of conservation of protoplasmic mass. Simulation results show that this robot exhibits highly supple and adaptive locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on design methodology for autonomous decentralized control systems.
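    A toy sketch, under assumed dynamics, of the core idea: each unit runs its own oscillator and feels only a local mismatch, yet all units are physically coupled because the total "protoplasm" is conserved. The constants and the feedback form are illustrative, not the published model.

        # Decentralized units coupled only through a conserved total volume (illustrative).
        import math

        def step(phases, target_volume, omega=2.0 * math.pi, sigma=0.8, dt=0.005):
            # Each unit tries to inflate or deflate according to its own phase.
            desired = [1.0 + 0.3 * math.sin(p) for p in phases]
            # Conservation of protoplasmic mass: rescale so the total stays fixed,
            # which couples all units mechanically without any explicit signalling.
            scale = target_volume / sum(desired)
            actual = [scale * d for d in desired]
            # Local feedback: each unit senses only its own mismatch and adjusts its phase.
            phases = [p + (omega - sigma * (a - d) * math.cos(p)) * dt
                      for p, d, a in zip(phases, desired, actual)]
            return phases, actual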

  12. Tandem robot control system and method for controlling mobile robots in tandem

    DOEpatents

    Hayward, David R.; Buttz, James H.; Shirey, David L.

    2002-01-01

    A control system for controlling mobile robots provides a way to control mobile robots, connected in tandem with coupling devices, to navigate across difficult terrain or in closed spaces. The mobile robots can be controlled cooperatively as a coupled system in linked mode or controlled individually as separate robots.

  13. Issues Concerning The Development Of A Mobile Platform For Health Care Applications

    NASA Astrophysics Data System (ADS)

    Korba, Larry W.; Liscano, Ramiro; Green, David; Durie, Nelson

    1989-03-01

    There are a number of problems that must yet be overcome before robotic technology can be applied in a hospital or a home care setting. The four basic problems are: cost, safety, finding appropriate applications and developing application specific solutions. Advanced robotics technology is now costly because of the complexity associated with autonomous systems. In any application, it is most important that the safety of the individuals using or exposed to the vehicle is ensured. Often in the health care field, innovative and useful new devices require an inordinate amount of time before they are accepted. The technical and ergonomic problems associated with any application must be solved so that cost containment, safety, ease of use, and quality of life are ensured. This paper discusses these issues in relation to our own development of an autonomous vehicle for health care applications. In this advancement, a commercially available platform is being equipped with an on-board, multiprocessor computer system and a variety of sensor systems. In order to develop pertinent solutions to the technical problems, there must be a framework wherein there is a focus upon the practical issues associated with the end application.

  14. A Segway RMP-based robotic transport system

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Kogut, Greg; Barua, Ripan; Burmeister, Aaron; Pezeshkian, Narek; Powell, Darren; Farrington, Nathan; Wimmer, Matt; Cicchetto, Brett; Heng, Chana; Ramirez, Velia

    2004-12-01

    In the area of logistics, there currently is a capability gap between the one-ton Army robotic Multifunction Utility/Logistics and Equipment (MULE) vehicle and a soldier's backpack. The Unmanned Systems Branch at Space and Naval Warfare Systems Center (SPAWAR Systems Center, or SSC), San Diego, with the assistance of a group of interns from nearby High Tech High School, has demonstrated enabling technologies for a solution that fills this gap. A small robotic transport system has been developed based on the Segway Robotic Mobility Platform (RMP). We have demonstrated teleoperated control of this robotic transport system, and conducted two demonstrations of autonomous behaviors. Both demonstrations involved a robotic transporter following a human leader. In the first demonstration, the transporter used a vision system running a continuously adaptive mean-shift filter to track and follow a human. In the second demonstration, the separation between leader and follower was significantly increased using Global Positioning System (GPS) information. The track of the human leader, with a GPS unit in his backpack, was sent wirelessly to the transporter, also equipped with a GPS unit. The robotic transporter traced the path of the human leader by following these GPS breadcrumbs. We have additionally demonstrated a robotic medical patient transport capability by using the Segway RMP to power a mock-up of the Life Support for Trauma and Transport (LSTAT) patient care platform, on a standard NATO litter carrier. This paper describes the development of our demonstration robotic transport system and the various experiments conducted.
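    One hedged way to realize the "continuously adaptive mean-shift" leader following mentioned above is OpenCV's CamShift on a back-projected hue histogram of the leader's clothing; the ROI initialisation, gain and velocity interface below are assumptions rather than the demonstrated system.

        # Illustrative CamShift-based leader following (OpenCV).
        import cv2

        def track_and_steer(frame_bgr, roi_hist, track_window, gain=0.005):
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
            rot_rect, track_window = cv2.CamShift(backproj, track_window, criteria)
            (cx, _cy), _, _ = rot_rect
            yaw_rate = gain * (frame_bgr.shape[1] / 2.0 - cx)  # steer to keep the leader centred
            return yaw_rate, track_window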

  15. Methods of determining complete sensor requirements for autonomous mobility

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A. (Inventor)

    2012-01-01

    A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
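    Read as a first-order sensitivity analysis, the patented computation can be sketched numerically as below; the symbols and the simple product-sum form are a hedged restatement, not the patent's claims.

        # Hedged numerical reading: change in mobility ~ sum over behaviors of
        # (sensitivity of mobility to the behavior) x (time variation of the behavior) x dt.
        def mobility_change(sensitivities, time_variations, dt):
            return sum(s * v * dt for s, v in zip(sensitivities, time_variations))

        # A behavior with large sensitivity and fast variation dominates the change in
        # mobility, and hence dominates the sensing requirements.
        delta_m = mobility_change([0.8, 0.1], [2.0, 0.5], dt=0.1)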

  16. Full autonomous microline trace robot

    NASA Astrophysics Data System (ADS)

    Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan

    2000-10-01

    Optoelectric inspection may find applications in robotic systems. In a micro robotic system, a smaller optoelectric inspection system is preferred. However, as the robot is miniaturized, fewer optoelectric detectors can be carried, and this lack of information makes it difficult for the micro robot to acquire its status. In our lab, a micro line trace robot has been designed, which acts autonomously based on its optoelectric detection. It has been programmed to follow a black line printed on white-colored ground. Besides the optoelectric inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence. The robot's intelligence is based on an AT89C2051 microcontroller which controls its movement. The technical details of the micro robot are as follows: dimensions: 30 mm x 25 mm x 35 mm; velocity: 60 mm/s.
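    A plausible version of the "simple logical algorithm" for a black line on white ground, assuming left, centre and right reflectance detectors; this is a plain-Python restatement, not the AT89C2051 firmware.

        # Illustrative line-following decision rule (detector layout assumed).
        def steer(left_dark, centre_dark, right_dark):
            if centre_dark and not (left_dark or right_dark):
                return "forward"
            if left_dark:            # the line has drifted toward the left detector
                return "turn_left"
            if right_dark:
                return "turn_right"
            return "search"          # line lost: rotate slowly until it is re-acquired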

  17. Fusing Laser Reflectance and Image Data for Terrain Classification for Small Autonomous Robots

    DTIC Science & Technology

    2014-12-01

    limit us to low power, lightweight sensors, and a maximum range of approximately 5 meters. Contrast these robot characteristics to typical terrain...classification work which uses large autonomous ground vehicles with sensors mounted high above the ground. Terrain classification for small autonomous...into predefined classes [10], [11]. However, wheeled vehicles offer the ability to use non-traditional sensors such as vibration sensors [12] and

  18. INL Autonomous Navigation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation, including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.

  19. Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2

    DTIC Science & Technology

    2015-03-01

    distribution is unlimited. 13. SUPPLEMENTARY NOTES DCS Corporation, Alexandria, VA 14. ABSTRACT In the past, robot operation has been a high-cognitive...increase performance and reduce perceived workload. The aids were overlays displaying what an autonomous robot perceived in the environment and the...subsequent course of action planned by the robot. Eight active-duty, US Army Soldiers completed 16 scenario missions using an operator interface

  20. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair was used as the robot base. The wheelchair was purchased from a local thrift store for $28. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with last year's design. In order to control the wheelchair and retain the robust motor controls already on the wheelchair, the joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
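    A hedged sketch of the colour-segmentation front end shared by the three algorithms: HSV thresholding for white boundary lines and orange obstacles. The threshold values are guesses for outdoor lighting, not the team's calibration.

        # Illustrative HSV colour segmentation of the competition course.
        import cv2

        def segment_course(frame_bgr):
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            white_lines = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))
            orange_obstacles = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))
            return white_lines, orange_obstacles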

  1. On-rail solution for autonomous inspections in electrical substations

    NASA Astrophysics Data System (ADS)

    Silva, Bruno P. A.; Ferreira, Rafael A. M.; Gomes, Selson C.; Calado, Flavio A. R.; Andrade, Roberto M.; Porto, Matheus P.

    2018-05-01

    This work presents an alternative solution for autonomous inspections in electrical substations. The autonomous system is a robot that moves on rails, collects infrared and visible images of selected targets, processes the data, and predicts the components' lifetime. The robot moves on rails to overcome difficulties found in unpaved substations, which are common in Brazil. We take advantage of the rails to convey the data along them, minimizing electromagnetic interference, and at the same time to transmit electrical energy to feed the autonomous system. As part of the quality control process, we compared thermographic inspections made by the robot with inspections made by a trained thermographer using a Flir® SC660 scientific camera. The results show that the robot achieved satisfactory performance, identifying components and measuring temperature accurately. The embedded routine accounts for weather changes over the day, providing a standardized result for the components' thermal response, and also gives the uncertainty of the temperature measurement, contributing to the quality of the decision-making process.

  2. Autonomous learning in humanoid robotics through mental imagery.

    PubMed

    Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo

    2013-05-01

    In this paper we focus on modeling autonomous learning to improve performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented, which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve the performance through a simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the approach proposed and help to clarify the rationale behind the chosen model and its implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. A soft robot capable of 2D mobility and self-sensing for obstacle detection and avoidance

    NASA Astrophysics Data System (ADS)

    Qin, Lei; Tang, Yucheng; Gupta, Ujjaval; Zhu, Jian

    2018-04-01

    Soft robots have shown great potential for surveillance applications due to their interesting attributes including inherent flexibility, extreme adaptability, and excellent ability to move in confined spaces. High mobility combined with the sensing systems that can detect obstacles plays a significant role in performing surveillance tasks. Extensive studies have been conducted on movement mechanisms of traditional hard-bodied robots to increase their mobility. However, there are limited efforts in the literature to explore the mobility of soft robots. In addition, little attempt has been made to study the obstacle-detection capability of a soft mobile robot. In this paper, we develop a soft mobile robot capable of high mobility and self-sensing for obstacle detection and avoidance. This robot, consisting of a dielectric elastomer actuator as the robot body and four electroadhesion actuators as the robot feet, can generate 2D mobility, i.e. translations and turning in a 2D plane, by programming the actuation sequence of the robot body and feet. Furthermore, we develop a self-sensing method which models the robot body as a deformable capacitor. By measuring the real-time capacitance of the robot body, the robot can detect an obstacle when the peak capacitance drops suddenly. This sensing method utilizes the robot body itself instead of external sensors to achieve detection of obstacles, which greatly reduces the weight and complexity of the robot system. The 2D mobility and self-sensing capability ensure the success of obstacle detection and avoidance, which paves the way for the development of lightweight and intelligent soft mobile robots.
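    A hedged sketch of the self-sensing rule described above: compare the peak capacitance of the latest actuation cycle with a running baseline and flag an obstacle when it drops sharply. The drop ratio and baseline window are assumptions.

        # Illustrative obstacle detector from per-cycle peak capacitance readings.
        def obstacle_detected(recent_peaks, latest_peak, drop_ratio=0.85):
            """recent_peaks: peak capacitance values from the last few unobstructed cycles."""
            if not recent_peaks:
                return False
            baseline = sum(recent_peaks) / len(recent_peaks)
            return latest_peak < drop_ratio * baseline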

  4. Manifold learning in machine vision and robotics

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classifying image content, navigating a mobile autonomous robot in uncertain environments, and robot manipulation in medical robotics and computer-assisted surgery. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high-dimensional data and yield effective algorithms for solving a large number of subject-oriented tasks, is the content of this conference plenary speech, some topics of which are covered in the paper.
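    A minimal example of the manifold-model idea on synthetic data, using scikit-learn's Isomap to recover low-dimensional coordinates from high-dimensional observations; the dataset and parameters are illustrative only.

        # High-dimensional points lying on a low-dimensional manifold, embedded with Isomap.
        from sklearn.datasets import make_swiss_roll
        from sklearn.manifold import Isomap

        X, _ = make_swiss_roll(n_samples=1000, noise=0.05)          # 3-D points on a 2-D surface
        embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
        print(embedding.shape)                                      # (1000, 2)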

  5. Coordinated Control Of Mobile Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Applicable to variety of mobile robotic manipulators, including robots that move along tracks (typically, painting and welding robots), robots mounted on gantries and capable of moving in all three dimensions, wheeled robots, and compound robots (consisting of robots mounted on other robots). Theoretical basis discussed in several prior articles in NASA Tech Briefs, including "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes With Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  6. Autonomous mobile platform with simultaneous localisation and mapping system for patrolling purposes

    NASA Astrophysics Data System (ADS)

    Mitka, Łukasz; Buratowski, Tomasz

    2017-10-01

    This work describes an autonomous mobile platform for supervision and surveillance purposes. The system can be adapted for mounting on different types of vehicles. The platform is based on a SLAM navigation system which performs the localization task. Sensor fusion including laser scanners, an inertial measurement unit (IMU), odometry and GPS lets the system determine its position reliably and precisely. The platform is able to create a 3D model of a supervised area and export it as a point cloud. The system can operate both indoors and outdoors, as the navigation algorithm is resistant to typical localization errors caused by wheel slippage or temporary GPS signal loss. The system is equipped with a path-planning module which allows operating in two modes. The first mode is for periodic observation of points in a selected area. The second mode is activated in case of an alarm: when it is called, the platform takes the fastest route to the place of the alert. The path planning is always performed online using the most current scans; therefore the platform is able to adjust its trajectory to changes in the environment or to obstacles that are in motion. The control algorithms are developed under the Robot Operating System (ROS), since it comes with drivers for many devices used in robotics. Such a solution allows the system to be extended with any type of sensor in order to incorporate its data into the created area model. The proposed solution can be ported to other existing robotic platforms or used to develop a new platform dedicated to a specific kind of surveillance. One use case for the platform is to patrol an area, such as an airport or metro station, in search of dangerous substances or suspicious objects and, in case of detection, instantly inform security forces. A second use case is tele-operation in a hazardous area for inspection purposes.
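    As a hedged sketch of the two path-planning modes described above, goal selection can be reduced to cycling through observation waypoints and pre-empting that cycle when an alarm location arrives; the interface below is illustrative.

        # Illustrative goal selection for patrol mode versus alarm mode.
        def next_goal(waypoints, current_index, alarm_location=None):
            if alarm_location is not None:
                return alarm_location, current_index      # alarm pre-empts the patrol cycle
            current_index = (current_index + 1) % len(waypoints)
            return waypoints[current_index], current_index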

  7. Object Detection Applied to Indoor Environments for Mobile Robot Navigation.

    PubMed

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-07-28

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests.
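    A hedged sketch of the classification stage only: an SVM trained on precomputed object descriptors (geometric shape features or bag-of-words histograms). The feature extraction and the RGB-D segmentation are outside this snippet, and the hyperparameters are assumptions.

        # Illustrative SVM object classifier on precomputed descriptors (scikit-learn).
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_object_classifier(descriptors, labels):
            """descriptors: (N, D) feature array; labels: N object class names."""
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
            clf.fit(descriptors, labels)
            return clf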

  8. Object Detection Applied to Indoor Environments for Mobile Robot Navigation

    PubMed Central

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-01-01

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests. PMID:27483264

  9. Ambler - An autonomous rover for planetary exploration

    NASA Technical Reports Server (NTRS)

    Bares, John; Hebert, Martial; Kanade, Takeo; Krotkov, Eric; Mitchell, Tom

    1989-01-01

    The authors are building a prototype legged rover, called the Ambler (loosely an acronym for autonomous mobile exploration robot) and testing it on full-scale, rugged terrain of the sort that might be encountered on the Martian surface. They present an overview of their research program, focusing on locomotion, perception, planning, and control. They summarize some of the most important goals and requirements of a rover design and describe how locomotion, perception, and planning systems can satisfy these requirements. Since the program is relatively young (one year old at the time of writing) they identify issues and approaches and describe work in progress rather than report results. It is expected that many of the technologies developed will be applicable to other planetary bodies and to terrestrial concerns such as hazardous waste assessment and remediation, ocean floor exploration, and mining.

  10. Inexpensive robots used to teach dc circuits and electronics

    NASA Astrophysics Data System (ADS)

    Sidebottom, David L.

    2017-05-01

    This article describes inexpensive, autonomous robots, built without microprocessors, used in a college-level introductory physics laboratory course to motivate student learning of dc circuits. Detailed circuit descriptions are provided as well as a week-by-week course plan that can guide students from elementary dc circuits, through Kirchhoff's laws, and into simple analog integrated circuits with the motivational incentive of building an autonomous robot that can compete with others in a public arena.

  11. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    Sample Return Robot Challenge staff members confer before the team Survey robot makes its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  12. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    A robot from the University of Waterloo Robotics Team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  13. Robot Lies in Health Care: When Is Deception Morally Permissible?

    PubMed

    Matthias, Andreas

    2015-06-01

    Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). This poses the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled in order for robot deception to be morally permissible, and in some cases even morally indicated.

  14. A Strategy to employ coordinated, autonomous Platforms for addressing long-term biochemical observing Tasks

    NASA Astrophysics Data System (ADS)

    Waldmann, H. C.; Montenegro, S.

    2016-02-01

    Autonomous platforms are gaining importance for ocean observing, in particular for enabling long-term observing tasks. Employing the mobility of these platforms allows targeted investigation of phenomena that up to now have mainly been seen from satellites but lack detailed scrutiny. As part of the nationally funded project ROBEX, new operation concepts for mobile platforms are being developed; in particular, a new type of underwater glider with a larger payload capacity than legacy systems will be developed. First pool tests of a particular hull shape have led to a better understanding of the hydrodynamic conditions, and an optimized hull design was derived from them. The WAVEGLIDER system from Liquid Robotics lends itself to use as a communication hub and as a platform to track underwater vehicles. Therefore, the combination of these systems is currently being assessed with regard to a possible operation and its hardware and software implementation. A major issue is to achieve a coordinated displacement of these completely decoupled systems. Issues such as how to mitigate faulty mission runs, cope with low communication bandwidths, and ensure adequate positioning information about the underwater glider have to be addressed. Robotic concepts known from terrestrial applications, such as UAV systems, are tested under the more stringent environmental conditions of ocean waters. With this combination of WAVEGLIDER and underwater glider, it is planned to carry out long-term missions to investigate biochemical processes in the water column, in particular particle transport through the water column and the processes resulting from it. Concepts and first results of these tasks will be presented.

  15. Coastal zone environment measurements at Sakhalin Island using autonomous mobile robotic system

    NASA Astrophysics Data System (ADS)

    Tyugin, Dmitry; Kurkin, Andrey; Zaytsev, Andrey; Zeziulin, Denis; Makarov, Vladimir

    2017-04-01

    To perform continuous, complex measurements of environmental characteristics in coastal zones, an autonomous mobile robotic system (AMRS) was built. The main advantage of such a system compared to manual measurements is the ability to quickly change the location of the equipment and start measurements. The AMRS can transport a set of sensors and an appropriate power source over long distances. The equipment installed on the AMRS includes: a modern high-tech ship's radar, «Micran», for sea wave measurements; a multiparameter platform WXT 520 for weather monitoring; a high-precision GPS/GLONASS receiver OS-203 for georeferencing; a laser scanner platform based on two Sick LMS-511 scanners, which can provide 3D distance measurements up to 80 meters along the AMRS route; and a ruggedized quad-core fanless computer, Matrix MXE-5400, for data collection and recording. The equipment is controlled by high-performance modular software developed specifically for the AMRS. During the summer of 2016 an experiment was conducted, with measurements taken in the coastal zone of Sakhalin Island (Russia). The measuring system of the AMRS was started in automatic mode controlled by the software. As a result, a large amount of data was collected and processed into a database. It consists of continuous measurements of the coastal zone under different weather conditions. The most interesting period for investigation is a three-point storm detected on June 2, 2016. Further work will address processing of the measured environmental characteristics and verification of numerical models based on the collected data. The presented results were obtained with the support of the Russian president's scholarship for young scientists and graduate students №SP-193.2015.5.

  16. Supervised autonomous robotic soft tissue surgery.

    PubMed

    Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W

    2016-05-04

    The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery-removing the surgeon's hands-promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis-including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses-between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques. Copyright © 2016, American Association for the Advancement of Science.

  17. A survey of simultaneous localization and mapping on unstructured lunar complex environment

    NASA Astrophysics Data System (ADS)

    Wang, Yiqiao; Zhang, Wei; An, Pei

    2017-10-01

    Simultaneous localization and mapping (SLAM) technology is the key to realizing a lunar rover's intelligent perception and autonomous navigation. It embodies the autonomous capability of a mobile robot and has attracted considerable attention from researchers over the past thirty years. Visual sensors are valuable for SLAM research because they provide a wealth of information. Visual SLAM uses only images as external information to estimate the location of the robot and construct the environment map. SLAM technology still has problems when applied in large-scale, unstructured, and complex environments. Based on the latest work in the field of visual SLAM, this paper surveys and summarizes SLAM technology for the unstructured, complex environment of the lunar surface. In particular, we summarize and compare feature detection and matching with SIFT, SURF, and ORB, and discuss their respective advantages and disadvantages. We analyze the three main approaches: SLAM based on the extended Kalman filter, SLAM based on the particle filter, and SLAM based on graph optimization (EKF-SLAM, PF-SLAM, and Graph-based SLAM). Finally, the article summarizes and discusses the key scientific and technical difficulties that visual SLAM faces in the lunar context. We also explore frontier issues such as multi-sensor fusion SLAM and multi-robot cooperative SLAM, predict the development trend of lunar rover SLAM technology, and put forward some ideas for further research.
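
    As a concrete illustration of the feature detection and matching step compared in the survey, the following sketch matches ORB features between two frames with OpenCV; it is a generic example of the technique, not code from any of the surveyed systems.

    ```python
    import cv2

    def match_orb_features(img_path_a, img_path_b, max_matches=50):
        """Detect ORB keypoints in two frames and match them by Hamming distance."""
        img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)

        # Brute-force matcher with Hamming distance suits ORB's binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

        # Return pixel correspondences for the best matches; a SLAM front end would
        # feed these into pose estimation (e.g. essential-matrix or PnP solvers).
        return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
                for m in matches[:max_matches]]
    ```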

  18. Autonomous intelligent military robots: Army ants, killer bees, and cybernetic soldiers

    NASA Astrophysics Data System (ADS)

    Finkelstein, Robert

    The rationale for developing autonomous intelligent robots in the military is to render conventional warfare systems ineffective and indefensible. The Desert Storm operation demonstrated the effectiveness of such systems as unmanned air and ground vehicles and indicated the future possibilities of robotic technology. Robotic military vehicles would have the advantages of expendability, low cost, lower complexity compared to manned systems, survivability, maneuverability, and a capability to share in instantaneous communication and distributed processing of combat information. Basic characteristics of intelligent systems and hierarchical control systems with sensor inputs are described. Genetic algorithms are seen as a means of achieving appropriate levels of intelligence in a robotic system. Potential impacts of robotic technology in the military are outlined.

  19. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Team KuuKulgur watches as their robots attempt the level one competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  20. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Retrievers team robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  1. Plugin-docking system for autonomous charging using particle filter

    NASA Astrophysics Data System (ADS)

    Koyasu, Hiroshi; Wada, Masayoshi

    2017-03-01

    Autonomous charging of the robot's battery is one of the key functions for expanding the working areas of robots. To realize it, most existing systems use custom docking stations or artificial markers; in other words, they can only charge at a few specific outlets. If this limitation is removed, the working areas of robots expand significantly. In this paper, we describe a plugin-docking system for autonomous charging that does not require any custom docking stations or artificial markers. A single camera is used to recognize the 3D position of an outlet socket, and a particle filter-based image tracking algorithm that is robust to illumination changes is applied. The algorithm is implemented on a robot with an omnidirectional drive system. The experimental results show the effectiveness of our system.
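
    The tracker in this paper is built on a particle filter. The sketch below shows the generic predict-weight-resample cycle for tracking a 2D image position, with a simple Gaussian likelihood standing in for the authors' illumination-robust measurement model; it illustrates the general technique, not their implementation.

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, measurement,
                             motion_std=3.0, meas_std=8.0, rng=None):
        """One predict-weight-resample cycle for tracking a 2D image point.

        particles: (N, 2) array of candidate pixel positions.
        measurement: observed (x, y) position of the target in the current frame.
        """
        rng = rng or np.random.default_rng()

        # Predict: diffuse particles with a simple random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, particles.shape)

        # Weight: Gaussian likelihood of the measurement given each particle.
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std**2)
        weights /= weights.sum()

        # Resample: draw particles proportionally to their weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

        estimate = particles.mean(axis=0)   # tracked position estimate
        return particles, weights, estimate
    ```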

  2. A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb

    PubMed Central

    Sánchez-Herrera, P.; Balaguer, C.; Jardón, A.

    2018-01-01

    Robot-mediated neurorehabilitation is a growing field that seeks to incorporate advances in robotics combined with neuroscience and rehabilitation to define new methods for treating problems related to neurological diseases. In this paper, a systematic literature review is conducted to identify the contribution of robotics for upper limb neurorehabilitation, highlighting its relation with the rehabilitation cycle, and to clarify the prospective research directions in the development of more autonomous rehabilitation processes. With this aim, first, a study and definition of a general rehabilitation process are made, and then, it is particularized for the case of neurorehabilitation, identifying the components involved in the cycle and the degree of interaction between them. Next, this generic process is compared with the current literature in robotics focused on upper limb treatment, analyzing which components of this rehabilitation cycle are being investigated. Finally, the challenges and opportunities to obtain more autonomous rehabilitation processes are discussed. In addition, based on this study, a series of technical requirements that should be taken into account when designing and implementing autonomous robotic systems for rehabilitation is presented and discussed. PMID:29707189

  3. A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents

    PubMed Central

    Conesa-Muñoz, Jesús; Valente, João; del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela

    2016-01-01

    Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability to reach and easily cover large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, nowadays they do not have the capability to act. On the other hand, ground vehicles have enough power to perform the intervention but face more mobility constraints. This paper proposes a multi-robot sense-act system, composed of aerial and ground vehicles. This combination allows performing autonomous tasks in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system as well as proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved to be suitable to reduce the amount of herbicide applied in agricultural treatments. Although herbicides are very polluting, they are massively deployed on complete agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map, which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except when required by Spanish law due to safety reasons. PMID:27517934
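
    The workflow described above (aerial weed map, then targeted ground treatment) can be illustrated with a simple grid-based plan in which cells whose estimated weed coverage exceeds a threshold are selected for spraying. The grid resolution, threshold, and field size below are hypothetical, not the paper's actual planner.

    ```python
    import numpy as np

    def plan_treatment(weed_map, cell_size_m=2.0, threshold=0.3):
        """Turn an aerial weed-coverage map into a list of cells to spray.

        weed_map: 2D array with estimated weed coverage (0..1) per grid cell.
        Returns (x, y) centre coordinates in metres of the cells to treat.
        """
        rows, cols = np.nonzero(weed_map > threshold)
        targets = [((c + 0.5) * cell_size_m, (r + 0.5) * cell_size_m)
                   for r, c in zip(rows, cols)]
        treated_fraction = len(targets) / weed_map.size
        return targets, treated_fraction

    # Example: a 10 m x 10 m field sampled on a 5 x 5 grid of 2 m cells.
    rng = np.random.default_rng(0)
    weed_map = rng.random((5, 5)) * 0.6
    targets, frac = plan_treatment(weed_map)
    print(f"Spraying {len(targets)} of {weed_map.size} cells "
          f"({frac:.0%} of the field instead of 100%)")
    ```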

  4. Development of an Interactive Augmented Environment and Its Application to Autonomous Learning for Quadruped Robots

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi

    This paper describes an interactive experimental environment for autonomous soccer robots, which is a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize the positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that virtual objects are touchable in this system owing to projectors. We also show the portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. Our robots autonomously learn and acquire more beneficial strategies without human intervention in our augmented environment than those in a fully simulated environment.

  5. Autonomous Realtime Threat-Hunting Robot (ARTHR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    INL

    2008-05-29

    Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats.

  6. Autonomous Realtime Threat-Hunting Robot (ARTHR)

    ScienceCinema

    INL

    2017-12-09

    Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats.

  7. Supervisory autonomous local-remote control system design: Near-term and far-term applications

    NASA Technical Reports Server (NTRS)

    Zimmerman, Wayne; Backes, Paul

    1993-01-01

    The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.

  8. An architectural approach to create self organizing control systems for practical autonomous robots

    NASA Technical Reports Server (NTRS)

    Greiner, Helen

    1991-01-01

    For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles towards the construction of more general robot control systems are as follows: (1) Growth problem; (2) Software generation; (3) Interaction with environment; (4) Reliability; and (5) Resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.

  9. Autonomous bone reposition around anatomical landmark for robot-assisted orthognathic surgery.

    PubMed

    Woo, Sang-Yoon; Lee, Sang-Jeong; Yoo, Ji-Yong; Han, Jung-Joon; Hwang, Soon-Jung; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Yi, Won-Jin

    2017-12-01

    The purpose of this study was to develop a new method for enabling a robot to assist a surgeon in repositioning a bone segment to accurately transfer a preoperative virtual plan into the intraoperative phase in orthognathic surgery. We developed a robot system consisting of an arm with six degrees of freedom, a robot motion-controller, and a PC. An end-effector at the end of the robot arm transferred the movements of the robot arm to the patient's jawbone. The registration between the robot and CT image spaces was performed completely preoperatively, and the intraoperative registration could be finished using only position changes of the tracking tools at the robot end-effector and the patient's splint. The phantom's maxillomandibular complex (MMC) connected to the robot's end-effector was repositioned autonomously by the robot movements around an anatomical landmark of interest based on the tool center point (TCP) principle. The robot repositioned the MMC around the TCP of the incisor of the maxilla and the pogonion of the mandible following plans for real orthognathic patients. The accuracy of the robot's repositioning increased when an anatomical landmark for the TCP was close to the registration fiducials. In spite of this influence, we could increase the repositioning accuracy at the landmark by using the landmark itself as the TCP. With its ability to incorporate virtual planning using a CT image and autonomously execute the plan around an anatomical landmark of interest, the robot could help surgeons reposition bones more accurately and dexterously. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
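
    Repositioning around a tool center point amounts to applying a rigid transform whose rotation is taken about the chosen anatomical landmark rather than the robot base. The following minimal sketch shows that geometry; it illustrates the general TCP idea only and is not the authors' controller or registration pipeline.

    ```python
    import numpy as np

    def rotate_about_tcp(points, tcp, axis, angle_rad, translation=np.zeros(3)):
        """Rigidly move a point set by rotating about a tool centre point (TCP).

        points: (N, 3) landmark/bone-segment coordinates.
        tcp: (3,) anatomical landmark chosen as the centre of rotation.
        axis: (3,) rotation axis (will be normalised).
        """
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        x, y, z = axis
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        # Rodrigues rotation matrix for the given axis and angle.
        R = np.array([
            [c + x*x*(1-c),     x*y*(1-c) - z*s,  x*z*(1-c) + y*s],
            [y*x*(1-c) + z*s,   c + y*y*(1-c),    y*z*(1-c) - x*s],
            [z*x*(1-c) - y*s,   z*y*(1-c) + x*s,  c + z*z*(1-c)],
        ])
        # Rotate about the TCP, then apply any additional translation.
        return (points - tcp) @ R.T + tcp + translation

    # Example: rotate a segment 5 degrees about the vertical axis through a landmark.
    segment = np.array([[10.0, 2.0, 0.0], [12.0, 2.5, 0.5]])
    moved = rotate_about_tcp(segment, tcp=np.array([11.0, 2.0, 0.0]),
                             axis=[0, 0, 1], angle_rad=np.deg2rad(5))
    ```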

  10. Autonomous planetary rover

    NASA Astrophysics Data System (ADS)

    Krotkov, Eric; Simmons, Reid; Whittaker, William

    1992-02-01

    This report describes progress in research on an autonomous robot for planetary exploration performed during 1991 at the Robotics Institute, Carnegie Mellon University. The report summarizes the achievements during calendar year 1991, and lists personnel and publications. In addition, it includes several papers resulting from the research. Research in 1991 focused on understanding the unique capabilities of the Ambler mechanism and on autonomous walking in rough, natural terrain. We also designed a sample acquisition system, and began to configure a successor to the Ambler.

  11. Automation & robotics for future Mars exploration

    NASA Astrophysics Data System (ADS)

    Schulte, W.; von Richter, A.; Bertrand, R.

    2003-04-01

    Automation and Robotics (A&R) are currently considered a key technology for Mars exploration. Initiatives in this field aim at developing new A&R systems and technologies for planetary surface exploration. Kayser-Threde led the study AROMA (Automation & Robotics for Human Mars Exploration) under ESA contract in order to define a reference architecture of A&R elements in support of a human Mars exploration program. One of the goals was to define new developments and to maintain the competitiveness of European industry within this field. We present a summary of the A&R study with respect to a particular system: the Autonomous Research Island (ARI). In the Mars exploration scenario, a robotic outpost system initially lands at pre-selected sites in order to search for life forms and water and to analyze the surface, geology and atmosphere. A&R systems, i.e. rovers and autonomous instrument packages, perform a number of missions with scientific and technology development objectives on the surface of Mars as part of preparations for a human exploration mission. In the Robotic Outpost Phase, ARI is conceived as an automated lander which can perform in-situ analysis. It consists of a service module and a micro-rover system for local investigations. Such a system is already under investigation and development in other TRP activities. The micro-rover system provides local mobility for in-situ scientific investigations at a given landing or deployment site. In the long run, ARI also supports human Mars missions. An astronaut crew would travel larger distances in a pressurized rover on Mars. Whenever interesting features on the surface are identified, the crew would interrupt the travel and perform local investigations. In order to save crew time, ARI could be deployed by the astronauts to perform time-consuming investigations such as in-situ geochemical analysis of rocks/soil. Later, the crew could recover the research island for refurbishment and deployment at another site. In the frame of near-term Mars exploration, a dedicated exobiology mission is envisaged. Scientific and technical studies for a facility to detect the evidence of past or present life have been carried out under ESA contract. Mars soil/rock samples are to be analyzed for their morphology, organic and inorganic composition using a suite of scientific instruments. Robotic devices, e.g. for the acquisition, handling and onboard processing of Mars sample material retrieved from different locations, and surface mobility are important elements in a fully automated mission. Necessary robotic elements have been identified in past studies. Their realization can partly be based on the heritage of existing space hardware, but will require dedicated development effort.

  12. Toward Autonomous Multi-floor Exploration: Ascending Stairway Localization and Modeling

    DTIC Science & Technology

    2013-03-01

    robots have traditionally been restricted to single floors of a building or outdoor areas free of abrupt elevation changes such as curbs and stairs ...solution to this problem and is motivated by the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor...parameters of the stairways, the robot could plan a path that traverses the stairs in order to explore the frontier at other elevations that were previously

  13. Automation and robotics technology for intelligent mining systems

    NASA Technical Reports Server (NTRS)

    Welsh, Jeffrey H.

    1989-01-01

    The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.

  14. Science, technology and the future of small autonomous drones.

    PubMed

    Floreano, Dario; Wood, Robert J

    2015-05-28

    We are witnessing the advent of a new era of robots - drones - that can autonomously fly in natural and man-made environments. These robots, often associated with defence applications, could have a major impact on civilian tasks, including transportation, communication, agriculture, disaster mitigation and environment preservation. Autonomous flight in confined spaces presents great scientific and technical challenges owing to the energetic cost of staying airborne and to the perceptual intelligence required to negotiate complex environments. We identify scientific and technological advances that are expected to translate, within appropriate regulatory frameworks, into pervasive use of autonomous drones for civilian applications.

  15. Sensing and data classification for a robotic meteorite search

    NASA Astrophysics Data System (ADS)

    Pedersen, Liam; Apostolopoulos, Dimi; Whittaker, William L.; Benedix, Gretchen; Rousch, Ted

    1999-01-01

    Upcoming missions to Mars and the Moon call for highly autonomous robots with the capability to perform intra-site exploration, reason about their scientific finds, and perform comprehensive on-board analysis of collected data. An ideal case for testing such technologies and robot capabilities is the robotic search for Antarctic meteorites. The successful identification and classification of meteorites depends on sensing modalities and intelligent evaluation of the acquired data. Data from color imagery and spectroscopic measurements are used to identify terrestrial rocks and distinguish them from meteorites. However, because of the large number of rocks and the high cost and delay of using some of the sensors, it is necessary to eliminate as many meteorite candidates as possible using cheap long-range sensors, such as color cameras. More resource-consuming sensors are held in reserve for only the more promising samples. Bayes networks are used as the formalism for incrementally combining data from multiple sources in a statistically rigorous manner. Furthermore, they can be used to infer the utility of further sensor readings given the currently known data. This information, along with cost estimates, is necessary for the sensing system to rationally schedule further sensor readings and deployments. This paper addresses issues associated with sensor selection and the implementation of an architecture for automatic identification of rocks and meteorites from a mobile robot.
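
    The incremental combination of evidence described here can be illustrated with a sequential Bayes update over a rock/meteorite hypothesis, applying the cheap sensor first and the costly one only afterwards. The class priors and sensor likelihoods below are invented for illustration and are not taken from the paper.

    ```python
    def bayes_update(prior, likelihoods, observation):
        """Update P(class) after one sensor reading.

        prior: dict class -> probability.
        likelihoods: dict class -> dict observation -> P(observation | class).
        """
        posterior = {c: prior[c] * likelihoods[c][observation] for c in prior}
        total = sum(posterior.values())
        return {c: p / total for c, p in posterior.items()}

    # Illustrative numbers only: P(reading | class) for a cheap colour-camera cue
    # and a more expensive spectrometer cue.
    prior = {"meteorite": 0.05, "terrestrial_rock": 0.95}
    camera_lik = {"meteorite": {"dark": 0.8, "light": 0.2},
                  "terrestrial_rock": {"dark": 0.3, "light": 0.7}}
    spectro_lik = {"meteorite": {"metallic": 0.7, "non_metallic": 0.3},
                   "terrestrial_rock": {"metallic": 0.1, "non_metallic": 0.9}}

    belief = bayes_update(prior, camera_lik, "dark")        # cheap sensor first
    belief = bayes_update(belief, spectro_lik, "metallic")  # costly sensor if promising
    print(belief)
    ```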

  16. Development of a Commercially Viable, Modular Autonomous Robotic System for Converting any Vehicle to Autonomous Control

    NASA Technical Reports Server (NTRS)

    Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.

    1994-01-01

    A Modular Autonomous Robotic System (MARS), consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control and support a modular payload for multiple applications, is being developed. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open-system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute-position autonomous navigation, and relative-position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.
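
    The five-level control hierarchy (teleoperated up to relative-position autonomous navigation) can be sketched as a simple mode dispatcher that degrades toward manual control when a required resource is unavailable. The sketch is in Python for brevity (the abstract notes the MARS software is C++), and the class, function, and parameter names are illustrative assumptions, not the MARS implementation.

    ```python
    from enum import IntEnum

    class ControlLevel(IntEnum):
        """Control modes from most manual (lowest) to most autonomous (highest)."""
        TELEOPERATED = 1
        CONTINUOUS_GUIDEPATH = 2
        PERIODIC_GUIDEPATH = 3
        ABSOLUTE_POSITION_NAV = 4
        RELATIVE_POSITION_NAV = 5

    def select_command(level, operator_cmd, guidepath_cmd, planner_cmd,
                       guidepath_ok=True, position_fix_ok=True):
        """Pick the command source for the active control level, degrading gracefully.

        Higher levels rely on more on-board resources (guidepath signal, position
        fix); when a required resource is missing, fall back toward teleoperation.
        """
        if level >= ControlLevel.ABSOLUTE_POSITION_NAV and position_fix_ok:
            return planner_cmd        # autonomous navigation from the path planner
        if level >= ControlLevel.CONTINUOUS_GUIDEPATH and guidepath_ok:
            return guidepath_cmd      # follow the guidepath
        return operator_cmd           # teleoperated fallback

    cmd = select_command(ControlLevel.RELATIVE_POSITION_NAV,
                         operator_cmd={"v": 0.0, "w": 0.0},
                         guidepath_cmd={"v": 0.3, "w": 0.0},
                         planner_cmd={"v": 0.5, "w": 0.1},
                         position_fix_ok=False)
    ```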

  17. A Behavior-Based Strategy for Single and Multi-Robot Autonomous Exploration

    PubMed Central

    Cepeda, Jesus S.; Chaimowicz, Luiz; Soto, Rogelio; Gordillo, José L.; Alanís-Reyes, Edén A.; Carrillo-Arce, Luis C.

    2012-01-01

    In this paper, we consider the problem of autonomous exploration of unknown environments with single and multiple robots. This is a challenging task, with several potential applications. We propose a simple yet effective approach that combines a behavior-based navigation with an efficient data structure to store previously visited regions. This allows robots to safely navigate, disperse and efficiently explore the environment. A series of experiments performed using a realistic robotic simulator and a real testbed scenario demonstrate that our technique effectively distributes the robots over the environment and allows them to quickly accomplish their mission in large open spaces, narrow cluttered environments, dead-end corridors, as well as rooms with minimum exits.
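
    The combination of reactive behaviors with a store of previously visited regions can be sketched as a coarse grid that biases the robot away from cells it has already seen. The grid resolution and scoring rule below are illustrative assumptions, not the authors' exact data structure.

    ```python
    import math

    class VisitedGrid:
        """Coarse grid that remembers which regions the robot has already visited."""
        def __init__(self, cell_size=1.0):
            self.cell_size = cell_size
            self.visited = set()

        def mark(self, x, y):
            self.visited.add((int(x // self.cell_size), int(y // self.cell_size)))

        def is_visited(self, x, y):
            return (int(x // self.cell_size), int(y // self.cell_size)) in self.visited

    def choose_heading(x, y, free_headings, grid, step=1.5):
        """Among obstacle-free headings, prefer one that leads to an unvisited cell."""
        def score(theta):
            nx, ny = x + step * math.cos(theta), y + step * math.sin(theta)
            return 0.0 if grid.is_visited(nx, ny) else 1.0
        return max(free_headings, key=score)

    grid = VisitedGrid()
    grid.mark(0.0, 0.0)
    heading = choose_heading(0.0, 0.0,
                             free_headings=[0.0, math.pi / 2, math.pi], grid=grid)
    ```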

  18. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  19. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  20. Autonomous Evolution of Dynamic Gaits with Two Quadruped Robots

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Takamura, Seichi; Yamamoto, Takashi; Fujita, Masahiro

    2004-01-01

    A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First we show the evolution of a pace and trot gait on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths/min. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.
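
    The on-board gait evolution can be illustrated with a minimal (1+1) evolution strategy over a vector of gait parameters, where the fitness call stands in for an on-robot trial scored by the robot's own sensors. The parameterization and fitness function below are placeholders, not Sony's actual gait representation or algorithm.

    ```python
    import random

    def evolve_gait(evaluate, n_params=8, generations=50, sigma=0.1, seed=0):
        """(1+1) evolution strategy: mutate the current gait, keep it if it is no worse.

        evaluate: callable mapping a parameter vector to a fitness score, standing in
        for an on-robot trial where speed is measured with the robot's own sensors.
        """
        rng = random.Random(seed)
        parent = [rng.uniform(0.0, 1.0) for _ in range(n_params)]
        parent_fit = evaluate(parent)
        for _ in range(generations):
            child = [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in parent]
            child_fit = evaluate(child)
            if child_fit >= parent_fit:          # keep the child only if no worse
                parent, parent_fit = child, child_fit
        return parent, parent_fit

    # Placeholder fitness: a smooth function standing in for measured walking speed.
    best, speed = evolve_gait(lambda p: -sum((x - 0.6) ** 2 for x in p))
    ```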

  1. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The University of Waterloo Robotics Team, from Canada, prepares to place their robot on the start platform during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  2. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-10

    The University of Waterloo Robotics Team, from Ontario, Canada, prepares their robot for the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team from the University of Waterloo is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  3. Dissociated emergent-response system and fine-processing system in human neural network and a heuristic neural architecture for autonomous humanoid robots.

    PubMed

    Yan, Xiaodan

    2010-01-01

    The current study investigated the functional connectivity of the primary sensory system with resting-state fMRI and applied this knowledge to the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. A dissociation was found within the primary sensory system, in that the olfactory cortex and the somatosensory cortex were strongly connected to the amygdala whereas the visual cortex and the auditory cortex were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. This neural architecture inspired the design of a dissociated emergent-response system and fine-processing system in autonomous humanoid robots, with separate processing units and a consolidation center to coordinate the two systems. Such a design can help autonomous robots detect and respond quickly to danger, so as to maintain their sustainability and independence.

  4. Learning tactile skills through curious exploration

    PubMed Central

    Pape, Leo; Oddo, Calogero M.; Controzzi, Marco; Cipriani, Christian; Förster, Alexander; Carrozza, Maria C.; Schmidhuber, Jürgen

    2012-01-01

    We present curiosity-driven, autonomous acquisition of tactile exploratory skills on a biomimetic robot finger equipped with an array of microelectromechanical touch sensors. Instead of building tailored algorithms for solving a specific tactile task, we employ a more general curiosity-driven reinforcement learning approach that autonomously learns a set of motor skills in absence of an explicit teacher signal. In this approach, the acquisition of skills is driven by the information content of the sensory input signals relative to a learner that aims at representing sensory inputs using fewer and fewer computational resources. We show that, from initially random exploration of its environment, the robotic system autonomously develops a small set of basic motor skills that lead to different kinds of tactile input. Next, the system learns how to exploit the learned motor skills to solve supervised texture classification tasks. Our approach demonstrates the feasibility of autonomous acquisition of tactile skills on physical robotic platforms through curiosity-driven reinforcement learning, overcomes typical difficulties of engineered solutions for active tactile exploration and underactuated control, and provides a basis for studying developmental learning through intrinsic motivation in robots. PMID:22837748
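
    The curiosity signal described (reward tied to how much the learner improves its internal representation of the sensory stream) can be approximated by rewarding the reduction in a simple predictor's error, as sketched below. This is a generic learning-progress reward, not the authors' compression-based formulation.

    ```python
    import numpy as np

    class LearningProgressReward:
        """Intrinsic reward proportional to the improvement of a simple predictor.

        A running least-mean-squares model predicts the next tactile reading from
        the current one; the reward is the drop in its prediction error, so actions
        that lead to learnable sensory input are preferred.
        """
        def __init__(self, dim, lr=0.05):
            self.W = np.zeros((dim, dim))
            self.lr = lr
            self.prev_error = None

        def step(self, obs, next_obs):
            pred = self.W @ obs
            error = float(np.mean((next_obs - pred) ** 2))
            # LMS update of the predictor.
            self.W += self.lr * np.outer(next_obs - pred, obs)
            reward = 0.0 if self.prev_error is None else self.prev_error - error
            self.prev_error = error
            return reward

    # Placeholder tactile frames: each step feeds the next observation to the model.
    rng = np.random.default_rng(0)
    curiosity = LearningProgressReward(dim=4)
    obs = rng.random(4)
    for _ in range(5):
        nxt = 0.5 * obs + 0.1 * rng.random(4)
        r = curiosity.step(obs, nxt)
        obs = nxt
    ```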

  5. Where neuroscience and dynamic system theory meet autonomous robotics: a contracting basal ganglia model for action selection.

    PubMed

    Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J

    2008-05-01

    Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. We use here a multi-disciplinary approach at the convergence of neuroscience, dynamical system theory and autonomous robotics, in order to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering avoidance and energy-saving properties, when compared with a simple if-then-else decision rule.
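
    At a much coarser level than the contracting basal ganglia model, action selection can be illustrated by a leaky-integrator competition between salience channels, as in the generic winner-take-all sketch below; it is not the cortico-baso-thalamo-cortical model of the paper.

    ```python
    import numpy as np

    def select_action(saliences, n_steps=200, dt=0.05, inhibition=0.9, leak=1.0):
        """Leaky-integrator competition: channels mutually inhibit until one dominates.

        saliences: array of urgency values, one per candidate action.
        Returns the index of the winning channel.
        """
        s = np.asarray(saliences, dtype=float)
        a = np.zeros_like(s)                       # channel activations
        for _ in range(n_steps):
            lateral = inhibition * (a.sum() - a)   # inhibition from the other channels
            a += dt * (-leak * a + s - lateral)
            a = np.clip(a, 0.0, None)
        return int(np.argmax(a))

    action = select_action([0.3, 0.8, 0.5])   # e.g. wander / recharge / avoid
    ```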

  6. Design of an autonomous exterior security robot

    NASA Technical Reports Server (NTRS)

    Myers, Scott D.

    1994-01-01

    This paper discusses the requirements and preliminary design of a robotic vehicle for performing autonomous exterior perimeter security patrols around warehouse areas, ammunition supply depots, and industrial parks for the U.S. Department of Defense. The preliminary design allows for the operation of up to eight vehicles in a six-kilometer by six-kilometer zone with autonomous navigation and obstacle avoidance. In addition to detection of crawling intruders at 100 meters, the system must perform real-time inventory checking and database comparisons using a microwave tag system.

  7. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-10

    A team KuuKulgur Robot from Estonia is seen on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team KuuKulgur is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  8. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    Sam Ortega, NASA program manager of Centennial Challenges, watches as robots attempt the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  9. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    The team Survey robot retrieves a sample during a demonstration of the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  10. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The team AERO robot drives off the starting platform during the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  11. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    Team Cephal's robot is seen on the starting platform during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  12. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Oregon State University Mars Rover Team's robot is seen during level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  13. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-10

    Jerry Waechter of team Middleman from Dunedin, Florida, works on their robot named Ro-Bear during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Middleman is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  14. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    A robot from the Intrepid Systems team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  15. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    A team KuuKulgur robot is seen as it begins the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  16. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The team Mountaineers robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  17. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Members of the Oregon State University Mars Rover Team prepare their robot to attempt the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  18. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Stellar Automation Systems team poses for a picture with their robot after attempting the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  19. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    The team Survey robot is seen as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  20. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    All four of team KuuKulgur's robots are seen as they attempt the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
