Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Zornetzer, Steve; Gage, Douglas
2005-01-01
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
Control of autonomous robot using neural networks
NASA Astrophysics Data System (ADS)
Barton, Adam; Volna, Eva
2017-07-01
The aim of this article is to design a method of controlling an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and surveys current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network and the generation and filtering of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot that solves the problem of avoiding obstacles in its environment. To verify the models of autonomous robot behavior, a set of experiments and evaluation criteria was created. The speed of each motor was adjusted by the controlling neural network according to the situation in which the robot found itself.
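The ART1 filtering step described above can be sketched in a few lines: binary situation patterns that resonate with an existing category merge into its prototype, so the filtered training set keeps only genuinely distinct situations. This is a minimal single-pass ART1 sketch with an illustrative vigilance value, not the authors' implementation.

```python
import numpy as np

def art1_filter(patterns, vigilance=0.7):
    """Cluster binary patterns with ART1; return one prototype per category.

    Used here to prune near-duplicate training examples: a pattern that
    resonates with an existing category updates that category's prototype
    instead of creating a new one.
    """
    prototypes = []  # binary prototype vectors, one per category
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        placed = False
        # rank categories by the choice function |p AND w| / (0.5 + |w|)
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(p & prototypes[j]).sum()
                                      / (0.5 + prototypes[j].sum()))
        for j in order:
            match = (p & prototypes[j]).sum() / max(p.sum(), 1)
            if match >= vigilance:                 # vigilance (resonance) test
                prototypes[j] = p & prototypes[j]  # fast-learning update
                placed = True
                break
        if not placed:
            prototypes.append(p)                   # open a new category
    return [w.astype(int) for w in prototypes]
```

Feeding in two near-identical obstacle situations and one distinct one yields two prototypes, i.e., the duplicate is filtered out.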
Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie
2014-12-01
Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous control of rescue robots in disaster environments by allowing a human operator to cooperate with a rescue robot and share tasks such as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploring unknown disaster scenes. A direction-based exploration technique is integrated into the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller in unknown cluttered scenes of different sizes and varying configurations.
Neuromodulation as a Robot Controller: A Brain Inspired Strategy for Controlling Autonomous Robots
2009-09-01
To appear in IEEE Robotics and Automation Magazine (preprint). ...We present a strategy for controlling autonomous robots that is based on principles of neuromodulation in the mammalian brain...object, ignore irrelevant distractions, and respond quickly and appropriately to the event [1]. There are separate neuromodulators that alter responses to
Mamdani Fuzzy System for Indoor Autonomous Mobile Robot
NASA Astrophysics Data System (ADS)
Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.
2011-06-01
Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the use of non-analytical computing methods such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
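As a concrete illustration of the Mamdani scheme, the sketch below uses triangular membership functions, min for rule firing, max for aggregation, and centroid defuzzification to map obstacle distance to speed. The rule base, universes, and membership parameters are invented for illustration; they are not the paper's calibrated controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_speed(distance):
    """Mamdani inference with two illustrative rules:
       IF distance is NEAR THEN speed is SLOW
       IF distance is FAR  THEN speed is FAST
    Returns a crisp speed in m/s via centroid defuzzification.
    """
    near = tri(distance, -0.1, 0.0, 2.0)          # membership of 'near'
    far = tri(distance, 0.0, 2.0, 2.1)            # membership of 'far'
    num = den = 0.0
    for v in [i * 0.01 for i in range(101)]:      # speed universe 0..1 m/s
        slow = tri(v, -0.1, 0.0, 1.0)
        fast = tri(v, 0.0, 1.0, 1.1)
        # clip each consequent by its rule strength (min), aggregate by max
        mu = max(min(near, slow), min(far, fast))
        num += mu * v
        den += mu
    return num / den if den else 0.0
```

A nearby obstacle yields a low commanded speed and a distant one a high speed, which is the qualitative behavior a navigation rule base aims for.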
Autonomous robot software development using simple software components
NASA Astrophysics Data System (ADS)
Burke, Thomas M.; Chung, Chan-Jin
2004-10-01
Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and implementable by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.
Feasibility of Synergy-Based Exoskeleton Robot Control in Hemiplegia.
Hassan, Modar; Kadone, Hideki; Ueno, Tomoyuki; Hada, Yasushi; Sankai, Yoshiyuki; Suzuki, Kenji
2018-06-01
Here, we present a study on exoskeleton robot control based on inter-limb locomotor synergies using a robot control method developed to target hemiparesis. The robot control is based on inter-limb locomotor synergies and kinesiological information from the non-paretic leg and a walking aid cane to generate motion patterns for the assisted leg. The developed synergy-based system was tested against an autonomous robot control system in five patients with hemiparesis and varying locomotor abilities. Three of the participants were able to walk using the robot. Results from these participants showed an improved spatial symmetry ratio and more consistent step length with the synergy-based method compared with that for the autonomous method, while the increase in the range of motion for the assisted joints was larger with the autonomous system. The kinematic synergy distribution of the participants walking without the robot suggests a relationship between each participant's synergy distribution and his/her ability to control the robot: participants with two independent synergies accounting for approximately 80% of the data variability were able to walk with the robot. This observation was not consistently apparent with conventional clinical measures such as the Brunnstrom stages. This paper contributes to the field of robot-assisted locomotion therapy by introducing the concept of inter-limb synergies, demonstrating performance differences between synergy-based and autonomous robot control, and investigating the range of disability in which the system is usable.
Development of autonomous eating mechanism for biomimetic robots
NASA Astrophysics Data System (ADS)
Jeong, Kil-Woong; Cho, Ik-Jin; Lee, Yun-Jung
2005-12-01
Most recently developed robots are human-friendly robots that imitate animals or humans, such as entertainment robots, biomimetic robots, and humanoid robots. Interest in these robots is increasing as social trends focus on health, welfare, and aging. Autonomous eating is one of the most distinctive and inherent behaviors of pets and animals. Most entertainment and pet robots use an internal battery and cannot operate while the battery is charging. Therefore, if a robot can autonomously eat batteries as its feed, it can not only operate while recharging but also become more human-friendly, like a pet. Here, a new autonomous eating mechanism is introduced for a biomimetic robot called ELIRO-II (Eating LIzard RObot version 2). The ELIRO-II is able to find food (a small battery), eat, and evacuate by itself. This work describes the sub-parts of the developed mechanism, such as the head-part, mouth-part, and stomach-part. In addition, the control system of the autonomous eating mechanism is described.
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. The robot's autonomous navigation ability and road-following precision are mainly influenced by its control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots due to their high robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to perceptual environment feature identification and classification, based on the analysis of a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
Tracked robot controllers for climbing obstacles autonomously
NASA Astrophysics Data System (ADS)
Vincent, Isabelle
2009-05-01
Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has had very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box and stair climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.
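The reinforcement learning fallback can be illustrated with a single tabular Q-learning update. The state and action names below are hypothetical stand-ins for track-configuration choices, not the paper's actual state space or algorithm details.

```python
def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q(s, a) toward the observed
    reward plus the discounted value of the best action in the next state.
    Q is a dict keyed by (state, action); unseen pairs default to 0.
    """
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q
```

Repeated over many climbing attempts, updates like this let the robot discover a track configuration that escapes situations where the reactive rules stall.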
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, make decisions, and learn from experience. The advanced inspection system is planned to control a robotic manipulator arm, an unmanned ground vehicle, and cameras remotely, automatically, and autonomously. Many computer vision, image processing, and machine learning techniques are available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components, identify open-source algorithms and techniques, and integrate the robot hardware.
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
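A finite state machine scripting layer of the kind described can be as small as a transition table. The states and events below (loss of the radio link triggering an autonomous survival routine) are illustrative assumptions, not the robot's actual script.

```python
class StateMachine:
    """Minimal finite-state scripting layer for mixing teleoperation with
    autonomous routines. States name behaviours; transitions fire on
    named events; unknown events leave the state unchanged.
    """
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Teleoperation falls back to a hypothetical survival routine on link loss,
# echoing the paper's goal of promoting the robot's own survival.
fsm = StateMachine("teleop", {
    ("teleop", "link_lost"): "auto_survive",
    ("auto_survive", "link_restored"): "teleop",
})
```

Because the table is data, new mission scripts can be loaded onto a microcontroller without changing the interpreter loop.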
Fully decentralized control of a soft-bodied robot inspired by true slime mold.
Umedachi, Takuya; Takeda, Koichi; Nakagaki, Toshiyuki; Kobayashi, Ryo; Ishiguro, Akio
2010-03-01
Animals exhibit astoundingly adaptive and supple locomotion under real-world constraints. In order to endow robots with similar capabilities, we must implement many degrees of freedom, equivalent to animals', into the robots' bodies. For taming many degrees of freedom, the concept of autonomous decentralized control plays a pivotal role. However, a systematic way of designing such autonomous decentralized control systems is still missing. Aiming at understanding the principles that underlie animals' locomotion, we have focused on a true slime mold, a primitive living organism, and extracted a design scheme for an autonomous decentralized control system. In order to validate this design scheme, this article presents a soft-bodied amoeboid robot inspired by the true slime mold. Significant features of this robot are twofold: (1) the robot has a truly soft and deformable body stemming from real-time tunable springs and protoplasm, the former used for the outer skin of the body and the latter to satisfy the law of conservation of mass; and (2) fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism, realized by exploiting the long-distance physical interaction between body parts stemming from the law of conservation of protoplasmic mass. Simulation results show that this robot exhibits highly supple and adaptive locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on design methodology for autonomous decentralized control systems.
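The flavor of such decentralized control can be conveyed with sine-coupled phase oscillators on a chain: each unit senses only its immediate neighbours, yet the whole body converges to a common rhythm with no central coordinator. This is a generic coupled-oscillator sketch, not the paper's protoplasm-coupled model.

```python
import math

def step_oscillators(phases, coupling, dt=0.05, omega=1.0):
    """One Euler step of sine-coupled phase oscillators on a chain.
    Each unit adjusts its phase using only its immediate neighbours
    (purely local feedback).
    """
    n = len(phases)
    out = []
    for i in range(n):
        dphi = omega                                   # intrinsic frequency
        if i > 0:
            dphi += coupling * math.sin(phases[i - 1] - phases[i])
        if i < n - 1:
            dphi += coupling * math.sin(phases[i + 1] - phases[i])
        out.append(phases[i] + dt * dphi)
    return out

def sync_order(phases):
    """Kuramoto order parameter: magnitude 1.0 when all phases coincide."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)
```

Iterating the step from spread-out initial phases drives the order parameter toward 1, i.e., the chain synchronizes using local interactions only.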
Supervisory autonomous local-remote control system design: Near-term and far-term applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul
1993-01-01
The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.
An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence
NASA Technical Reports Server (NTRS)
Lindley, Craig A.
1993-01-01
This paper describes a method for the design of autonomous spacecraft, based upon behavioral approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Agency technological center (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioral control framework, and different mappings from the logical control model to its physical implementation.
Framework and Method for Controlling a Robotic System Using a Distributed Computer Network
NASA Technical Reports Server (NTRS)
Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)
2015-01-01
A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
NASA Technical Reports Server (NTRS)
Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.
1994-01-01
A Modular Autonomous Robotic System (MARS) is being developed, consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control, together with a modular payload supporting multiple applications. The MARS design is scalable, reconfigurable, and cost-effective due to the use of modern open-system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute-position autonomous navigation, and relative-position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semi-structured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intra-facility transport applications.
Speed control for a mobile robot
NASA Astrophysics Data System (ADS)
Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. The speed control of the traction motor is essential for safe operation of a mobile robot, since autonomous operation of a vehicle must be safe, runaway-free, and collision-free. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller, and the traction motor is driven via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outside course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.
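A traction-motor speed loop of this kind is commonly a clamped PI controller. The sketch below uses illustrative gains and a toy first-order motor model; it is not the EV-1 hardware interface.

```python
class PISpeedController:
    """Discrete PI speed loop for a traction motor (illustrative gains).

    The throttle command is clamped to [0, 1] so a runaway error cannot
    saturate the drive, mirroring the safety emphasis of the design.
    """
    def __init__(self, kp=0.5, ki=0.8, dt=0.02):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        return min(max(u, 0.0), 1.0)   # clamp throttle to a safe range
```

Closing the loop around a simple first-order motor model drives the measured speed to the setpoint with zero steady-state error, thanks to the integral term.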
Autonomous learning in humanoid robotics through mental imagery.
Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo
2013-05-01
In this paper we focus on modeling autonomous learning to improve the performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation.
SLAM algorithm applied to robotics assistance for navigation in unknown environments.
Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo
2010-02-17
The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for people with disabilities or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners - concave and convex - of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start, and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI.
The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI, and real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow future autonomous navigation without direct control by the user, whose role could be reduced to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation.
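The correction step of a sequential feature-based EKF-SLAM, as used here for corner features, can be sketched as a single range-bearing update over the joint robot-landmark state. The state layout, Jacobian, and the noise values in the test are a generic textbook sketch, not the paper's implementation.

```python
import numpy as np

def ekf_update(x, P, z, landmark_idx, R):
    """One sequential EKF-SLAM correction with a range-bearing observation
    of a point (corner) feature. State x = [xr, yr, th, lx0, ly0, ...].
    """
    xr, yr, th = x[0], x[1], x[2]
    j = 3 + 2 * landmark_idx
    dx, dy = x[j] - xr, x[j + 1] - yr
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    zhat = np.array([r, np.arctan2(dy, dx) - th])    # predicted measurement
    # Jacobian of [range, bearing] w.r.t. the full state (sparse: pose + one feature)
    H = np.zeros((2, len(x)))
    H[:, 0:3] = np.array([[-dx / r, -dy / r, 0.0],
                          [dy / q, -dx / q, -1.0]])
    H[:, j:j + 2] = np.array([[dx / r, dy / r],
                              [-dy / q, dx / q]])
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    innov = z - zhat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    x_new = x + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Each observation pulls the feature estimate toward the measurement and shrinks the joint covariance, which is what gradually builds the consistent metric map.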
Technology transfer: Imaging tracker to robotic controller
NASA Technical Reports Server (NTRS)
Otaguro, M. S.; Kesler, L. O.; Land, Ken; Erwin, Harry; Rhoades, Don
1988-01-01
The transformation of an imaging tracker into a robotic controller is described. A multimode tracker was developed for fire-and-forget missile systems; it locks onto target images within an acquisition window, using multiple image tracking algorithms to provide guidance commands to missile control systems. This basic tracker technology is used, with the addition of a ranging algorithm based on sizing a cooperative target, to perform autonomous guidance and control of a platform for an Advanced Development Project on automation and robotics. A ranging tracker is required to provide the positioning necessary for robotic control. A simple functional demonstration of the feasibility of this approach was performed and is described. More realistic demonstrations are under way at NASA-JSC. In particular, this modified tracker, or robotic controller, will be used to autonomously guide the Manned Maneuvering Unit (MMU) to targets such as disabled astronauts or tools as part of the EVA Retriever efforts. It will also be used to control the orbiter's Remote Manipulator System (RMS) in autonomous approach and positioning demonstrations. These efforts are also discussed.
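Ranging by sizing a cooperative target reduces, under a pinhole camera model, to a one-line similar-triangles estimate: the range is the focal length times the target's known physical width divided by its apparent width in pixels. The function and parameter names below are ours, not from the paper.

```python
def range_from_size(focal_px, true_width_m, image_width_px):
    """Pinhole-model range estimate from the apparent size of a target
    of known physical width (the 'sizing a cooperative target' idea).

    focal_px       -- camera focal length expressed in pixels
    true_width_m   -- known physical width of the cooperative target (m)
    image_width_px -- measured width of the target in the image (pixels)
    """
    return focal_px * true_width_m / image_width_px
```

For example, a 0.5 m target imaged 40 pixels wide by an 800-pixel focal length camera is estimated at 10 m, which is the kind of positioning input a robotic controller needs beyond bare tracking.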
Teleautonomous guidance for mobile robots
NASA Technical Reports Server (NTRS)
Borenstein, J.; Koren, Y.
1990-01-01
Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
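The VFF idea can be sketched directly: occupied cells of the certainty grid repel the robot with forces that fall off with distance squared, while the target applies a constant-magnitude attraction, and the resultant vector sets the steering direction. The constants and the exact force law here are illustrative, not the published parameterization.

```python
import math

def vff_direction(robot, target, obstacles, f_rep=1.0, f_att=1.0):
    """Virtual force field steering sketch.

    robot, target -- (x, y) positions
    obstacles     -- iterable of (x, y, certainty) grid cells
    Returns the commanded heading (radians) of the resultant force.
    """
    fx = fy = 0.0
    for ox, oy, certainty in obstacles:         # cells of the certainty grid
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if d > 1e-9:
            mag = f_rep * certainty / (d * d)   # inverse-square repulsion
            fx += mag * dx / d
            fy += mag * dy / d
    dx, dy = target[0] - robot[0], target[1] - robot[1]
    d = math.hypot(dx, dy)
    fx += f_att * dx / d                        # constant-magnitude attraction
    fy += f_att * dy / d
    return math.atan2(fy, fx)                   # commanded heading
```

With no obstacles the heading points straight at the target; an obstacle slightly above the direct path deflects the command below it, which is the gradual, non-stopping avoidance the method is known for.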
Equipment Proposal for the Autonomous Vehicle Systems Laboratory at UIW
2015-04-29
testing, 5) 38 Lego Mindstorms EV3 kits and HiTechnic sensors for use in feedback control and autonomous systems for STEM undergraduate and High School...autonomous robots using the Lego Mindstorms EV3. This robotics workshop will be used as a pilot study for next summer when more High School students
Crew/Robot Coordinated Planetary EVA Operations at a Lunar Base Analog Site
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Bluethmann, W. J.; Delgado, F. J.; Herrera, E.; Kosmo, J. J.; Janoiko, B. A.; Wilcox, B. H.; Townsend, J. A.; Matthews, J. B.;
2007-01-01
Under the direction of NASA's Exploration Technology Development Program, robots and space suited subjects from several NASA centers recently completed a very successful demonstration of coordinated activities indicative of base camp operations on the lunar surface. For these activities, NASA chose a site near Meteor Crater, Arizona close to where Apollo Astronauts previously trained. The main scenario demonstrated crew returning from a planetary EVA (extra-vehicular activity) to a temporary base camp and entering a pressurized rover compartment while robots performed tasks in preparation for the next EVA. Scenario tasks included: rover operations under direct human control and autonomous modes, crew ingress and egress activities, autonomous robotic payload removal and stowage operations under both local control and remote control from Houston, and autonomous robotic navigation and inspection. In addition to the main scenario, participants had an opportunity to explore additional robotic operations: hill climbing, maneuvering heavy loads, gathering geological samples, drilling, and tether operations. In this analog environment, the suited subjects and robots experienced high levels of dust, rough terrain, and harsh lighting.
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor)
1990-01-01
Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.
On-Line Point Positioning with Single Frame Camera Data
1992-03-15
tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data The project has been defined as "On-line point...development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation...Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile
Rice-obot 1: An intelligent autonomous mobile robot
NASA Technical Reports Server (NTRS)
Defigueiredo, R.; Ciscon, L.; Berberian, D.
1989-01-01
The Rice-obot I is the first in a series of Intelligent Autonomous Mobile Robots (IAMRs) being developed at Rice University's Cooperative Intelligent Mobile Robots (CIMR) lab. The Rice-obot I is mainly designed to be a testbed for various robotic and AI techniques, and a platform for developing intelligent control systems for exploratory robots. Researchers present the need for a generalized environment capable of combining all of the control, sensory and knowledge systems of an IAMR. They introduce Lisp-Nodes as such a system, and develop the basic concepts of nodes, messages and classes. Furthermore, they show how the control system of the Rice-obot I is implemented as sub-systems in Lisp-Nodes.
Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.
Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam
2016-01-01
An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of the Saturation and Value components of HSV color space was incorporated, enhanced further using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high-quality images and about 80% for those suffering from poor lighting and/or blood, water and smoke noise. Reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200 ms. It was concluded that, with further development, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
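As a rough illustration of the compound Saturation/Value idea above, the sketch below scores each pixel of an HSV image (metallic instruments tend to have low saturation and high value) and takes the centroid of the segmented region as a crude tip proxy. The score formula, the threshold, and the centroid heuristic are illustrative assumptions, not the paper's tuned algorithm.

```python
import numpy as np

def segment_instrument(hsv, thresh=0.6):
    # Compound S/V measure: high for low-saturation, high-value pixels,
    # as metallic instrument shafts typically are. The (1 - S) * V form
    # and the 0.6 cut-off are our own illustrative choices.
    s, v = hsv[..., 1], hsv[..., 2]
    return (1.0 - s) * v > thresh

def tip_estimate(mask):
    # Crude proxy for the tip: centroid (x, y) of the segmented pixels.
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean()) if xs.size else None
```

In the paper's pipeline the estimated tip position would then drive the controller that recenters the camera; here it is only a standalone image-processing sketch.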
Imitative Robotic Control: The Puppet Master
2014-07-09
…puppet style control device and the lessons learned while implementing such a device. …mission to be completed in a quick, accurate and efficient manner. This paper outlines the potential features of a puppet style control device and the lessons learned while implementing such a device. INTRODUCTION: As ground robotics moves towards autonomous and semi-autonomous operations, the…
Open Issues in Evolutionary Robotics.
Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.
An integrated design and fabrication strategy for entirely soft, autonomous robots.
Wehner, Michael; Truby, Ryan L; Fitzgerald, Daniel J; Mosadegh, Bobak; Whitesides, George M; Lewis, Jennifer A; Wood, Robert J
2016-08-25
Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.
SLAM algorithm applied to robotics assistance for navigation in unknown environments
2010-01-01
Background: The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods: In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start, and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results: The entire system was tested with seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. 
The SLAM results showed a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions: The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation. PMID:20163735
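A minimal sketch of the sequential EKF machinery underlying such a feature-based SLAM system is shown below: a unicycle-model prediction step and one measurement update of a landmark in the joint state. It is a generic EKF fragment under simplifying assumptions (a direct landmark observation model rather than the paper's line/corner features), not the authors' implementation.

```python
import numpy as np

def ekf_predict(mu, Sigma, v, w, dt, Q):
    # Propagate the joint [robot pose; landmarks] state with unicycle
    # kinematics; landmarks are static, so only the pose block changes.
    x, y, th = mu[:3]
    mu = mu.copy()
    mu[0] += v * dt * np.cos(th)
    mu[1] += v * dt * np.sin(th)
    mu[2] += w * dt
    F = np.eye(len(mu))                      # motion Jacobian
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    Sigma = F @ Sigma @ F.T
    Sigma[:3, :3] += Q                       # process noise on the pose
    return mu, Sigma

def ekf_update(mu, Sigma, z, H, R):
    # One sequential measurement update for observation z = H mu + noise.
    S = H @ Sigma @ H.T + R                  # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)       # Kalman gain
    mu = mu + K @ (z - H @ mu)
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma
```

Each observed feature tightens the landmark estimate and shrinks its covariance, which is how the global metric map becomes consistent over time.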
Autonomous stair-climbing with miniature jumping robots.
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
2005-04-01
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.
RoMPS concept review automatic control of space robot, volume 2
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1991-01-01
Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.
Distance-Based Behaviors for Low-Complexity Control in Multiagent Robotics
NASA Astrophysics Data System (ADS)
Pierpaoli, Pietro
Several biological examples show that living organisms cooperate to collectively accomplish tasks impossible for single individuals. More importantly, this coordination is often achieved with a very limited set of information. Inspired by these observations, research on autonomous systems has focused on the development of distributed techniques for the control and guidance of groups of autonomous mobile agents, or robots. From an engineering perspective, when coordination and cooperation are sought in large ensembles of robotic vehicles, a reduction in hardware and algorithmic complexity becomes mandatory from the very early stages of project design. The search for solutions that lower power consumption and cost while increasing reliability is thus worth investigating. In this work, we studied low-complexity techniques to achieve cohesion and control in swarms of autonomous robots. Starting from an inspiring two-agent example, we introduced the effects of neighbors' relative positions on the control of an autonomous agent. The extension of this intuition addressed the control of large ensembles of autonomous vehicles, and was applied in the form of a herding-like technique. To this end, a low-complexity distance-based aggregation protocol was defined. We first showed that our protocol produced cohesive aggregation among the agents while avoiding inter-agent collisions. Then, a feedback leader-follower architecture was introduced for control of the swarm. We also described how proximity measures and the probability of collisions with neighbors can be used as a source of information in highly populated environments.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.
Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning
2018-03-16
A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion-characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, tests verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TTD). In a short-distance environment, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and can be extended to other similar multi-link robots.
A New Simulation Framework for Autonomy in Robotic Missions
NASA Technical Reports Server (NTRS)
Flueckiger, Lorenzo; Neukom, Christian
2003-01-01
Autonomy is a key factor in remote robotic exploration and there is significant activity addressing the application of autonomy to remote robots. It has become increasingly important to have simulation tools available to test autonomy algorithms. While industrial robotics benefits from a variety of high-quality simulation tools, researchers developing autonomous software are still dependent primarily on block-world simulations. The Mission Simulation Facility (MSF) project addresses this shortcoming with a simulation toolkit that will enable developers of autonomous control systems to test their system's performance against a set of integrated, standardized simulations of NASA mission scenarios. MSF provides a distributed architecture that connects the autonomous system to a set of simulated components replacing the robot hardware and its environment.
Motor-response learning at a process control panel by an autonomous robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; de Saussure, G.; Lyness, E.
1988-01-01
The Center for Engineering Systems Advanced Research (CESAR) was founded at Oak Ridge National Laboratory (ORNL) by the Department of Energy's Office of Energy Research/Division of Engineering and Geoscience (DOE-OER/DEG) to conduct basic research in the area of intelligent machines. Researchers at the CESAR Laboratory are engaged in a variety of research activities in the field of machine learning. In this paper, we describe our approach to a class of machine learning which involves motor response acquisition using feedback from trial-and-error learning. Our formulation is being experimentally validated using an autonomous robot learning the tasks of control panel monitoring and manipulation to effect process control. The CLIPS Expert System and the associated knowledge base used by the robot in the learning process, which reside in a hypercube computer aboard the robot, are described in detail. Benchmark testing of the learning process on a robot/control-panel simulation system consisting of two intercommunicating computers is presented, along with results of sample problems used to train and test the expert system. These data illustrate machine learning and the resulting performance improvement in the robot for problems similar to, but not identical with, those on which the robot was trained. Conclusions are drawn concerning the learning problems, and implications for future work on machine learning for autonomous robots are discussed. 16 refs., 4 figs., 1 tab.
NASA Technical Reports Server (NTRS)
Sandy, Michael
2015-01-01
The Regolith Advanced Surface Systems Operations Robot (RASSOR) Phase 2 is an excavation robot for mining regolith on a planet like Mars. The robot is programmed using the Robot Operating System (ROS) and also uses a physical simulation program called Gazebo. This internship focused on various functions of the software in order to make the robot more polished and efficient. The internship also included work on another project, the Smart Autonomous Sand-Swimming Excavator, a robot designed to dig through sand and extract sample material. The intern worked on programming the Sand-Swimming robot and designing the electrical system to power and control it.
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
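The viscous friction-like alignment term described above can be sketched in a few lines: each agent accelerates toward the mean velocity of its neighbours, and a self-propulsion term keeps speeds near a preferred value. The parameter values and the simple Euler integration are illustrative assumptions, not the paper's calibrated model (which also includes communication delay, sensor noise and inertial effects).

```python
import numpy as np

def flocking_step(pos, vel, dt=0.05, r=1.5, c_frict=5.0, v0=1.0):
    # One Euler step of a minimal self-propelled flocking model.
    acc = np.zeros_like(vel)
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < r)        # neighbours within radius r
        if nbr.any():
            # viscous friction-like alignment: pull this agent's
            # velocity toward the local mean velocity
            acc[i] = c_frict * (vel[nbr].mean(axis=0) - vel[i])
    vel = vel + acc * dt
    # self-propulsion: relax speeds to the preferred speed v0
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = vel / np.maximum(speed, 1e-9) * v0
    return pos + vel * dt, vel
```

Iterating this step from misaligned initial velocities drives neighbouring agents to a common heading, which is the stabilizing role the alignment term plays in the noisy, delayed real system.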
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team KuuKulgur waits to begin the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty
NASA Astrophysics Data System (ADS)
Armah, Stephen Kofi
Autonomous control of mobile robots has attracted considerable attention from researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is the development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust and/or adaptive navigation control systems. In spite of the enormous body of reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for the X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output. 
Secondly, white-box and black-box approaches are used to obtain linearized second-order altitude models for the quadrotor, the AR.Drone 2.0. Proportional (P), pole-placement or proportional-plus-velocity (PV), linear quadratic regulator (LQR), and model reference adaptive control (MRAC) controllers are designed and validated through simulations using MATLAB/Simulink. Control input saturation and time delay in the controlled systems are also studied. A MATLAB graphical user interface (GUI) and Simulink programs are developed to implement the controllers on the drone. Thirdly, the time delay in the drone's control system is estimated using analytical and experimental methods. In the experimental approach, the transient properties of the experimental altitude responses are compared to those of simulated responses. The analytical approach makes use of the Lambert W function to obtain analytical solutions of scalar first-order delay differential equations (DDEs). A time-delayed P-feedback control system (retarded type) is used in estimating the time delay. Improved system performance is then obtained by incorporating the estimated time delay into the design of the PV control system (neutral type) and the PV-MRAC control system. Furthermore, the stability of a parametrically perturbed linear time-invariant (LTI) retarded-type system is studied by analytically calculating the stability radius of the system. Simulation of the control system is conducted to confirm the stability. This robust control design and uncertainty analysis are conducted for first-order and second-order quadrotor models. Lastly, the robustly designed PV and PV-MRAC control systems are used to autonomously track multiple waypoints. Also, the robustness of the PV-MRAC controller is tested against a baseline PV controller using the payload capability of the drone. It is shown that PV-MRAC offers several benefits over the fixed-gain approach of the PV controller. 
The adaptive control is found to offer enhanced robustness to the payload fluctuations.
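As a toy illustration of the PV (proportional-plus-velocity) altitude control studied in this work, the sketch below simulates a double-integrator altitude model z'' = u under the law u = kp(z_ref - z) - kv*zdot; with kp = kv = 4 both closed-loop poles sit at s = -2 (critically damped). The gains, the idealized double-integrator model, and the absence of time delay are simplifying assumptions, not the dissertation's identified AR.Drone 2.0 model.

```python
def simulate_pv_altitude(z_ref=1.0, kp=4.0, kv=4.0, dt=0.001, t_end=5.0):
    # Semi-implicit Euler simulation of z'' = u with PV feedback.
    # kp = kv = 4 gives characteristic polynomial s^2 + 4s + 4 = (s + 2)^2.
    z, zdot = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (z_ref - z) - kv * zdot   # PV control law
        zdot += u * dt
        z += zdot * dt
    return z
```

With critically damped gains the simulated altitude settles at the reference without overshoot; introducing a loop delay, as the dissertation does, would erode this margin and motivates the Lambert W delay analysis.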
Manifold traversing as a model for learning control of autonomous robots
NASA Technical Reports Server (NTRS)
Szakaly, Zoltan F.; Schenker, Paul S.
1992-01-01
This paper describes a recipe for the construction of control systems that support complex machines such as multi-limbed/multi-fingered robots. The robot has to execute a task under varying environmental conditions and it has to react reasonably when previously unknown conditions are encountered. Its behavior should be learned and/or trained as opposed to being programmed. The paper describes one possible method for organizing the data that the robot has learned by various means. This framework can accept useful operator input even if it does not fully specify what to do, and can combine knowledge from autonomous, operator assisted and programmed experiences.
A simple, inexpensive, and effective implementation of a vision-guided autonomous robot
NASA Astrophysics Data System (ADS)
Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James
2006-10-01
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. The implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course of white boundary lines and orange obstacles in the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks driven by a friction drum system, allowing the robot to perform better on a variety of terrains and resolving issues with the previous year's design. To control the wheelchair while retaining its robust built-in motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and received commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
Welding torch trajectory generation for hull joining using autonomous welding mobile robot
NASA Astrophysics Data System (ADS)
Hascoet, J. Y.; Hamilton, K.; Carabin, G.; Rauch, M.; Alonso, M.; Ares, E.
2012-04-01
Shipbuilding involves highly dangerous manual welding operations; welding of ship hulls presents a hazardous environment for workers. This paper describes a new robotic system, developed by the SHIPWELD consortium, that moves autonomously on the hull and automatically executes the required welding processes. Specific focus is placed on the trajectory control of such a system, which forms the basis for the discussion in this paper. The paper includes a description of the robotic hardware design as well as the methodology used to establish torch trajectory control.
Gaussian Processes for Data-Efficient Learning in Robotics and Control.
Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward
2015-02-01
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
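A bare-bones version of such a GP transition model can be written with a squared-exponential kernel: fit to (state, next-state) pairs, it returns a predictive mean together with the model uncertainty that the paper propagates through long-term planning. The fixed kernel hyperparameters and the one-dimensional setting are simplifying assumptions; PILCO-style methods optimise the hyperparameters and handle multivariate states.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # Squared-exponential kernel with fixed (assumed) hyperparameters.
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(X, y, Xs, noise=1e-4):
    # GP posterior mean and variance at test inputs Xs, given training
    # pairs (X, y) -- e.g. observed states and their successor states.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var
```

The predictive variance is the point: it is small near observed transitions and grows far from them, which is exactly the uncertainty that the paper's planner uses to avoid trusting the model where it has no data.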
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
Bourbakis, N G
1997-01-01
This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic, known or unknown navigation space. In previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, with a control center coordinating and synchronizing their movements. In this work, the robots are considered autonomous: they move anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are (i) visual perception, (ii) range sensors, and (iii) the ability to detect other moving objects in the same free navigation space and to determine those objects' perceived size, velocity, and direction. Based on these assumptions, each robot needs a traffic priority language enabling it to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic-priority alphabet and rules which compose patterns of corridors for the application of the traffic priority rules.
An architectural approach to create self organizing control systems for practical autonomous robots
NASA Technical Reports Server (NTRS)
Greiner, Helen
1991-01-01
For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles to the construction of more general robot control systems are as follows: (1) the growth problem; (2) software generation; (3) interaction with the environment; (4) reliability; and (5) resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harber, K.S.; Pin, F.G.
1990-03-01
The US DOE Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) and the Commissariat a l'Energie Atomique's (CEA) Office de Robotique et Productique within the Directorat a la Valorization are working toward a long-term cooperative agreement and relationship in the area of Intelligent Systems Research (ISR). This report presents the proceedings of the first CESAR/CEA Workshop on Autonomous Mobile Robots, which took place at ORNL on May 30, 31 and June 1, 1989. The purpose of the workshop was to present and discuss methodologies and algorithms under development at the two facilities in the area of perception and navigation for autonomous mobile robots in unstructured environments. Experimental demonstration of the algorithms and comparison of some of their features were proposed to take place within the framework of a previously mutually agreed-upon demonstration scenario, or "base-case." The base-case scenario, described in detail in Appendix A, involved autonomous navigation by the robot in an a priori unknown environment with dynamic obstacles, in order to reach a predetermined goal. From the intermediate goal location, the robot had to search for and locate a control panel, move toward it, and dock in front of the panel face. The CESAR demonstration was successfully accomplished using the HERMIES-IIB robot, while subsets of the CEA demonstration performed using the ARES robot simulation and animation system were presented. The first session of the workshop focused on these experimental demonstrations and on the needs and considerations for establishing "benchmarks" for testing autonomous robot control algorithms.
Manipulator control and mechanization: A telerobot subsystem
NASA Technical Reports Server (NTRS)
Hayati, S.; Wilcox, B.
1987-01-01
The short- and long-term autonomous robot control activities in the Robotics and Teleoperators Research Group at the Jet Propulsion Laboratory (JPL) are described. This group is one of several involved in robotics and is an integral part of a new NASA robotics initiative called Telerobot program. A description of the architecture, hardware and software, and the research direction in manipulator control is given.
Robotic reactions: delay-induced patterns in autonomous vehicle systems.
Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco
2010-02-01
Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model, short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.
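The delayed car-following dynamics referred to above can be made concrete with a minimal simulation. This is an illustrative Python sketch, not the authors' code: it integrates an optimal velocity model with reaction delay tau, where the follower's acceleration depends on the delayed headway and delayed speed; the specific optimal velocity function and parameter values are assumptions chosen for a stable regime.

```python
import math

def V(h, vmax=1.0):
    """Optimal velocity function: near zero at small headway, saturating at vmax."""
    return vmax * (math.tanh(h - 2.0) + math.tanh(2.0)) / (1.0 + math.tanh(2.0))

def simulate(tau=0.2, alpha=1.0, dt=0.01, steps=20000):
    """Euler-integrate a follower with reaction delay tau behind a leader
    cruising at the equilibrium speed V(h_eq); delay is handled with
    history buffers for headway and follower speed."""
    lag = int(round(tau / dt))
    h_eq = 3.0
    v_lead = V(h_eq)
    h_hist = [h_eq + 0.5] * (lag + 1)  # start perturbed from equilibrium
    v_hist = [0.0] * (lag + 1)
    h, v = h_hist[-1], v_hist[-1]
    for _ in range(steps):
        dv = alpha * (V(h_hist[0]) - v_hist[0])  # delayed terms drive the response
        v += dv * dt
        h += (v_lead - v) * dt
        h_hist = h_hist[1:] + [h]
        v_hist = v_hist[1:] + [v]
    return h, v
```

With a small delay and moderate gain the perturbation decays back to equilibrium; increasing tau or alpha past a critical combination is where the delay-induced oscillations discussed in the paper appear.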
Robotics development for the enhancement of space endeavors
NASA Astrophysics Data System (ADS)
Mauceri, A. J.; Clarke, Margaret M.
Telerobotics and robotics development activities to support NASA's goal of increasing opportunities in space commercialization and exploration are described. The Rockwell International activities center on using robotics to improve efficiency and safety in three related areas: remote control of autonomous systems, automated nondestructive evaluation of aspects of vehicle integrity, and the use of robotics in space vehicle ground reprocessing operations. In the first area, autonomous robotic control, Rockwell is using the control architecture NASREM as the foundation for high-level command of robotic tasks. In the second area, we have demonstrated the use of nondestructive evaluation (using acoustic excitation and laser sensors) to evaluate the integrity of space vehicle surface material bonds, using Orbiter 102 as the test case. In the third area, Rockwell is building an automated version of the present manual tool used for Space Shuttle surface tile re-waterproofing. The tool will be integrated into an orbiter processing robot being developed by a KSC-led team.
Development of a semi-autonomous service robot with telerobotic capabilities
NASA Technical Reports Server (NTRS)
Jones, J. E.; White, D. R.
1987-01-01
The importance to the United States of semi-autonomous systems for application to a large number of manufacturing and service processes is very clear. Two principal reasons emerge as the primary driving forces for development of such systems: enhanced national productivity and operation in environments which are hazardous to humans. Completely autonomous systems may not currently be economically feasible. However, autonomous systems that operate in a limited operation domain or that are supervised by humans are within the technology capability of this decade and will likely provide a reasonable return on investment. The two research and development efforts of autonomy and telerobotics are distinctly different, yet interconnected. The first addresses the communication of an intelligent electronic system with a robot, while the second requires human communication and ergonomic consideration. Discussed here are work in robotic control, human/robot team implementation, expert system robot operation, and sensor development by the American Welding Institute, MTS Systems Corporation, and the Colorado School of Mines--Center for Welding Research.
Sandia National Laboratories proof-of-concept robotic security vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, J.J.; Jones, D.P.; Klarer, P.R.
1989-01-01
Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid in testing interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, where applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.
Control Architecture for Robotic Agent Command and Sensing
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel
2008-01-01
Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner.
This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
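The cooperative, consensus-forming combination of behavior recommendations described above can be sketched in a few lines. This is an illustrative Python sketch, not CARACaS code: each behavior scores every action in a shared action set, and a weighted sum picks the consensus action, so all behaviors contribute simultaneously rather than competing for exclusive control. The behavior names and weights below are assumptions.

```python
def blend_behaviors(preferences, weights):
    """Combine per-behavior preference maps over a shared action set into
    a weighted consensus score, and return the best-scoring action."""
    actions = preferences[0].keys()
    return max(actions,
               key=lambda a: sum(w * p[a] for p, w in zip(preferences, weights)))

# Hypothetical behaviors: obstacle avoidance and goal seeking.
avoid = {'forward': 0.1, 'left': 0.9, 'right': 0.4}
seek  = {'forward': 0.8, 'left': 0.3, 'right': 0.3}
action = blend_behaviors([avoid, seek], weights=[0.6, 0.4])
```

Here avoidance carries more weight, so the consensus turns left even though goal seeking alone would drive forward; no single behavior vetoes the others.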
Experiments with a small behaviour controlled planetary rover
NASA Technical Reports Server (NTRS)
Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John
1993-01-01
A series of experiments that were performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly less computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an eight bit processor and uses about ten k of memory.
Bilevel shared control for teleoperators
NASA Technical Reports Server (NTRS)
Hayati, Samad A. (Inventor); Venkataraman, Subramanian T. (Inventor)
1992-01-01
A shared system is disclosed for robot control that integrates the human and autonomous input modalities for improved control. Autonomously planned motion trajectories are modified by a teleoperator to track unmodelled target motions, while nominal teleoperator motions are modified through compliance to accommodate geometric errors autonomously in the latter. A hierarchical shared system intelligently shares control over a remote robot between the autonomous and teleoperative portions of an overall control system. The architecture is hierarchical and consists of two levels: the top level represents the task level, while the bottom represents the execution level. In space applications, the performance of pure teleoperation systems depends significantly on the communication time delays between the local and the remote sites. Selection/mixing matrices are provided with entries that reflect how each input modality's signals are weighted. The shared control minimizes the detrimental effects caused by these time delays between earth and space.
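The selection/mixing idea above reduces, in its simplest diagonal form, to a per-axis weighted sum of the two command sources. This is an illustrative Python sketch, not the patented mechanism: `w_tele[i]` plays the role of the mixing-matrix entry for axis i, with the autonomous planner receiving the complementary weight.

```python
def mix(u_tele, u_auto, w_tele):
    """Per-axis shared control: w_tele[i] weights the operator's command on
    axis i, and (1 - w_tele[i]) weights the autonomous planner's command."""
    return [wt * ut + (1.0 - wt) * ua
            for wt, ut, ua in zip(w_tele, u_tele, u_auto)]
```

For example, an axis with weight 1.0 is purely teleoperated, an axis with weight 0.0 is purely autonomous, and intermediate weights blend the two, which is how compliance can correct operator motions on selected axes only.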
A Biologically Inspired Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
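The stigmergic principle described above (agents read only the current structure and act on local matches, with no messages or blueprint) can be illustrated with a toy one-dimensional build rule. This is a minimal Python sketch under assumed conventions, not the prototyped algorithm: a rule set maps a site's left/right neighborhood to "deposit here".

```python
def stigmergic_step(grid, rules):
    """One agent step: scan the structure and deposit a block (1) at the
    first empty site (0) whose (left, right) neighborhood matches a build
    rule. Coordination happens only through the structure itself."""
    for i in range(1, len(grid) - 1):
        if grid[i] == 0 and (grid[i - 1], grid[i + 1]) in rules:
            grid[i] = 1
            return i
    return None
```

With the single rule "deposit next to a filled cell on the left", repeated steps extend a wall from a seed block, even though no agent knows the final shape, because each deposit creates the local condition that triggers the next one.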
Autonomous intelligent assembly systems LDRD 105746 final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2013-04-01
This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires an integration of a large number of different software technologies including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although ultimately unsuccessful in achieving a target brick-stacking task autonomously, numerous important component technologies were nevertheless developed. Such technologies include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes this project and these developed technologies.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU
Dou, Lihua; Su, Zhong; Liu, Ning
2018-01-01
A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in the application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external assistant nodes, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, the test verifies the proposed method, and the position error is less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot to perform autonomous navigation and positioning in traditional applications, and can be extended to other similar multi-link robots. PMID:29547515
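A motion constraint of the kind used above can be folded into an EKF as a pseudo-measurement: the constrained quantity is "measured" to be zero with very small noise. This is an illustrative Python sketch of that generic technique, not the paper's filter; the two-state example (forward and lateral velocity) and noise values are assumptions.

```python
import numpy as np

def constraint_update(x, P, H, R):
    """Standard EKF measurement update with a zero-valued pseudo-measurement
    z = H x = 0, enforcing a motion constraint (e.g. negligible lateral
    velocity for a link during a constrained gait phase)."""
    z = np.zeros(H.shape[0])
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # pull the constrained state toward 0
    P_new = (np.eye(len(x)) - K @ H) @ P   # reduce uncertainty accordingly
    return x_new, P_new
```

Because the pseudo-measurement noise R is tiny, the update drives the constrained component of the state nearly to zero while leaving unconstrained components untouched, which is how such constraints bound IMU error growth without external aiding.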
Autonomous mobile robotic system for supporting counterterrorist and surveillance operations
NASA Astrophysics Data System (ADS)
Adamczyk, Marek; Bulandra, Kazimierz; Moczulski, Wojciech
2017-10-01
Contemporary research on mobile robots concerns applications to counterterrorist and surveillance operations. The goal is to develop systems that are capable of supporting the police and special forces by carrying out such operations. The paper deals with a dedicated robotic system for surveillance of large objects such as airports, factories, military bases, and many others. The goal is to trace unauthorised persons who try to enter the guarded area, document the intrusion and report it to the surveillance centre, then warn the intruder by sound messages and, if necessary, subdue him/her by stunning with an acoustic effect of great power. The system consists of several parts. An armoured four-wheeled robot assures the required mobility of the system. The robot is equipped with a set of sensors including a 3D mapping system, IR and video cameras, and microphones. It communicates with the central control station (CCS) by means of a wideband wireless encrypted system. The control system of the robot can operate autonomously and under remote control. In the autonomous mode the robot follows the path planned by the CCS. Once an intruder has been detected, the robot can adapt its plan to allow tracking him/her. Furthermore, special procedures for treatment of the intruder are applied, including warning about the breach of the border of the protected area, and incapacitation by an appropriately selected very loud sound until a patrol of guards arrives. If it gets stuck, the robot can contact the operator, who can remotely solve the problem the robot is faced with.
Tegotae-based decentralised control scheme for autonomous gait transition of snake-like robots.
Kano, Takeshi; Yoshizawa, Ryo; Ishiguro, Akio
2017-08-04
Snakes change their locomotion patterns in response to the environment. This ability is a motivation for developing snake-like robots with highly adaptive functionality. In this study, a decentralised control scheme of snake-like robots that exhibited autonomous gait transition (i.e. the transition between concertina locomotion in narrow aisles and scaffold-based locomotion on unstructured terrains) was developed. Additionally, the control scheme was validated via simulations. A key insight revealed is that these locomotion patterns were not preprogrammed but emerged by exploiting Tegotae, a concept that describes the extent to which a perceived reaction matches a generated action. Unlike local reflexive mechanisms proposed previously, the Tegotae-based feedback mechanism enabled the robot to 'selectively' exploit environments beneficial for propulsion, and generated reasonable locomotion patterns. It is expected that the results of this study can form the basis to design robots that can work under unpredictable and unstructured environments.
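The Tegotae concept above quantifies how well a perceived reaction matches a generated action; feedback based on it strengthens behaviors the environment rewards. This is a deliberately minimal scalar Python sketch of that idea, not the paper's decentralised controller (which applies Tegotae within a coupled-oscillator model of each segment); the update rule and rate are assumptions.

```python
def tegotae(action, reaction):
    """Tegotae: the extent to which the perceived reaction matches the
    generated action; positive when the environment supports the action."""
    return action * reaction

def update_gain(gain, action, reaction, rate=0.1):
    """Minimal Tegotae-style feedback: strengthen a behaviour whose action
    is met with a supporting reaction, weaken one that is not."""
    return gain + rate * tegotae(action, reaction)
```

A segment pushing against a scaffold point (action and reaction of the same sign) increases its drive, while pushing into empty space (no or opposing reaction) decreases it, which is the mechanism by which useful contacts are 'selectively' exploited.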
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
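TDL's core construct, hierarchical task decomposition with depth-first execution, can be sketched without the C++ extension itself. This is an illustrative Python sketch of the general pattern, not TDL syntax or the TCM library: a task either performs its own action or decomposes into child tasks, and exceptions naturally propagate up the tree to a parent that can handle them.

```python
class Task:
    """A node in a task tree: runs its own action (if any), then its
    subtasks in order; completion is recorded bottom-up."""
    def __init__(self, name, action=None):
        self.name, self.action, self.children = name, action, []

    def add(self, child):
        self.children.append(child)
        return child

    def run(self, log):
        if self.action:
            self.action()          # leaf-level work
        for child in self.children:
            child.run(log)         # decomposition: delegate to subtasks
        log.append(self.name)      # parent completes after all children
```

A "deliver" task decomposed into "navigate" then "grasp" finishes only after both subtasks do, mirroring the task-decomposition and sequencing support the abstract describes.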
Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D
2009-01-01
Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was applied towards automated movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, and a novel combined range and vision sensor, along with our recent results in controlling the robot, in the real-time detection of objects using their color, and in the processing of the robot's range and vision sensor data for navigation.
NASA Astrophysics Data System (ADS)
Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques
2005-06-01
The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.
Control of a free-flying robot manipulator system
NASA Technical Reports Server (NTRS)
Alexander, H.
1986-01-01
The development and testing of control strategies for self-contained, autonomous free-flying space robots are discussed. Such a robot would perform operations in space similar to those currently handled by astronauts during extravehicular activity (EVA). Use of robots should reduce the expense and danger attending EVA both by providing assistance to astronauts and in many cases by eliminating altogether the need for human EVA, thus greatly enhancing the scope and flexibility of space assembly and repair activities. The focus of the work is to develop and carry out a program of research with a series of physical Satellite Robot Simulator Vehicles (SRSV's), two-dimensionally freely mobile laboratory models of autonomous free-flying space robots such as might perform extravehicular functions associated with operation of a space station or repair of orbiting satellites. It is planned, in a later phase, to extend the research to three dimensions by carrying out experiments in the Space Shuttle cargo bay.
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI-controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802.
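One simple way to realize the blending of BMI-derived intention with autonomous grasping commands described above is a proximity-weighted mix: the user retains full control far from the object, and assistance ramps up as the hand approaches. This is an illustrative Python sketch of that idea, not the study's controller; the linear ramp and blend radius are assumptions.

```python
def shared_command(u_bmi, u_auto, dist, blend_radius=0.15):
    """Blend a BMI-decoded velocity command with the autonomous grasp
    controller's command: w_auto rises linearly from 0 to 1 as the hand
    closes to within blend_radius (meters) of the target object."""
    w_auto = max(0.0, min(1.0, 1.0 - dist / blend_radius))
    return [w_auto * a + (1.0 - w_auto) * b
            for a, b in zip(u_auto, u_bmi)]
```

This preserves the user's sense of agency during transport while letting the vision-guided system dominate during the final approach, when precise alignment matters most for a secure grasp.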
Task-level control for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid
1994-01-01
Task-level control refers to the integration and coordination of planning, perception, and real-time control to achieve given high-level goals. Autonomous mobile robots need task-level control to effectively achieve complex tasks in uncertain, dynamic environments. This paper describes the Task Control Architecture (TCA), an implemented system that provides commonly needed constructs for task-level control. Facilities provided by TCA include distributed communication, task decomposition and sequencing, resource management, monitoring and exception handling. TCA supports a design methodology in which robot systems are developed incrementally, starting first with deliberative plans that work in nominal situations, and then layering them with reactive behaviors that monitor plan execution and handle exceptions. To further support this approach, design and analysis tools are under development to provide ways of graphically viewing the system and validating its behavior.
Adaptive Behavior for Mobile Robots
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2009-01-01
The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.
Full autonomous microline trace robot
NASA Astrophysics Data System (ADS)
Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan
2000-10-01
Optoelectric inspection may find applications in robotic systems. In a micro robotic system, a smaller optoelectric inspection system is preferred. However, as the robot is miniaturized, fewer optoelectric detectors can be accommodated, and this lack of information makes it difficult for the micro robot to determine its status. In our lab, a micro line-trace robot has been designed, which acts autonomously based on its optoelectric detection. It has been programmed to follow a black line printed on white-colored ground. Besides the optoelectric inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence, which is based on an AT89C2051 microcontroller that controls its movement. The technical details of the micro robot are as follows: dimensions: 30 mm x 25 mm x 35 mm; velocity: 60 mm/s.
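A line-trace logic of the kind the abstract describes can be stated in a few branches. This is an illustrative Python sketch of a generic two-detector scheme, not the AT89C2051 firmware: two downward-facing detectors straddle the black line, and steering corrects whichever side drifts onto it; treating both-dark as a line crossing is an assumption.

```python
def line_trace(left_dark, right_dark):
    """Steering decision for two optoelectric detectors straddling a black
    line on white ground."""
    if left_dark and right_dark:
        return 'forward'      # both dark: assume a line crossing, go straight
    if left_dark:
        return 'turn_left'    # line drifting under the left sensor: steer left
    if right_dark:
        return 'turn_right'   # line drifting under the right sensor: steer right
    return 'forward'          # line centered between the detectors
```

With so few sensors, the entire "intelligence" of the robot reduces to this decision table executed every control cycle, which is why the logical algorithm matters as much as the detectors themselves.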
A design strategy for autonomous systems
NASA Technical Reports Server (NTRS)
Forster, Pete
1989-01-01
Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified, namely handling multiple and variable data sources containing overlapping information and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is drawn from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.
Semi autonomous mine detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglas Few; Roelof Versteeg; Herman Herman
2010-04-01
CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, a countermine sensor, and a number of integrated software packages that provide real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator, and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking, and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing, and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude, from an autonomous robotic perspective, the rapid development and deployment of fieldable systems.
Adaptive artificial neural network for autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.
Working and Learning with Knowledge in the Lobes of a Humanoid's Mind
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David
2003-01-01
Humanoid-class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a joint project of NASA and the Defense Advanced Research Projects Agency (DARPA), and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass, and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability), and learning (human instruction, task-level sequences, and sensorimotor association).
A Stigmergic Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.
2004-01-01
In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
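The stigmergic principle described above, agents that perceive only the current state of the structure and coordinate with no direct communication, can be illustrated with a toy grid-world simulation; the grid size, adjacency rule, and random-walk behavior below are assumptions for illustration, not the prototype's actual building algorithm:

```python
import random

def stigmergic_build(grid_size=15, n_agents=4, steps=2000, seed=1):
    """Toy stigmergic construction on a toroidal grid.

    Agents random-walk and deposit a block only where a purely local rule
    fires (here: the cell touches an already-built block). No agent talks
    to any other; coordination emerges through the shared structure alone.
    """
    rng = random.Random(seed)
    built = {(grid_size // 2, grid_size // 2)}           # seed block
    agents = [(rng.randrange(grid_size), rng.randrange(grid_size))
              for _ in range(n_agents)]
    for _ in range(steps):
        for i, (x, y) in enumerate(agents):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % grid_size, (y + dy) % grid_size
            neighbours = [((x + 1) % grid_size, y), ((x - 1) % grid_size, y),
                          (x, (y + 1) % grid_size), (x, (y - 1) % grid_size)]
            if (x, y) not in built and any(n in built for n in neighbours):
                built.add((x, y))                        # local rule fires
            agents[i] = (x, y)
    return built
```

Because every deposit decision reads only the immediate neighbourhood, the resulting structure is connected by construction, mirroring how the paper's agents build from sensed modifications rather than a shared blueprint.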
Laboratory testing of candidate robotic applications for space
NASA Technical Reports Server (NTRS)
Purves, R. B.
1987-01-01
Robots have potential for increasing the value of man's presence in space. Some categories with potential benefit are: (1) performing extravehicular tasks like satellite and station servicing, (2) supporting the science mission of the station by manipulating experiment tasks, and (3) performing intravehicular activities which would be boring, tedious, exacting, or otherwise unpleasant for astronauts. An important issue in space robotics is selection of an appropriate level of autonomy. In broad terms three levels of autonomy can be defined: (1) teleoperated - an operator explicitly controls robot movement; (2) telerobotic - an operator controls the robot directly, but by high-level commands, without, for example, detailed control of trajectories; and (3) autonomous - an operator supplies a single high-level command, the robot does all necessary task sequencing and planning to satisfy the command. Researchers chose three projects for their exploration of technology and implementation issues in space robots, one each of the three application areas, each with a different level of autonomy. The projects were: (1) satellite servicing - teleoperated; (2) laboratory assistant - telerobotic; and (3) on-orbit inventory manager - autonomous. These projects are described and some results of testing are summarized.
External force/velocity control for an autonomous rehabilitation robot
NASA Astrophysics Data System (ADS)
Saekow, Peerayuth; Neranon, Paramin; Smithmaitrie, Pruittikorn
2018-01-01
Stroke is a primary cause of death and the leading cause of permanent disability in adults. Many stroke survivors live with varying levels of disability and need rehabilitation activities on a daily basis. Several studies have reported that rehabilitation robotic devices yield better improvement outcomes in upper-limb stroke patients than conventional therapy, in which nurses or therapists actively help patients with exercise-based rehabilitation. This research focuses on the development of an autonomous robotic trainer designed to guide a stroke patient through an upper-limb rehabilitation task, automating the reaching exercise mentioned above. The system is made up of a four-wheel omni-directional mobile robot, an ATI Gamma multi-axis force/torque sensor used to measure contact force, and a microcontroller running a real-time operating system. Proportional plus integral (PI) control was adopted to govern the overall performance and stability of the autonomous assistive robot, and external force control was implemented to establish the behavioral control strategy for the robot's force and velocity control scheme. The experimental results indicated that the stability and performance of the robot's force and velocity control were acceptable. The gains for the PI velocity control algorithm were estimated using the Ziegler-Nichols method, giving optimized proportional and integral gains of 0.45 and 0.11, respectively. The PI external force control gains were tuned experimentally by trial and error, based on a set of experiments in which a human participant moves the robot along a constrained circular path while attempting to minimize the radial force.
The performance was analyzed based on the root mean square error (E_RMS) of the radial forces: the lower the variation in radial forces, the better the performance of the system. The best performance, as measured by the E_RMS of the radial force, was observed with proportional and integral gains of Kp = 0.7 and Ki = 0.75, respectively.
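The control and evaluation scheme above, PI control plus an RMS measure of the radial force, can be sketched as follows. The discrete-time PI structure is a standard textbook form, not the authors' implementation; only the gain values and the E_RMS idea come from the abstract:

```python
import math

class PIController:
    """Discrete PI controller: u = Kp*e + Ki * integral(e) dt (generic sketch)."""

    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error: float) -> float:
        self.integral += error * self.dt       # rectangular integration
        return self.kp * error + self.ki * self.integral

def e_rms(radial_forces):
    """Root-mean-square of the measured radial forces (target is zero force)."""
    return math.sqrt(sum(f * f for f in radial_forces) / len(radial_forces))

# Gains reported in the abstract: velocity loop (Ziegler-Nichols) and
# external force loop (trial-and-error tuned).
velocity_pi = PIController(kp=0.45, ki=0.11, dt=0.01)
force_pi = PIController(kp=0.7, ki=0.75, dt=0.01)
```

A lower `e_rms` over a trial's radial-force trace corresponds to the better-performing gain set, which is how the abstract ranks the tuned controllers.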
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M.; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.
Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter
NASA Technical Reports Server (NTRS)
Rock, Stephen M.
1999-01-01
This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.
Autonomous intelligent military robots: Army ants, killer bees, and cybernetic soldiers
NASA Astrophysics Data System (ADS)
Finkelstein, Robert
The rationale for developing autonomous intelligent robots in the military is to render conventional warfare systems ineffective and indefensible. The Desert Storm operation demonstrated the effectiveness of such systems as unmanned air and ground vehicles and indicated the future possibilities of robotic technology. Robotic military vehicles would have the advantages of expendability, low cost, lower complexity compared to manned systems, survivability, maneuverability, and a capability to share in instantaneous communication and distributed processing of combat information. Basic characteristics of intelligent systems and hierarchical control systems with sensor inputs are described. Genetic algorithms are seen as a means of achieving appropriate levels of intelligence in a robotic system. Potential impacts of robotic technology in the military are outlined.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team KuuKulgur watches as their robots attempt the level one competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Retrievers team robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Sample Return Robot Challenge staff members confer before the team Survey robot makes its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
A robot from the University of Waterloo Robotics Team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Autonomous caregiver following robotic wheelchair
NASA Astrophysics Data System (ADS)
Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary
2011-12-01
In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves alongside a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured by a camera interfaced with the DM6437 (DaVinci code processor) and processed using image processing techniques; the processed results are converted into voltage levels through a MAX232 level converter and passed serially to the microcontroller unit, while the ultrasonic sensor detects obstacles in front of the robot. A mode-selection switch provides automatic and manual control of the robot: in automatic mode the ultrasonic sensor is used to find obstacles, and in manual mode the keypad is used to operate the wheelchair. The microcontroller unit runs predefined C code, according to which the connected robot is controlled. The robot's several motors are activated by the motor drivers, which are switches that turn the motors on and off according to the control signals from the microcontroller unit.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
MANCINI,THOMAS R.
2001-04-01
Several years ago Sandia National Laboratories developed a prototype interior robot [1] that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities.
On-rail solution for autonomous inspections in electrical substations
NASA Astrophysics Data System (ADS)
Silva, Bruno P. A.; Ferreira, Rafael A. M.; Gomes, Selson C.; Calado, Flavio A. R.; Andrade, Roberto M.; Porto, Matheus P.
2018-05-01
This work presents an alternative solution for autonomous inspections in electrical substations. The autonomous system is a robot that moves on rails, collects infrared and visible images of selected targets, processes the data, and predicts component lifetimes. The robot moves on rails to overcome the difficulties of unpaved substations, which are common in Brazil. We take advantage of the rails to convey the data through them, minimizing electromagnetic interference, while at the same time transmitting electrical energy to feed the autonomous system. As part of the quality control process, we compared thermographic inspections made by the robot with inspections made by a trained thermographer using a scientific Flir® SC660 camera. The results show that the robot achieved satisfactory performance, identifying components and measuring temperature accurately. The embedded routine accounts for weather changes over the course of the day, providing a standardized result for the components' thermal response, and also gives the uncertainty of the temperature measurement, contributing to the quality of the decision-making process.
NASA Technical Reports Server (NTRS)
Whittaker, William; Dowling, Kevin
1994-01-01
Carnegie Mellon University's Autonomous Planetary Exploration Program (APEX) is currently building the Daedalus robot; a system capable of performing extended autonomous planetary exploration missions. Extended autonomy is an important capability because the continued exploration of the Moon, Mars and other solid bodies within the solar system will probably be carried out by autonomous robotic systems. There are a number of reasons for this - the most important of which are the high cost of placing a man in space, the high risk associated with human exploration and communication delays that make teleoperation infeasible. The Daedalus robot represents an evolutionary approach to robot mechanism design and software system architecture. Daedalus incorporates key features from a number of predecessor systems. Using previously proven technologies, the Apex project endeavors to encompass all of the capabilities necessary for robust planetary exploration. The Ambler, a six-legged walking machine was developed by CMU for demonstration of technologies required for planetary exploration. In its five years of life, the Ambler project brought major breakthroughs in various areas of robotic technology. Significant progress was made in: mechanism and control, by introducing a novel gait pattern (circulating gait) and use of orthogonal legs; perception, by developing sophisticated algorithms for map building; and planning, by developing and implementing the Task Control Architecture to coordinate tasks and control complex system functions. The APEX project is the successor of the Ambler project.
Self-organized adaptation of a simple neural circuit enables complex robot behaviour
NASA Astrophysics Data System (ADS)
Steingrube, Silke; Timme, Marc; Wörgötter, Florentin; Manoonpong, Poramate
2010-03-01
Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.
Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.
Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco
2009-01-01
This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
Autonomous Soft Robotic Fish Capable of Escape Maneuvers Using Fluidic Elastomer Actuators.
Marchese, Andrew D; Onal, Cagdas D; Rus, Daniela
2014-03-01
In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.
1987-01-01
An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple-processor/parallel-processing configuration. The system currently interfaces to cameras but can also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section, where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented, with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year, and more realistic demonstrations now being planned are discussed.
Autonomy in Materials Research: A Case Study in Carbon Nanotube Growth (Postprint)
2016-10-21
built an Autonomous Research System (ARES), an autonomous research robot capable of first-of-its-kind closed-loop iterative materials experimentation... ARES exploits advances in autonomous robotics, artificial intelligence, data sciences, and high-throughput and in situ techniques, and is able to... roles of humans and autonomous research robots, and for human-machine partnering. We believe autonomous research robots like ARES constitute a...
Distributing Planning and Control for Teams of Cooperating Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
2004-07-19
This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three phases of research, conducted over the period November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of our control approaches for distributed planning and cooperation in multi-robot teams. The primary objectives of this research project were to: (1) develop autonomous control technologies to enable multiple vehicles to work together cooperatively, (2) provide the foundational capabilities for a human operator to exercise oversight and guidance during multi-vehicle task execution, and (3) integrate these capabilities into the ALLIANCE-based autonomous control approach for multi-robot teams. These objectives have been successfully met, with the results implemented and demonstrated in a near real-time multi-vehicle simulation of up to four vehicles performing mission-relevant tasks.
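ALLIANCE, the behavior-based architecture named above, allocates tasks through motivation levels that grow with a robot's impatience and reset when a task is handled. A highly simplified single-step sketch of that mechanism is shown below; the rates, threshold, and cross-inhibition factor are assumptions for illustration, not the published formulation:

```python
def motivation_update(m: float, impatience_rate: float, task_done: bool,
                      another_robot_active: bool, dt: float = 1.0,
                      threshold: float = 10.0):
    """One update of an ALLIANCE-style motivation level (simplified sketch).

    Motivation toward a task grows with impatience; it grows more slowly
    while another robot is visibly working on the task, and it resets to
    zero once the task is done. The behavior set activates when motivation
    crosses the threshold. Returns (new_motivation, activated).
    """
    if task_done:
        return 0.0, False
    rate = impatience_rate * (0.2 if another_robot_active else 1.0)
    m = m + rate * dt
    return m, m >= threshold
```

In the full architecture each robot runs one such motivational behavior per task, which is what lets a team re-allocate work when a teammate fails or stalls without any central planner.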
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; de Saussure, G.; Spelt, P.F.
1988-01-01
This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a priori unknown and dynamic environments, goal recognition, vision-guided manipulation, and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at a process control panel, read and understand the status of the panel's meters and dials, learn the functioning of the panel, and successfully manipulate its control devices to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.
Human-like Compliance for Dexterous Robot Hands
NASA Technical Reports Server (NTRS)
Jau, Bruno M.
1995-01-01
This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the secondary function of human muscle, which is to control joint stiffness: AEC is implemented by servo-controlling the stiffness of the joint drive train. The control strategy for operating compliant joints in teleoperation is described. It enables automatic hybrid position and force control by utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable to autonomous robot control as well. Active compliance enables dual-arm manipulation and human-like soft grasping by the robot hand, and opens the way to many new robotics applications.
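The stiffness-servo idea behind AEC can be illustrated with a minimal sketch. The class, gains, and contact rule below are illustrative assumptions, not the Jau-JPL implementation: joint torque follows a spring-damper law whose spring constant is itself a controlled variable, softened when contact force is sensed.

```python
class CompliantJoint:
    """Minimal sketch of a servo-adjustable-stiffness joint (AEC-like idea):
    the commanded torque is a spring-damper law, and the spring constant is
    lowered when a compliance/force sensor reports contact (soft grasping)."""

    def __init__(self, stiffness=50.0, damping=5.0):
        self.k = stiffness  # adjustable joint stiffness (Nm/rad)
        self.d = damping    # joint damping (Nm·s/rad)

    def adapt_stiffness(self, sensed_force, soft_k=5.0, threshold=2.0):
        # Crude contact rule: on sensed contact, drop to a compliant gain.
        if abs(sensed_force) > threshold:
            self.k = soft_k

    def torque(self, q, dq, q_des):
        # Spring-damper torque driving the joint toward the commanded position.
        return self.k * (q_des - q) - self.d * dq
```

With these made-up gains, the same position error produces a much smaller torque after contact is sensed, which is the qualitative behavior that enables soft grasping.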
Stochastic receding horizon control: application to an octopedal robot
NASA Astrophysics Data System (ADS)
Shah, Shridhar K.; Tanner, Herbert G.
2013-06-01
Miniature autonomous systems are being developed under ARL's Micro Autonomous Systems and Technology (MAST) program. These systems can only be fitted with a small processor, and their motion behavior is inherently uncertain due to manufacturing variation and platform-ground interactions. One way to capture this uncertainty is through a stochastic model. This paper deals with stochastic motion control design and implementation for MAST-specific eight-legged miniature crawling robots, which have been kinematically modeled as systems exhibiting the behavior of a Dubins car with stochastic noise. The control design takes the form of stochastic receding horizon control, and is implemented on a Gumstix Overo Fire COM with a 720 MHz processor and 512 MB RAM, weighing 5.5 g. The experimental results show the effectiveness of this control law for miniature autonomous systems perturbed by stochastic noise.
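The control structure described here — a Dubins car with stochastic noise under receding horizon control — can be sketched in a few lines. This is a minimal Monte Carlo version with assumed dynamics, noise level, cost, and candidate set, not the authors' implementation:

```python
import math
import random

def dubins_step(state, u, dt=0.1, v=0.3, noise=0.02, rng=None):
    # Dubins-car kinematics (constant speed v, turn-rate input u)
    # with additive Gaussian noise on the turn rate.
    x, y, th = state
    rng = rng or random
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + (u + rng.gauss(0.0, noise)) * dt)

def expected_cost(state, controls, goal, samples=20):
    # Monte Carlo estimate of expected terminal distance to the goal.
    rng = random.Random(0)
    total = 0.0
    for _ in range(samples):
        s = state
        for u in controls:
            s = dubins_step(s, u, rng=rng)
        total += math.hypot(s[0] - goal[0], s[1] - goal[1])
    return total / samples

def receding_horizon(state, goal, horizon=5, steps=60):
    # Each step: score constant-turn-rate candidates over the horizon,
    # apply only the first input of the best one, then re-plan.
    candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
    rng = random.Random(1)
    for _ in range(steps):
        best_u = min(candidates,
                     key=lambda u: expected_cost(state, [u] * horizon, goal))
        state = dubins_step(state, best_u, rng=rng)
    return state
```

Re-planning at every step is what lets the controller absorb the stochastic disturbance: the horizon plan is always recomputed from the actual (perturbed) state rather than the predicted one.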
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The University of Waterloo Robotics Team, from Canada, prepares to place their robot on the start platform during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
The University of Waterloo Robotics Team, from Ontario, Canada, prepares their robot for the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team from the University of Waterloo is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
A team KuuKulgur robot from Estonia is seen on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team KuuKulgur is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Sam Ortega, NASA program manager of Centennial Challenges, watches as robots attempt the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot retrieves a sample during a demonstration of the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The team AERO robot drives off the starting platform during the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Team Cephal's robot is seen on the starting platform during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Oregon State University Mars Rover Team's robot is seen during level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
Jerry Waechter of team Middleman from Dunedin, Florida, works on their robot named Ro-Bear during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Middleman is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
A robot from the Intrepid Systems team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
A team KuuKulgur robot is seen as it begins the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The team Mountaineers robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Members of the Oregon State University Mars Rover Team prepare their robot to attempt the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Stellar Automation Systems team poses for a picture with their robot after attempting the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot is seen as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
All four of team KuuKulgur's robots are seen as they attempt the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Spectators watch as the team Survey robot conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team Middleman's robot, Ro-Bear, is seen as it starts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The team Mountaineers robot is seen after picking up the sample during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Two of team KuuKulgur's robots are seen as they attempt a rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Members of team Survey follow their robot as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
A team KuuKulgur robot approaches the sample as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot is seen on the starting platform before beginning its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Mountaineers team from West Virginia University watches as their robot attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot is seen as it conducts a demonstration of the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Team Survey's robot is seen as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Eye-in-Hand Manipulation for Remote Handling: Experimental Setup
NASA Astrophysics Data System (ADS)
Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador
2018-03-01
A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER) is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture, equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. This setup benefits from a stereovision system for accurate tracking of target objects with irregular shapes. The overall setup successfully demonstrates the robustness and precision that remote handling tasks require.
NASA Technical Reports Server (NTRS)
Chen, Alexander Y.
1990-01-01
The Scientific Research Associates Advanced Robotic System (SRAARS) is an intelligent robotic system with autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It controls sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras, and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. Through consecutive verifying and planning procedures, hypothesis-and-test routines, and a learning-by-analogy paradigm, the system autonomously builds up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis, and object manipulation. The intelligence of SRAARS presents a valuable technical advantage for implementing robotic systems for space exploration and space station operations.
Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)
NASA Technical Reports Server (NTRS)
Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey
1990-01-01
Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules that control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.
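The kind of rule-driven simulation KMARS describes — production rules governing an obstacle-detecting sensor and the exploration of a point robot in a 2D world — can be mimicked in a few lines. The grid, one-cell sensor, and two rules below are illustrative assumptions, not the actual CLIPS rule base:

```python
# Toy 2D world: 0 = free cell, 1 = obstacle. The point robot faces one of
# four headings and carries a one-cell-range obstacle sensor.
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N as (row, col) deltas

def sense_obstacle(pos, heading):
    # Sensor rule support: is the cell directly ahead blocked or out of bounds?
    r, c = pos[0] + heading[0], pos[1] + heading[1]
    if not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])):
        return True
    return GRID[r][c] == 1

def explore(pos=(0, 0), heading_i=0, steps=12):
    # Two CLIPS-style productions fired in priority order each cycle:
    #   Rule 1: obstacle ahead  -> rotate in place.
    #   Rule 2: free cell ahead -> advance and record the new cell.
    visited = {pos}
    for _ in range(steps):
        if sense_obstacle(pos, HEADINGS[heading_i]):
            heading_i = (heading_i + 1) % 4
        else:
            dr, dc = HEADINGS[heading_i]
            pos = (pos[0] + dr, pos[1] + dc)
            visited.add(pos)
    return visited
```

In CLIPS proper the same logic would be two `defrule`s matching sensor facts; the point of the sketch is only the separation between sensor interrogation and the exploration productions that consume it.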
Theseus: tethered distributed robotics (TDR)
NASA Astrophysics Data System (ADS)
Digney, Bruce L.; Penzes, Steven G.
2003-09-01
The Defence Research and Development Canada (DRDC) Autonomous Intelligent Systems program conducts research to increase the independence and effectiveness of military vehicles and systems. DRDC-Suffield's Autonomous Land Systems (ALS) group is creating new concept vehicles and autonomous control systems for use in outdoor areas, urban streets, urban interiors, and urban subspaces. This paper first gives an overview of the ALS program and then a specific description of the work being done on mobility in urban subspaces. Discussed is the Theseus: Tethered Distributed Robotics (TDR) system, which will not only manage an unavoidable tether but exploit it for mobility and navigation. Also discussed is the prototype robot called the Hedgehog, which uses conformal 3D mobility in ducts, sewer pipes, collapsed rubble voids, and chimneys.
Status of DoD Robotic Programs
1985-03-01
...planning or adhere to previously planned routes. Control: controls are microelectronics-based, providing a means of direct autonomous action. Among the projects indexed: Smart Terrain Analysis for Robotic Systems (STARS).
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Sam Ortega, NASA program manager for Centennial Challenges, is seen during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Autonomous mobile robot research using the HERMIES-III robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Beckerman, M.; Spelt, P.F.
1989-01-01
This paper reports on the status and future directions in the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III comprises a battery-powered, omni-directional wheeled platform with a seven-degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner, and a dual computer system containing up to 128 NCUBE nodes in hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations necessarily arising from the set-up of an experimental program involving multiple autonomous mobile robots performing human-scale tasks. 10 refs., 3 figs.
Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators
1993-05-01
Hannibal was designed and built by our lab as an experimental platform to explore planetary micro-rover control issues (Angle 1991). When designing the robot, careful consideration was given to mobility, sensing, and robustness issues.
The Unified Behavior Framework for the Simulation of Autonomous Agents
2015-03-01
Since the 1980s, researchers have designed a variety of robot control architectures intending to imbue robots with some degree of autonomy. The development of autonomy leaves room for research utilizing methods like simulation and modeling, which consume less time and fewer monetary resources, applied to a recently developed reactive framework.
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
NASA Astrophysics Data System (ADS)
Shah, Hitesh K.; Bahl, Vikas; Martin, Jason; Flann, Nicholas S.; Moore, Kevin L.
2002-07-01
In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One outgrowth of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands that are to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (the A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include sequencing the pending tasks or initiating a re-planning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
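The STC's planning function, a graph-based heuristic search using the A* algorithm, can be sketched as follows. The task graph, costs, heuristic values, and command names are invented for illustration; they are not taken from the ODIS grammar.

```python
# Hedged sketch of A* search over a graph of high-level action commands,
# producing a command sequence for a supervisory task controller to execute.
import heapq

def a_star(graph, cost, heuristic, start, goal):
    """Return the cheapest command sequence from start to goal."""
    frontier = [(heuristic[start], 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph[node]:
            g2 = g + cost[(node, nxt)]
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic[nxt], g2, nxt, path + [nxt]))
    return None

# Toy task graph: nodes are hypothetical high-level commands from a grammar.
graph = {"dock": ["transit_a", "transit_b"], "transit_a": ["inspect"],
         "transit_b": ["inspect"], "inspect": []}
cost = {("dock", "transit_a"): 5, ("dock", "transit_b"): 2,
        ("transit_a", "inspect"): 1, ("transit_b", "inspect"): 6}
heuristic = {"dock": 0, "transit_a": 1, "transit_b": 1, "inspect": 0}

plan = a_star(graph, cost, heuristic, "dock", "inspect")
```

On a re-planning event the STC would rebuild the graph from updated sensor data and rerun the same search from the current state.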
Engineering Sensorial Delay to Control Phototaxis and Emergent Collective Behaviors
NASA Astrophysics Data System (ADS)
Mijalkov, Mite; McDaniel, Austin; Wehr, Jan; Volpe, Giovanni
2016-01-01
Collective motions emerging from the interaction of autonomous mobile individuals play a key role in many phenomena, from the growth of bacterial colonies to the coordination of robotic swarms. For these collective behaviors to take hold, the individuals must be able to emit, sense, and react to signals. When dealing with simple organisms and robots, these signals are necessarily very elementary; e.g., a cell might signal its presence by releasing chemicals and a robot by shining light. An additional challenge arises because the motion of the individuals is often noisy; e.g., the orientation of cells can be altered by Brownian motion and that of robots by an uneven terrain. Therefore, the emphasis is on achieving complex and tunable behaviors from simple autonomous agents communicating with each other in robust ways. Here, we show that the delay between sensing and reacting to a signal can determine the individual and collective long-term behavior of autonomous agents whose motion is intrinsically noisy. We experimentally demonstrate that the collective behavior of a group of phototactic robots capable of emitting a radially decaying light field can be tuned from segregation to aggregation and clustering by controlling the delay with which they change their propulsion speed in response to the light intensity they measure. We trace this transition back to the underlying dynamics of this system, in particular, to the ratio between the robots' sensorial delay time and the characteristic time of the robots' random reorientation. Supported by numerics, we discuss how the same mechanism can be applied to control active agents, e.g., airborne drones, moving in a three-dimensional space.
Given the simplicity of this mechanism, the engineering of sensorial delay provides a potentially powerful tool to engineer and dynamically tune the behavior of large ensembles of autonomous mobile agents; furthermore, this mechanism might already be at work within living organisms such as chemotactic cells.
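The core mechanism, reacting to a signal measured some fixed delay earlier, can be illustrated with a minimal one-dimensional sketch. The light field, gains, and update rule below are invented; only the delay-buffer structure (propulsion speed at step t set by the intensity sensed several steps before) reflects the mechanism the abstract describes.

```python
# Toy sketch of delayed sensorial response: a 1-D agent whose speed responds
# to the light intensity it measured `delay` steps earlier. Noise is omitted
# so the delay structure itself is easy to verify.
from collections import deque
import math

def run(delay, steps=30, x0=4.0):
    """Agent in a decaying light field I(x) = exp(-|x|)."""
    x = x0
    buf = deque([math.exp(-abs(x0))] * (delay + 1), maxlen=delay + 1)
    intensities, speeds = [], []
    for _ in range(steps):
        i = math.exp(-abs(x))
        buf.append(i)                 # newest measurement pushed in
        s = 1.0 / (0.1 + buf[0])      # reaction uses the oldest (delayed) one
        intensities.append(i)
        speeds.append(s)
        x -= 0.05 * s                 # drift toward the source at x = 0
    return intensities, speeds

intensities, speeds = run(delay=3)
```

In the paper's setting the sign and size of the delay (relative to the reorientation time) is what tips an ensemble of such noisy agents between aggregation and segregation.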
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Russel Howe of team Survey, center, works on a laptop to prepare the team's robot for a demonstration run after the team's robot failed to leave the starting platform during its attempt at the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Russel Howe of team Survey speaks with Sample Return Robot Challenge staff members after the team's robot failed to leave the starting platform during its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Kenneth Stafford, Assistant Director of Robotics Engineering and Director of the Robotics Resource Center at the Worcester Polytechnic Institute (WPI), verifies the location of the target sample during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
[Service robots in elderly care. Possible application areas and current state of developments].
Graf, B; Heyer, T; Klein, B; Wallhoff, F
2013-08-01
The term "service robotics" describes semi- or fully autonomous technical systems able to perform services useful to the well-being of humans. Service robots have the potential to support and disburden both persons in need of care and nursing care staff. In addition, they can be used in prevention and rehabilitation in order to reduce or avoid the need for help. Products currently available to support people in domestic environments are mainly cleaning robots or remote-controlled communication robots. Examples of current research activities are the (further) development of mobile robots as advanced communication assistants or the development of (semi-)autonomous manipulation aids and multifunctional household assistants. Transport robots are commonly used in many hospitals. In nursing care facilities, the first evaluations have already been made. So-called emotional robots are now sold as products and can be used for therapeutic, occupational, or entertainment activities.
Learning tactile skills through curious exploration
Pape, Leo; Oddo, Calogero M.; Controzzi, Marco; Cipriani, Christian; Förster, Alexander; Carrozza, Maria C.; Schmidhuber, Jürgen
2012-01-01
We present curiosity-driven, autonomous acquisition of tactile exploratory skills on a biomimetic robot finger equipped with an array of microelectromechanical touch sensors. Instead of building tailored algorithms for solving a specific tactile task, we employ a more general curiosity-driven reinforcement learning approach that autonomously learns a set of motor skills in absence of an explicit teacher signal. In this approach, the acquisition of skills is driven by the information content of the sensory input signals relative to a learner that aims at representing sensory inputs using fewer and fewer computational resources. We show that, from initially random exploration of its environment, the robotic system autonomously develops a small set of basic motor skills that lead to different kinds of tactile input. Next, the system learns how to exploit the learned motor skills to solve supervised texture classification tasks. Our approach demonstrates the feasibility of autonomous acquisition of tactile skills on physical robotic platforms through curiosity-driven reinforcement learning, overcomes typical difficulties of engineered solutions for active tactile exploration and underactuated control, and provides a basis for studying developmental learning through intrinsic motivation in robots. PMID:22837748
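The curiosity signal described above can be sketched as an intrinsic reward equal to the learner's prediction progress. This is a deliberate simplification of the compression-progress formulation; the learner, learning rate, and scalar observations are invented for illustration.

```python
# Toy sketch of curiosity-driven learning: intrinsic reward is the reduction
# in prediction error achieved on each sensory input. A fully predictable,
# repeated input yields diminishing curiosity, as the approach intends.

class CuriousLearner:
    def __init__(self, lr=0.5):
        self.estimate, self.lr = 0.0, lr

    def step(self, observation):
        err_before = abs(observation - self.estimate)
        self.estimate += self.lr * (observation - self.estimate)
        err_after = abs(observation - self.estimate)
        return err_before - err_after   # intrinsic reward: prediction progress

learner = CuriousLearner()
rewards = [learner.step(1.0) for _ in range(6)]   # same input each time
```

In the full system this scalar would drive reinforcement learning over motor skills, so the robot seeks inputs it is still learning to represent compactly.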
Stanford Aerospace Research Laboratory research overview
NASA Technical Reports Server (NTRS)
Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.
1993-01-01
Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.
The magic glove: a gesture-based remote controller for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark
2012-01-01
This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful in moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system consists of a microcontroller and sensors worn by the operator as a glove and is capable of recognizing hand signals, which are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first successfully demonstrated in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
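The algorithmic side, mapping a triple-axis accelerometer reading to a vehicle command, might look like the following sketch. The tilt thresholds and command names are hypothetical, not taken from the paper.

```python
# Illustrative hand-orientation classifier for a glove-mounted accelerometer:
# tilt along one axis past a threshold selects a drive command.

def classify(ax, ay, az, tilt=0.5):
    """Map an accelerometer reading (in g units) to a vehicle command."""
    if ay > tilt:
        return "FORWARD"
    if ay < -tilt:
        return "REVERSE"
    if ax > tilt:
        return "TURN_RIGHT"
    if ax < -tilt:
        return "TURN_LEFT"
    return "STOP"      # hand level: no dominant tilt

cmd = classify(0.1, 0.8, 0.6)
```

On the embedded side, the microcontroller would run this classification on each sample and send the resulting command byte over the Bluetooth link.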
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning for trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team AERO, from the Worcester Polytechnic Institute (WPI), transports their robot to the competition field for level one of the competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Robots that will be competing in the level one competition are seen as they sit in impound prior to the start of competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Ahti Heinla, left, and Sulo Kallas, right, from Estonia, prepare team KuuKulgur's robot for the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
A sample can be seen on the competition field as the team Survey robot conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Jascha Little of team Survey is seen as he follows the team's robot as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The University of California Santa Cruz Rover Team poses for a picture with their robot after attempting the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team is one of eighteen competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The University of California Santa Cruz Rover Team's robot is seen prior to starting its second attempt at the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Oregon State University Mars Rover Team poses for a picture with their robot following their attempt at the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team is one of eighteen competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Jim Rothrock, left, and Carrie Johnson, right, of the Wunderkammer Laboratory team pose for a picture with their robot after attempting the level one competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
The Oregon State University Mars Rover Team follows their robot on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Oregon State University Mars Rover Team is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Jerry Waechter of team Middleman from Dunedin, Florida, speaks about his team's robot, Ro-Bear, as it makes its attempt at the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
The Oregon State University Mars Rover Team, from Corvallis, Oregon, follows their robot on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Oregon State University Mars Rover Team is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Cooperative Autonomous Robots for Reconnaissance
2009-03-06
Collaborating mobile robots equipped with WiFi transceivers are configured as a mobile ad-hoc network. Algorithms are developed to take advantage of the distributed processing
A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles
1994-05-02
AD-A282 787. A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles. Alonzo Kelly, CMU-RI-TR-94-17, The Robotics ... follow, or a direction to prefer, it cannot generate its own strategic goals. Therefore, it solves the local planning problem for autonomous vehicles. ... It is intelligent because it uses range images that are generated from either a laser rangefinder or a stereo triangulation
Bilevel Shared Control Of A Remote Robotic Manipulator
NASA Technical Reports Server (NTRS)
Hayati, Samad A.; Venkataraman, Subramanian T.
1992-01-01
Proposed concept blends autonomous and teleoperator control modes, each overcoming deficiencies of the other. Both task-level and execution-level functions performed at local and remote sites. Applicable to systems with long communication delay between local and remote sites or systems intended to function partly autonomously.
Sato, Takahide; Kano, Takeshi; Ishiguro, Akio
2011-06-01
A systematic design method for autonomous decentralized control systems is still lacking, despite the appeal of the concept. To address this, we focused on the amoeboid locomotion of the true slime mold and extracted a design scheme for a decentralized control mechanism, based on the so-called discrepancy function, that leads to adaptive behavior of the entire system. In this paper, we intensively investigate the universality of this design scheme by applying it to a different type of locomotion, following a 'synthetic approach'. As a first step, we apply the design scheme to the control of a real, physical two-dimensional serpentine robot that exhibits slithering locomotion. The experimental results show that the robot behaves adaptively and responds to environmental changes; it is also robust against malfunctions of its body segments, owing to the local sensory feedback control based on the discrepancy function. We expect these results to shed new light on the methodology of autonomous decentralized control systems.
Information Foraging and Change Detection for Automated Science Exploration
NASA Technical Reports Server (NTRS)
Furlong, P. Michael; Dille, Michael
2016-01-01
This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective is to free remote scientists from possibly-infeasible extensive preliminary site investigation prior to sending robotic agents. We simulate a common exploration task for an autonomous robot sampling the environment at various locations and compare performance against simpler control strategies. An extension is proposed and evaluated that further permits operation in the presence of environmental variability in which the robot encounters a change in the distribution underlying sampling targets. Experimental results indicate a strong improvement in performance across varied parameter choices for the scenario.
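The abstract above compares an information-seeking exploration policy against simpler control strategies for a sampling robot. A minimal sketch of that comparison follows; the count-based policy, cell layout, and all names are our own illustration under stated assumptions, not the paper's algorithm:

```python
import random

def explore(n_cells, steps, policy, seed=0):
    """Run a 1-D sampling run and return the set of cells visited.

    policy == "count": information-seeking, always sample the least-visited cell.
    policy == "random": baseline, sample uniformly at random.
    """
    rng = random.Random(seed)
    counts = [0] * n_cells
    visited = set()
    for _ in range(steps):
        if policy == "count":
            cell = min(range(n_cells), key=lambda c: counts[c])
        else:
            cell = rng.randrange(n_cells)
        counts[cell] += 1
        visited.add(cell)
    return visited

# An information-seeking policy covers the site at least as fast as random sampling.
informed = explore(20, 20, "count")
baseline = explore(20, 20, "random")
print(len(informed), len(baseline))
```

With as many steps as cells, the count-based policy attains full coverage while the random baseline typically revisits cells, which is the qualitative gap the paper's experiments quantify.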
NASA Astrophysics Data System (ADS)
Pini, Giovanni; Tuci, Elio
2008-06-01
In biology and psychology, the capability of natural organisms to learn from observation of, and interaction with, conspecifics is referred to as social learning. Roboticists have recently developed an interest in social learning, since it might represent an effective strategy to enhance the adaptivity of a team of autonomous robots. In this study, we show that a methodological approach based on artificial neural networks shaped by evolutionary computation techniques can be successfully employed to synthesise the individual and social learning mechanisms for robots required to learn a desired action (i.e. phototaxis or antiphototaxis).
Controlling multiple security robots in a warehouse environment
NASA Technical Reports Server (NTRS)
Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.
1994-01-01
The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.
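A single-host console coordinating many robot platforms, as in the MRHA, can be sketched as a dispatcher with a fixed capacity; the class and method names here are hypothetical illustrations, not the actual MRHA interfaces:

```python
class RobotProxy:
    """Stand-in for one remote platform reachable from the host console."""
    def __init__(self, name):
        self.name = name
        self.log = []          # commands received by this platform

    def command(self, cmd):
        self.log.append(cmd)
        return f"{self.name}: ack {cmd}"

class HostConsole:
    MAX_ROBOTS = 32            # the MRHA is expandable to 32 robots

    def __init__(self):
        self.robots = {}

    def register(self, robot):
        if len(self.robots) >= self.MAX_ROBOTS:
            raise RuntimeError("host capacity reached")
        self.robots[robot.name] = robot

    def broadcast(self, cmd):
        """Send one command to every registered platform, collect the acks."""
        return [r.command(cmd) for r in self.robots.values()]

console = HostConsole()
for i in range(8):             # initial application: eight K2A Navmaster platforms
    console.register(RobotProxy(f"K2A-{i}"))
print(console.broadcast("patrol")[0])
```

A real implementation would multiplex over radio links and run handlers in separate processes; the capacity check and fan-out structure are the point of the sketch.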
A unified teleoperated-autonomous dual-arm robotic system
NASA Technical Reports Server (NTRS)
Hayati, Samad; Lee, Thomas S.; Tso, Kam Sing; Backes, Paul G.; Lloyd, John
1991-01-01
A complete robot control facility, built as part of a NASA telerobotics program, is described. The facility provides a state-of-the-art robot control environment for performing experiments in the repair and assembly of space-like hardware, both to gain practical knowledge of such work and to improve the associated technology. The basic architecture of the manipulator control subsystem is presented. The multiarm Robot Control C Library (RCCL), a key software component of the system, is described, along with its implementation on a Sun-4 computer. The system's simulation capability is also described, and the teleoperation and shared control features are explained.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The University of California Santa Cruz Rover Team prepares their rover for the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Worcester Polytechnic Institute (WPI) President Laurie Leshin speaks at a breakfast opening the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at WPI in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
David Miller, NASA Chief Technologist, speaks at a breakfast opening the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The entrance to Institute Park is seen during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Sam Ortega, NASA Centennial Challenges Program Manager, speaks at a breakfast opening the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
James Leopore, of team Fetch, from Alexandria, Virginia, speaks with judges as he prepares for the NASA 2014 Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
Autonomous flight control for a Titan exploration aerobot
NASA Technical Reports Server (NTRS)
Elfes, Alberto; Montgomery, James F.; Hall, Jeffrey L.; Joshi, Sanjay S.; Payne, Jeffrey; Bergh, Charles F.
2005-01-01
Robotic lighter-than-air vehicles, or aerobots, provide a strategic platform for the exploration of planets and moons with an atmosphere, such as Venus, Mars, Titan, and the gas giants. In this paper, we discuss steps towards the development of an autonomy architecture and concentrate on the autonomous flight control subsystem.
Bing, Zhenshan; Cheng, Long; Chen, Guang; Röhrbein, Florian; Huang, Kai; Knoll, Alois
2017-04-04
Snake-like robots with 3D locomotion ability have significant advantages over traditional legged or wheeled mobile robots in adapting to diverse, complex terrain. Despite numerous developed gaits, these snake-like robots suffer from unsmooth gait transitions when changing locomotion speed, direction, and body shape, which can cause undesired movement and abnormal torque. Hence, a knowledge gap remains for snake-like robots to achieve autonomous locomotion. To address this problem, this paper presents smooth slithering gait transition control based on a lightweight central pattern generator (CPG) model for snake-like robots. First, based on the convergence behavior of the gradient system, a lightweight CPG model with fast computing time was designed and compared with other widely adopted CPG models. Then, by reshaping the body into a more stable geometry, the slithering gait was modified and studied based on the proposed CPG model, including gait transitions of locomotion speed, moving direction, and body shape. In contrast to the sinusoid-based method, extensive simulations and prototype experiments demonstrated that smooth slithering gait transition can be effectively achieved using the proposed CPG-based control method without generating undesired locomotion or abnormal torque.
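The idea of a CPG producing a smooth gait transition can be illustrated with a generic chain of phase oscillators whose amplitude parameter passes through a first-order filter; the oscillator count, gains, and filter here are our own toy choices, not the paper's gradient-system model:

```python
import math

def cpg_wave(n_joints=8, steps=2000, dt=0.01, freq=1.0, phase_lag=0.5):
    """Chain of phase oscillators producing a traveling-wave (slithering) gait.

    The amplitude tracks its target through a first-order filter, so a
    mid-run parameter change never produces a jump in the joint commands.
    """
    phases = [i * phase_lag for i in range(n_joints)]   # fixed inter-joint lag
    amp, amp_target = 0.0, 30.0                         # degrees
    history = []
    for k in range(steps):
        if k == steps // 2:
            amp_target = 15.0                           # gait transition: shallower wave
        amp += dt * 2.0 * (amp_target - amp)            # low-pass filter on the parameter
        phases = [p + dt * 2 * math.pi * freq for p in phases]
        history.append([amp * math.sin(p) for p in phases])
    return history

wave = cpg_wave()
# joint commands stay smooth even across the amplitude switch at mid-run
deltas = [abs(wave[i][0] - wave[i - 1][0]) for i in range(1, len(wave))]
print(max(deltas) < 3.0)
```

Switching `amp_target` directly into the sine (the sinusoid-based approach) would produce a step in the output; routing it through the filter is what keeps the transition torque-friendly.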
Technologies for Human Exploration
NASA Technical Reports Server (NTRS)
Drake, Bret G.
2014-01-01
Access to Space, Chemical Propulsion, Advanced Propulsion, In-Situ Resource Utilization, Entry, Descent, Landing and Ascent, Humans and Robots Working Together, Autonomous Operations, In-Flight Maintenance, Exploration Mobility, Power Generation, Life Support, Space Suits, Microgravity Countermeasures, Autonomous Medicine, Environmental Control.
Modeling and control of tissue compression and temperature for automation in robot-assisted surgery.
Sinha, Utkarsh; Li, Baichun; Sankaranarayanan, Ganesh
2014-01-01
Robotic surgery is widely used due to its various benefits, which include reduced patient trauma and increased dexterity and ergonomics for the operating surgeon. Making the whole or part of a surgical procedure autonomous increases patient safety and will enable the robotic surgery platform to be used in telesurgery. In this work, an electrosurgery procedure that involves tissue compression and the application of heat, such as coaptic vessel closure, has been automated. A MIMO nonlinear model characterizing tissue stiffness and conductance under compression was feedback-linearized, and tuned PID controllers were used to control the system to satisfy both the displacement and temperature constraints. Reference inputs for both constraints were chosen as ramp-and-hold trajectories, which reflect the real constraints of an actual surgical procedure. Our simulations showed that the controllers successfully tracked the reference trajectories with minimal deviation over a finite time horizon. The MIMO system and controllers developed in this work can be used to drive a surgical robot autonomously and perform electrosurgical procedures such as coaptic vessel closures.
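Tracking a ramp-and-hold reference with a tuned PID loop can be sketched on a toy first-order plant; the plant, gains, and time constants below are invented stand-ins for the tissue displacement/temperature dynamics, not the paper's feedback-linearized model:

```python
def simulate(kp=8.0, ki=20.0, kd=0.5, dt=0.001, t_end=5.0):
    """PID tracking of a ramp-and-hold reference on a first-order plant x' = -x + u."""
    x, integ, prev_err = 0.0, 0.0, 0.0
    errs = []
    steps = int(t_end / dt)
    for k in range(steps):
        t = k * dt
        ref = min(t, 1.0)                  # ramp up for 1 s, then hold at 1.0
        err = ref - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        x += dt * (-x + u)                 # Euler step of the plant
        errs.append(abs(err))
    return x, max(errs[-1000:])            # final state, worst error over last second

final, late_err = simulate()
print(round(final, 2), late_err < 0.05)
```

The integral term drives the hold-phase error to zero, which is why the late tracking error stays small; a pure PD loop would leave a steady offset during the ramp.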
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an extended Kalman filter (EKF). Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
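Incremental inverse kinematics with joint speed limits can be illustrated on a planar two-link arm: each cycle moves the joints a small, clamped step toward the target instead of solving the full IK. This is a generic Jacobian-transpose sketch with invented link lengths and limits, not the paper's manipulator model:

```python
import math

L1, L2 = 1.0, 1.0                         # link lengths (illustrative)

def fk(q):
    """Forward kinematics of a planar 2-link arm."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def jacobian(q):
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def step(q, target, dt=0.02, qdot_max=0.5):
    """One incremental update: Jacobian-transpose motion toward the target,
    with each joint velocity clamped to its speed limit."""
    x, y = fk(q)
    ex, ey = target[0] - x, target[1] - y
    J = jacobian(q)
    qdot = [J[0][0] * ex + J[1][0] * ey,   # J^T * error
            J[0][1] * ex + J[1][1] * ey]
    qdot = [max(-qdot_max, min(qdot_max, v)) for v in qdot]
    return [q[i] + dt * qdot[i] for i in range(2)]

q = [0.3, 0.6]
target = (1.2, 0.8)
for _ in range(3000):                      # repeat the sense-predict-move cycle
    q = step(q, target)
print(all(abs(a - b) < 1e-2 for a, b in zip(fk(q), target)))
```

Because each cycle starts from the current configuration, the arm never jumps between alternative IK branches, which is the robustness argument the abstract makes.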
A Space Station robot walker and its shared control software
NASA Technical Reports Server (NTRS)
Xu, Yangsheng; Brown, Ben; Aoki, Shigeru; Yoshida, Tetsuji
1994-01-01
In this paper, we first briefly overview the update of the self-mobile space manipulator (SMSM) configuration and testbed. The new robot is capable of projecting cameras anywhere interior or exterior of the Space Station Freedom (SSF), and will be an ideal tool for inspecting connectors, structures, and other facilities on SSF. Experiments have been performed under two gravity compensation systems and a full-scale model of a segment of SSF. This paper presents a real-time shared control architecture that enables the robot to coordinate autonomous locomotion and teleoperation input for reliable walking on SSF. Autonomous locomotion can be executed based on a CAD model and off-line trajectory planning, or can be guided by a vision system with neural network identification. Teleoperation control can be specified by a real-time graphical interface and a free-flying hand controller. SMSM will be a valuable assistant for astronauts in inspection and other EVA missions.
Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots
NASA Technical Reports Server (NTRS)
Chen, Vincent Wei-Kang
1992-01-01
Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low-level, tedious assembly and maintenance chores and allowing them to concentrate on higher-level tasks. Robots and astronauts can work together efficiently, as a team, but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot usage by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control. To achieve this high level of robot sophistication, this research made several advances in the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for the control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced, wherein the operator's inputs specify only the robot's task. The robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets. The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation.
The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL: a first-ever experimental model of a multi-manipulator, free-flying space robot that is capable of capturing and manipulating free-floating objects without requiring human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator situated at a remote location to issue high-level task-description commands to the robot and to monitor robot activities as it then carried out each assignment autonomously.
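The core idea of adaptive control (the controller adjusts its own gains online, without human intervention, as the plant differs from expectation) can be shown on a scalar model-reference example; the plant numbers and the MIT-rule update below are a textbook-style toy, not the dissertation's multi-manipulator algorithm:

```python
def mrac(steps=20000, dt=0.001, gamma=5.0):
    """Scalar model-reference adaptive control (MIT rule).

    Plant:            y'  = -a*y  + b*u   (b is unknown to the controller)
    Reference model:  ym' = -am*ym + bm*r
    The feedforward gain theta in u = theta*r adapts so that y tracks ym.
    """
    a, b = 1.0, 2.0
    am, bm = 1.0, 1.0
    y = ym = 0.0
    theta = 0.0                      # adaptive gain, ideal value bm/b = 0.5
    r = 1.0                          # constant reference command
    for _ in range(steps):
        u = theta * r
        e = y - ym                   # model-following error
        theta -= dt * gamma * e * r  # gradient (MIT-rule) update of the gain
        y += dt * (-a * y + b * u)
        ym += dt * (-am * ym + bm * r)
    return theta, abs(y - ym)

theta, err = mrac()
print(round(theta, 2), err < 1e-2)
```

The gain converges to the value that makes the closed loop match the reference model, with no knowledge of `b`; that self-tuning property is what "full adaptation capability" buys on the real space-robot system.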
GPS Enabled Semi-Autonomous Robot
2017-09-01
equal and the goal has not yet been reached (i.e., any time the robot has reached a local minimum), and direct the robot to travel in a specific ... whether the robot was turning or not. The challenge is overcome by ensuring the robot travels at its maximum speed at all times. Further research into ... the robot's fixed reference frame was recalculated each time through the control loop. If the encoder data allows for the robot to appear to have travelled
Long-Term Simultaneous Localization and Mapping in Dynamic Environments
2015-01-01
One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the ... and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory ... distributed stochastic neighbor embedding
High level intelligent control of telerobotics systems
NASA Technical Reports Server (NTRS)
Mckee, James
1988-01-01
A high-level robot command language is proposed for the autonomous mode of an advanced telerobotics system, together with a predictive display mechanism for the teleoperational mode. It is believed that any such system will involve some mixture of these two modes, since, although artificial intelligence can facilitate significant autonomy, a system that can resort to teleoperation will always have the advantage. The high-level command language will allow humans to give the robot instructions in a very natural manner. The robot will then analyze these instructions to infer meaning so that it can translate the task into lower-level executable primitives. If, however, the robot is unable to perform the task autonomously, it will switch to the teleoperational mode. The time delay between control movement and actual robot movement has always been a problem in teleoperation: the remote operator may not actually see (via a monitor) the results of his actions for several seconds. A computer-generated predictive display system is proposed whereby the operator can see a real-time model of the robot's environment and the delayed video picture on the monitor at the same time.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Members of the Mountaineers team from West Virginia University celebrate after their robot picked up the sample and returned to the starting platform during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
A pair of Worcester Polytechnic Institute (WPI) students walk past a pair of team KuuKulgur's robots on the campus quad during a final tuneup before the start of competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at WPI in Worcester, Mass. Photo Credit: (NASA/Joel Kowsky)
Design and implementation of a robot control system with traded and shared control capability
NASA Technical Reports Server (NTRS)
Hayati, S.; Venkataraman, S. T.
1989-01-01
Preliminary results are reported from efforts to design and develop a robotic system that will accept and execute commands from either a six-axis teleoperator device or an autonomous planner, or combine the two. Such a system should have both traded as well as shared control capability. A sharing strategy is presented whereby the overall system, while retaining positive features of teleoperated and autonomous operation, loses its individual negative features. A two-tiered shared control architecture is considered here, consisting of a task level and a servo level. Also presented is a computer architecture for the implementation of this system, including a description of the hardware and software.
Development of a soft untethered robot using artificial muscle actuators
NASA Astrophysics Data System (ADS)
Cao, Jiawei; Qin, Lei; Lee, Heow Pueh; Zhu, Jian
2017-04-01
Soft robots have attracted much interest recently due to their potential capability to work effectively in unstructured environments. Soft actuators are key components of soft robots. Dielectric elastomer actuators are one class of soft actuators that can deform in response to voltage; they exhibit interesting attributes, including large voltage-induced deformation and high energy density. These attributes make dielectric elastomer actuators capable of functioning as artificial muscles for soft robots. It is important to develop untethered robots, since connecting cables to external power sources greatly limits a robot's functionality, especially autonomous movement. In this paper we develop a soft untethered robot based on dielectric elastomer actuators. This robot mainly consists of a deformable robotic body and two paper-based feet. The robotic body is essentially a dielectric elastomer actuator, which expands or shrinks as the voltage is switched on or off. In addition, the two feet can achieve adhesion or detachment based on the mechanism of electroadhesion. The entire robotic system can thus be controlled by voltage alone. By optimizing the mechanical design of the robot (the size and weight of the electric circuits), we placed all of these components (batteries, voltage amplifiers, control circuits, etc.) onto the robotic feet, and the robot is capable of autonomous movement. Experiments are conducted to study the robot's locomotion. The finite element method is employed to interpret the deformation of the dielectric elastomer actuators, and the simulations are qualitatively consistent with the experimental observations.
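The expand/shrink body plus attach/release feet described above amounts to an inchworm-style gait cycle. A minimal kinematic sketch of that cycle follows; the stride length, rest length, and function names are our own illustration, not the paper's measured locomotion:

```python
def inchworm(cycles=3, stride=1.0):
    """Untethered soft-robot gait sketch.

    The body (a dielectric elastomer actuator) expands when voltage is on and
    shrinks when it is off; the two feet hold or release by electroadhesion.
    Returns the (rear, front) foot positions after each full cycle.
    """
    rear, front = 0.0, 1.0          # foot positions; body rest length 1.0
    trace = []
    for _ in range(cycles):
        # Phase 1: rear foot adheres, body voltage ON -> body expands,
        # sliding the released front foot forward.
        front += stride
        # Phase 2: front foot adheres, body voltage OFF -> body shrinks,
        # pulling the released rear foot up.
        rear += stride
        trace.append((rear, front))
    return trace

for rear, front in inchworm():
    print(rear, front)
```

Each voltage cycle advances both feet by one stride while the body length returns to rest, which is why the gait needs only a single electrical control signal per actuator.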
[Mobile autonomous robots-Possibilities and limits].
Maehle, E; Brockmann, W; Walthelm, A
2002-02-01
Besides industrial robots, which today are firmly established in production processes, service robots are becoming more and more important. They provide services for humans in different areas of their professional and everyday environments, including medicine. Most of these service robots are mobile, which requires intelligent autonomous behaviour. After characterising the different kinds of robots, this paper critically discusses the relevant paradigms of intelligent autonomous behaviour for mobile robots and illustrates them with three concrete examples of robots realized in Lübeck. In addition, a short survey of current surgical robots as well as an outlook on future developments is given.
Autonomous bone reposition around anatomical landmark for robot-assisted orthognathic surgery.
Woo, Sang-Yoon; Lee, Sang-Jeong; Yoo, Ji-Yong; Han, Jung-Joon; Hwang, Soon-Jung; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Yi, Won-Jin
2017-12-01
The purpose of this study was to develop a new method for enabling a robot to assist a surgeon in repositioning a bone segment to accurately transfer a preoperative virtual plan into the intraoperative phase in orthognathic surgery. We developed a robot system consisting of an arm with six degrees of freedom, a robot motion-controller, and a PC. An end-effector at the end of the robot arm transferred the movements of the robot arm to the patient's jawbone. The registration between the robot and CT image spaces was performed completely preoperatively, and the intraoperative registration could be finished using only position changes of the tracking tools at the robot end-effector and the patient's splint. The phantom's maxillomandibular complex (MMC) connected to the robot's end-effector was repositioned autonomously by the robot movements around an anatomical landmark of interest based on the tool center point (TCP) principle. The robot repositioned the MMC around the TCP of the incisor of the maxilla and the pogonion of the mandible following plans for real orthognathic patients. The accuracy of the robot's repositioning increased when an anatomical landmark for the TCP was close to the registration fiducials. In spite of this influence, we could increase the repositioning accuracy at the landmark by using the landmark itself as the TCP. With its ability to incorporate virtual planning using a CT image and autonomously execute the plan around an anatomical landmark of interest, the robot could help surgeons reposition bones more accurately and dexterously. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1990-01-01
New control techniques for self-contained, autonomous, free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects was developed and carried out using laboratory models of satellite robots and a flexible manipulator. The second-generation space robot models use air-cushion-vehicle (ACV) technology to simulate in two dimensions the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free-Floating Robot, Cooperative Manipulation from a Free-Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.
Robots, systems, and methods for hazard evaluation and visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.
A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at its location, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays a map of the environment proximate the robot and a scale for indicating hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at that position relative to the scale.
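The sense-move-repeat loop described above can be sketched as a robot that records a hazard level at each location, then steps away from the hazard source before sensing again; the toy hazard field and step size are our own assumptions, not the patented system:

```python
def hazard(x, y):
    """Toy hazard field peaking at (0, 0); in the real system this is a sensor reading."""
    return 100.0 / (1.0 + x * x + y * y)

def survey(start, steps=20, step_size=0.5):
    """Sense-then-move loop: record the hazard level at the current location,
    then move down the local gradient (away from the source) and repeat."""
    x, y = start
    levels = []
    for _ in range(steps):
        levels.append(((x, y), hazard(x, y)))      # one (location, level) sample
        eps = 1e-3                                 # finite-difference gradient estimate
        gx = (hazard(x + eps, y) - hazard(x - eps, y)) / (2 * eps)
        gy = (hazard(x, y + eps) - hazard(x, y - eps)) / (2 * eps)
        norm = max((gx * gx + gy * gy) ** 0.5, 1e-9)
        x -= step_size * gx / norm                 # step away from the hazard source
        y -= step_size * gy / norm
    return levels

levels = survey((1.0, 1.0))
print(levels[0][1] > levels[-1][1])   # the robot retreats toward lower hazard
```

The recorded `(location, level)` pairs are exactly what the remote controller's map display would plot against its intensity scale.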
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Dorothy Rasco, NASA Deputy Associate Administrator for the Space Technology Mission Directorate, speaks at the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Sam Ortega, NASA program manager for Centennial Challenges, is interviewed by a member of the media before the start of level two competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Autonomous navigation system and method
Bruemmer, David J. [Idaho Falls, ID]; Few, Douglas A. [Idaho Falls, ID]
2009-09-08
A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
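One pass through the event-timing loop above can be sketched as follows; the horizon gain, the proportionality constant, and the function name are assumptions for illustration, since the patent specifies only the proportional relationships.

```python
def navigation_step(ranges, v_trans, v_rot, v_max,
                    horizon_gain=0.5, k=0.8):
    """One iteration of the event-timing loop (hypothetical sketch)."""
    event_horizon = horizon_gain * v_trans   # horizon scales with speed
    nearest = min(ranges)                    # range to the nearest obstacle
    if nearest < event_horizon:              # event-horizon intrusion
        # rotational velocity: proportion of the current rotational velocity,
        # reduced by a proportion of the range to the nearest obstacle
        v_rot = k * v_rot * (1.0 - nearest / event_horizon)
        # translational velocity: proportion of the nearest range
        v_trans = k * nearest
    else:
        # no intrusion: increase speed as a ratio relative to maximum speed
        v_trans = min(v_max, v_trans + 0.1 * v_max)
    return v_trans, v_rot
```

A nearby obstacle inside the horizon slows both velocities; a clear field lets translational velocity climb toward its maximum.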
Multirobot Lunar Excavation and ISRU Using Artificial-Neural-Tissue Controllers
NASA Astrophysics Data System (ADS)
Thangavelautham, Jekanthan; Smith, Alexander; Abu El Samid, Nader; Ho, Alexander; Boucher, Dale; Richard, Jim; D'Eleuterio, Gabriele M. T.
2008-01-01
Automation of site preparation and resource utilization on the Moon with teams of autonomous robots holds considerable promise for establishing a lunar base. Such multirobot autonomous systems would require limited human support infrastructure, complement necessary manned operations and reduce overall mission risk. We present an Artificial Neural Tissue (ANT) architecture as a control system for autonomous multirobot excavation tasks. An ANT approach requires much less human supervision and pre-programmed human expertise than previous techniques. Only a single global fitness function and a set of allowable basis behaviors need be specified. An evolutionary (Darwinian) selection process is used to `breed' controllers for the task at hand in simulation and the fittest controllers are transferred onto hardware for further validation and testing. ANT facilitates `machine creativity', with the emergence of novel functionality through a process of self-organized task decomposition of mission goals. ANT based controllers are shown to exhibit self-organization, employ stigmergy (communication mediated through the environment) and make use of templates (unlabeled environmental cues). With lunar in-situ resource utilization (ISRU) efforts in mind, ANT controllers have been tested on a multirobot excavation task in which teams of robots with no explicit supervision can successfully avoid obstacles, interpret excavation blueprints, perform layered digging, avoid burying or trapping other robots and clear/maintain digging routes.
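The "breeding" step, Darwinian selection against a single global fitness function, can be illustrated with a minimal genetic-algorithm loop. The ANT architecture itself (neural tissue, coarse coding, basis behaviors) is far richer; every name and parameter below is an assumption for illustration only.

```python
import random

def evolve(fitness, genome_len=16, pop_size=20, generations=30,
           mut=0.1, seed=0):
    """Evolve real-valued genomes against a single global fitness function."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]             # Darwinian selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):          # random mutation
                if rng.random() < mut:
                    child[i] = rng.random()
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)                 # fittest controller

best = evolve(lambda g: sum(g))  # toy global fitness: maximize gene sum
```

In the paper the fittest evolved controllers are then transferred onto hardware for validation; here the toy fitness simply rewards large gene values.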
Ophiuroid robot that self-organizes periodic and non-periodic arm movements.
Kano, Takeshi; Suzuki, Shota; Watanabe, Wataru; Ishiguro, Akio
2012-09-01
Autonomous decentralized control is a key concept for understanding the mechanism underlying adaptive and versatile locomotion of animals. Although the design of an autonomous decentralized control system that ensures adaptability by using coupled oscillators has been proposed previously, it cannot comprehensively reproduce the versatility of animal behaviour. To tackle this problem, we focus on using ophiuroids as a simple model that exhibits versatile locomotion including periodic and non-periodic arm movements. Our existing model for ophiuroid locomotion uses an active rotator model that describes both oscillatory and excitatory properties. In this communication, we develop an ophiuroid robot to confirm the validity of this proposed model in the real world. We show that the robot travels by successfully coordinating periodic and non-periodic arm movements in response to external stimuli.
Development and training of a learning expert system in an autonomous mobile robot via simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Lyness, E.; DeSaussure, G.
1989-11-01
The Center for Engineering Systems Advanced Research (CESAR) conducts basic research in the area of intelligent machines. Recently at CESAR, a learning expert system was created to operate on board an autonomous robot working at a process control panel. The authors discuss the two-computer simulation system used to create, evaluate, and train this learning system. The simulation system has a graphics display of the current status of the process being simulated, and the same program which does the simulating also drives the actual control panel. Simulation results were validated on the actual robot. The speed and safety advantages of using a computerized simulator to train a learning computer, and future uses of the simulation system, are discussed.
Design and control of an IPMC wormlike robot.
Arena, Paolo; Bonomo, Claudia; Fortuna, Luigi; Frasca, Mattia; Graziani, Salvatore
2006-10-01
This paper presents an innovative wormlike robot controlled by cellular neural networks (CNNs) and made of an ionic polymer-metal composite (IPMC) self-actuated skeleton. The IPMC actuators from which it is made are new materials that behave similarly to biological muscles. The idea that inspired the work is the possibility of using IPMCs to design autonomous moving structures. CNNs have already demonstrated their power as new structures for bio-inspired locomotion generation and control. The control scheme for the proposed IPMC moving structure is based on CNNs. The wormlike robot is made entirely of IPMCs, and each actuator has to carry its own weight. All the actuators are connected together without using any other additional part, thereby constituting the robot structure itself. Worm locomotion is performed by bending the actuators sequentially from "tail" to "head," imitating the traveling wave observed in real-world undulatory locomotion. The activation signals are generated by a CNN. In the authors' opinion, the proposed strategy represents a promising solution in the field of autonomous and light structures that are capable of reconfiguring and moving in line with spatial-temporal dynamics generated by CNNs.
The Jet Propulsion Laboratory shared control architecture and implementation
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Hayati, Samad
1990-01-01
A hardware and software environment for shared control of telerobot task execution has been implemented. Modes of task execution range from fully teleoperated to fully autonomous, as well as shared, where hand controller inputs from the human operator are mixed with autonomous system inputs in real time. The objective of the shared control environment is to aid the telerobot operator during task execution by merging real-time operator control from hand controllers with autonomous control to simplify task execution for the operator. The operator is the principal command source and can assign as much autonomy for a task as desired. The shared control hardware environment consists of two PUMA 560 robots, two 6-axis force-reflecting hand controllers, Universal Motor Controllers for each of the robots and hand controllers, a SUN4 computer, and a VME chassis containing 68020 processors and input/output boards. The operator interface for shared control, the User Macro Interface (UMI), is a menu-driven interface used to design a task and assign the levels of teleoperated and autonomous control. The operator also sets up the system monitor, which checks safety limits during task execution. Cartesian-space degrees of freedom for teleoperated and/or autonomous control inputs are selected within UMI, as well as the weightings for the teleoperation and autonomous inputs. These are then used during task execution to determine the mix of teleoperation and autonomous inputs. Some of the autonomous control primitives available to the user are Joint-Guarded-Move, Cartesian-Guarded-Move, Move-To-Touch, Pin-Insertion/Removal, Door/Crank-Turn, Bolt-Turn, and Slide. The operator can execute a task using pure teleoperation or mix control execution from the autonomous primitives with teleoperated inputs. Presently the shared control environment supports single-arm task execution; work is underway to extend it to dual-arm control.
Teleoperation during shared control is limited to Cartesian-space control, and no force reflection is provided. Force-reflecting teleoperation and joint-space operator inputs are planned extensions to the environment.
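The per-degree-of-freedom weighting of teleoperated and autonomous inputs can be sketched as a simple convex blend; the weight convention and names below are assumptions, not the documented JPL implementation.

```python
def shared_control_mix(teleop_cmd, auto_cmd, weights):
    """Blend operator and autonomous velocity commands per Cartesian DOF.

    teleop_cmd, auto_cmd: 6-element lists (x, y, z, roll, pitch, yaw).
    weights: per-DOF teleoperation weight in [0, 1]; the autonomous input
    receives the complementary weight. 1.0 = pure teleoperation,
    0.0 = pure autonomous control for that degree of freedom.
    """
    return [w * t + (1.0 - w) * a
            for t, a, w in zip(teleop_cmd, auto_cmd, weights)]
```

Setting every weight to 1.0 reproduces pure teleoperation; intermediate weights mix an autonomous primitive with hand-controller input on selected axes.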
NASA Technical Reports Server (NTRS)
Ippolito, Corey; Plice, Laura; Pisanich, Greg
2003-01-01
The BEES (Bio-inspired Engineering for Exploration Systems) for Mars project at NASA Ames Research Center has the goal of developing bio-inspired flight control strategies to enable aerial explorers for Mars scientific investigations. This paper presents a summary of our ongoing research into biologically inspired system designs for control of unmanned autonomous aerial vehicle communities for Mars exploration. First, we present cooperative design considerations for robotic explorers based on the holarchical nature of biological systems and communities. Second, an outline of an architecture for cognitive decision making and control of individual robotic explorers is presented, modeled after the emotional nervous system of cognitive biological systems. Keywords: Holarchy, Biologically Inspired, Emotional UAV Flight Control
Research into command, control, and communications in space construction
NASA Technical Reports Server (NTRS)
Davis, Randal
1990-01-01
Coordinating and controlling large numbers of autonomous or semi-autonomous robot elements in a space construction activity will present problems that are very different from most command and control problems encountered in the space business. As part of our research into the feasibility of robot constructors in space, the CSC Operations Group is examining a variety of command, control, and communications (C3) issues. Two major questions being asked are: can we apply C3 techniques and technologies already developed for use in space; and are there suitable terrestrial solutions for extraterrestrial C3 problems? An overview of the control architectures, command strategies, and communications technologies that we are examining is provided and plans for simulations and demonstrations of our concepts are described.
A fuzzy logic controller for an autonomous mobile robot
NASA Technical Reports Server (NTRS)
Yen, John; Pfluger, Nathan
1993-01-01
The ability of a mobile robot system to plan and move intelligently in a dynamic environment is needed if robots are to be useful in areas other than controlled environments. An example use for this system is to control an autonomous mobile robot in a space station, or other isolated area where it is hard or impossible for human life to exist for long periods of time (e.g., Mars). The system would allow the robot to be programmed to carry out the duties normally accomplished by a human being. Some of the duties that could be accomplished include operating instruments, transporting objects, and maintenance of the environment. The main focus of our early work has been on developing a fuzzy controller that takes a path and adapts it to a given environment. The robot uses only information gathered from the sensors, but retains the ability to avoid dynamically placed obstacles near and along the path. Our fuzzy logic controller is based on the following algorithm: (1) determine the desired direction of travel; (2) determine the allowed direction of travel; and (3) combine the desired and allowed directions in order to determine a direction that is both desired and allowed. The desired direction of travel is determined by projecting ahead to a point along the path that is closer to the goal. This gives a local direction of travel for the robot and helps to avoid obstacles.
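The three-step combination can be sketched as follows, assuming a Gaussian membership function over headings in degrees; the function names, the membership shape, and the width parameter are illustrative assumptions, not the paper's rule base.

```python
import math

def choose_heading(desired, allowed, sigma=30.0):
    """Steps 1-3: combine the desired and allowed directions of travel.

    desired: heading (degrees) toward a look-ahead point on the path.
    allowed: candidate headings (degrees) left open by the sensors.
    Returns the allowed heading with the highest 'desired' membership.
    """
    def desirability(theta):
        # Gaussian membership centered on the desired heading,
        # measured along the shorter way around the circle
        diff = abs(theta - desired) % 360.0
        diff = min(diff, 360.0 - diff)
        return math.exp(-(diff / sigma) ** 2)
    return max(allowed, key=desirability)
```

The result is both allowed (it comes from the sensor-derived candidate set) and maximally desired (closest to the look-ahead heading).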
Autonomous undulatory serpentine locomotion utilizing body dynamics of a fluidic soft robot.
Onal, Cagdas D; Rus, Daniela
2013-06-01
Soft robotics offers the unique promise of creating inherently safe and adaptive systems. These systems bring man-made machines closer to the natural capabilities of biological systems. An important requirement to enable self-contained soft mobile robots is an on-board power source. In this paper, we present an approach to create a bio-inspired soft robotic snake that can undulate in a similar way to its biological counterpart using pressure for actuation power, without human intervention. With this approach, we develop an autonomous soft snake robot with on-board actuation, power, computation and control capabilities. The robot consists of four bidirectional fluidic elastomer actuators in series to create a traveling curvature wave from head to tail along its body. Passive wheels between segments generate the necessary frictional anisotropy for forward locomotion. It takes 14 h to build the soft robotic snake, which can attain an average locomotion speed of 19 mm/s.
SyRoTek--Distance Teaching of Mobile Robotics
ERIC Educational Resources Information Center
Kulich, M.; Chudoba, J.; Kosnar, K.; Krajnik, T.; Faigl, J.; Preucil, L.
2013-01-01
E-learning is a modern and effective approach for training in various areas and at different levels of education. This paper gives an overview of SyRoTek, an e-learning platform for mobile robotics, artificial intelligence, control engineering, and related domains. SyRoTek provides remote access to a set of fully autonomous mobile robots placed in…
Remote Control and Children's Understanding of Robots
ERIC Educational Resources Information Center
Somanader, Mark C.; Saylor, Megan M.; Levin, Daniel T.
2011-01-01
Children use goal-directed motion to classify agents as living things from early in infancy. In the current study, we asked whether preschoolers are flexible in their application of this criterion by introducing them to robots that engaged in goal-directed motion. In one case the robot appeared to move fully autonomously, and in the other case it…
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.
Structured Kernel Subspace Learning for Autonomous Robot Navigation.
Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai
2018-02-14
This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noises. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
Telerobotic controller development
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, Ken; Rhoades, Don
1987-01-01
To meet NASA space station needs and growth, a modular and generic approach to robotic control was developed that provides near-term implementation with low development cost and the capability for growth into more autonomous systems. The method uses a vision-based robotic controller and a compliant hand integrated with the Remote Manipulator System arm on the Orbiter. A description of the hardware and its system integration is presented.
Essential Kinematics for Autonomous Vehicles
1994-05-02
Alonzo Kelly, CMU-RI-TR-94-14, The Robotics Institute, Carnegie Mellon University. The report provides a kit of concepts and techniques that will equip the reader to master a large class of kinematic modelling problems, including the control of autonomous vehicles in 3D and the specification of transformations between coordinate systems.
2006-07-01
Despite progress in mobility in complex terrain, robot system designers are still seeking workable processes for map-building, with enduring problems that either require human assistance or limit performance. Robot system designers and users can seek to control the consequences of robot actions, deliberate or otherwise. Remote operation must give operators a sufficient feeling of presence; if not, robot system designers will have to provide autonomy to the robot to make up for the gaps in human input.
DEMONSTRATION OF AUTONOMOUS AIR MONITORING THROUGH ROBOTICS
This project included modifying an existing teleoperated robot to include autonomous navigation, large object avoidance, and air monitoring and demonstrating that prototype robot system in indoor and outdoor environments. An existing teleoperated "Surveyor" robot developed by ARD...
Autonomous Lawnmower using FPGA implementation.
NASA Astrophysics Data System (ADS)
Ahmad, Nabihah; Lokman, Nabill bin; Helmy Abd Wahab, Mohd
2016-11-01
Nowadays, various types of robots have been invented for multiple purposes. These robots have special characteristics that surpass human abilities and can operate in extreme environments which humans cannot endure. In this paper, an autonomous robot is built to imitate the action of a human cutting grass. A Field Programmable Gate Array (FPGA) is used to control the movements, and all data and information are processed on it. Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) is used to describe the hardware using the Quartus II software. The robot is able to avoid obstacles using an ultrasonic sensor and uses two DC motors for its movement: forward, backward, and turning left and right. The path of the automatic lawn mower is based on a path-planning technique. Four Global Positioning System (GPS) plots are set to create a boundary, to ensure that the lawn mower operates within the area given by the user. Every action of the lawn mower is controlled by the Cyclone II FPGA development board with the help of the sensor. Furthermore, SketchUp software was used to design the structure of the lawn mower. The autonomous lawn mower was able to operate efficiently and smoothly, returning to coordinated paths after passing an obstacle. It uses 25% of the total pins available on the board and 31% of the total Digital Signal Processing (DSP) blocks.
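The four GPS plots define a polygon the mower must stay inside. A standard ray-casting containment test (sketched here in Python for illustration; the paper's actual logic is implemented in VHDL on the FPGA) decides whether a position is within the boundary:

```python
def inside_boundary(point, boundary):
    """Ray-casting point-in-polygon test against the GPS corner plots.

    point: (x, y) position of the mower.
    boundary: list of (x, y) polygon vertices in order.
    """
    x, y = point
    hits = 0
    n = len(boundary)
    for i in range(n):
        (x1, y1), (x2, y2) = boundary[i], boundary[(i + 1) % n]
        if (y1 > y) != (y2 > y):            # edge straddles the ray's y
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                hits += 1
    return hits % 2 == 1                    # odd crossings = inside
```

An odd number of crossings between a rightward ray and the polygon edges means the point is inside the work area.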
Validating a UAV artificial intelligence control system using an autonomous test case generator
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Huber, Justin
2013-05-01
The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed, and testing cost.
Agent-based human-robot interaction of a combat bulldozer
NASA Astrophysics Data System (ADS)
Granot, Reuven; Feldman, Maxim
2004-09-01
A small-scale, supervised autonomous bulldozer operating at a remote site was developed to study agent-based human intervention. The model is based on a Lego Mindstorms kit and represents combat equipment whose job performance does not require high accuracy. The model enables evaluation of system response to different operator interventions, as well as for a small colony of semi-autonomous dozers. The supervising human may react better than a fully autonomous system to unexpected contingent events, which are a major barrier to implementing full autonomy. The automation is introduced as an improved Man Machine Interface (MMI) by developing control agents as intelligent tools that negotiate between human requests and task-level controllers, as well as with other elements of the software environment. Current UGVs demand significant communication resources and constant human operation; they will therefore be replaced by semi-autonomous, human supervisory controlled (telerobotic) systems. For human intervention at the low layers of the control hierarchy, we suggest a task-oriented control agent that takes care of the fluent transition between the state in which the robot operates and the one imposed by the human. This transition should handle the imperfections responsible for improper operation of the robot by disconnecting or adapting them to the new situation. Preliminary conclusions from the small-scale experiments are presented.
Immobile Robots: AI in the New Millennium
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Nayak, P. Pandurang
1996-01-01
A new generation of sensor rich, massively distributed, autonomous systems are being developed that have the potential for profound social, environmental, and economic change. These include networked building energy systems, autonomous space probes, chemical plant control systems, satellite constellations for remote ecosystem monitoring, power grids, biosphere-like life support systems, and reconfigurable traffic systems, to highlight but a few. To achieve high performance, these immobile robots (or immobots) will need to develop sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions. To accomplish this, immobots will exploit a vast nervous system of sensors to model themselves and their environment on a grand scale. They will use these models to dramatically reconfigure themselves in order to survive decades of autonomous operations. Achieving these large scale modeling and configuration tasks will require a tight coupling between the higher level coordination function provided by symbolic reasoning, and the lower level autonomic processes of adaptive estimation and control. To be economically viable they will need to be programmable purely through high level compositional models. Self modeling and self configuration, coordinating autonomic functions through symbolic reasoning, and compositional, model-based programming are the three key elements of a model-based autonomous systems architecture that is taking us into the New Millennium.
Bruemmer, David J. [Idaho Falls, ID]
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
Towards Supervising Remote Dexterous Robots Across Time Delay
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Wheeler, Kevin; Rabe, Ken
2006-01-01
The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will be required to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars exploration, uploading the plan for a day seems excessive. An approach for controlling dexterous robots under intermediate time delay is presented, in which software running within a ground control cockpit predicts the intention of an immersed robot supervisor, and the remote robot then autonomously executes the supervisor's intended tasks. Initial results are presented.
JOMAR: Joint Operations with Mobile Autonomous Robots
2015-12-21
AFRL-AFOSR-JP-TR-2015-0009. Edwin Olson, University of Michigan; final report, 12/21/2015; contract FA23861114024. Abstract: Under this grant, we formulated and implemented a variety of novel algorithms that address core problems in multi-robot systems.
NASA Astrophysics Data System (ADS)
Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois
2010-01-01
After utilizing robots for more than 30 years for classic industrial automation applications, service robots form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, has been shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products, located in the San Francisco bay area. The determined goal of the mobile manipulator is to support the off-shift staff to carry out completely autonomous or guided, remote controlled lab walkthroughs, which we implement utilizing a recent development of our computer vision group: OpenTL - an integrated framework for model-based visual tracking.
Autonomous Dome for a Robotic Telescope
NASA Astrophysics Data System (ADS)
Kumar, A.; Sengupta, A.; Ganesh, S.
2016-12-01
The Physical Research Laboratory operates a 50 cm robotic observatory at Mount Abu (Rajasthan, India). This Automated Telescope for Variability Studies (ATVS) makes use of the Remote Telescope System 2 (RTS2) for autonomous operations. The observatory uses a 3.5 m dome from Sirius Observatories. We have developed electronics using Arduino circuit boards with home-grown logic and software to control the dome operations. We are in the process of completing the drivers to link our Arduino-based dome controller with RTS2. This document is a short description of the various phases of the development and their integration to achieve the required objective.
Navigation of a care and welfare robot
NASA Astrophysics Data System (ADS)
Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu
2005-12-01
In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions. Therefore, the robot needs to have high controllability and the capability to make precise movements. Our robot can recognize a line by using cameras, and can be controlled in the reference directions by means of comparison with original cell map information; furthermore, it moves safely on the basis of an original center-line established permanently in the building. Correspondence between the robot and a centralized control center enables the robot's autonomous movement in the hospital. Through a navigation system using cell map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.
NASA Astrophysics Data System (ADS)
Butail, Sachit; Polverino, Giovanni; Phamduy, Paul; Del Sette, Fausto; Porfiri, Maurizio
2014-03-01
We explore fish-robot interactions in a comprehensive set of experiments designed to highlight the effects of speed and configuration of bioinspired robots on live zebrafish. The robot design and movement are inspired by salient features of attraction in zebrafish and include enhanced coloration, the aspect ratio of a fertile female, and carangiform/subcarangiform locomotion. The robots are autonomously controlled to swim in circular trajectories in the presence of live fish. Our results indicate that robot configuration significantly affects both the fish distance to the robots and the time spent near them.
Modeling and Classifying Six-Dimensional Trajectories for Teleoperation Under a Time Delay
NASA Technical Reports Server (NTRS)
SunSpiral, Vytas; Wheeler, Kevin R.; Allan, Mark B.; Martin, Rodney
2006-01-01
Within the context of teleoperating the JSC Robonaut humanoid robot under 2-10 second time delays, this paper explores the technical problem of modeling and classifying human motions represented as six-dimensional (position and orientation) trajectories. A dual path research agenda is reviewed which explored both deterministic approaches and stochastic approaches using Hidden Markov Models. Finally, recent results are shown from a new model which represents the fusion of these two research paths. Questions are also raised about the possibility of automatically generating autonomous actions by reusing the same predictive models of human behavior to be the source of autonomous control. This approach changes the role of teleoperation from being a stand-in for autonomy into the first data collection step for developing generative models capable of autonomous control of the robot.
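The stochastic path mentioned above rests on scoring an observed trajectory under competing Hidden Markov Models and picking the most likely one. The sketch below shows that classification step with the standard scaled forward algorithm over a quantized symbol stream; the toy two-state models and symbol alphabet are assumptions, not the Robonaut models, which operated on 6-D position/orientation trajectories.

```python
import numpy as np

# Classify a quantized motion trajectory by log-likelihood under
# competing discrete HMMs, using the scaled forward algorithm.
# The models below are toy stand-ins for gesture/motion models.

def forward_loglik(obs, pi, A, B):
    """Log P(obs | model). pi: (S,) initial probabilities,
    A: (S, S) transition matrix, B: (S, K) emission matrix."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                   # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def classify(obs, models):
    """Return the name of the model with the highest log-likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

A continuous 6-D trajectory would first be vector-quantized (or the emissions made Gaussian) before this scoring step; the winner-take-all comparison is the same either way.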
A Mobile Robot for Small Object Handling
NASA Astrophysics Data System (ADS)
Fišer, Ondřej; Szűcsová, Hana; Grimmer, Vladimír; Popelka, Jan; Vonásek, Vojtěch; Krajník, Tomáš; Chudoba, Jan
The aim of this paper is to present an intelligent autonomous robot capable of small object manipulation. The design of the robot is influenced mainly by the rules of EUROBOT 09 competition. In this challenge, two robots pick up objects scattered on a planar rectangular playfield and use these elements to build models of Hellenistic temples. This paper describes the robot hardware, i.e. electro-mechanics of the drive, chassis and manipulator, as well as the software, i.e. localization, collision avoidance, motion control and planning algorithms.
Integrated mobile robot control
NASA Technical Reports Server (NTRS)
Amidi, Omead; Thorpe, Charles
1991-01-01
This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.
Integrated mobile robot control
NASA Astrophysics Data System (ADS)
Amidi, Omead; Thorpe, Chuck E.
1991-03-01
This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.
Mergeable nervous systems for robots.
Mathews, Nithin; Christensen, Anders Lyhne; O'Grady, Rehan; Mondada, Francesco; Dorigo, Marco
2017-09-12
Robots have the potential to display a higher degree of lifetime morphological adaptation than natural organisms. By adopting a modular approach, robots with different capabilities, shapes, and sizes could, in theory, construct and reconfigure themselves as required. However, current modular robots have only been able to display a limited range of hardwired behaviors because they rely solely on distributed control. Here, we present robots whose bodies and control systems can merge to form entirely new robots that retain full sensorimotor control. Our control paradigm enables robots to exhibit properties that go beyond those of any existing machine or of any biological organism: the robots we present can merge to form larger bodies with a single centralized controller, split into separate bodies with independent controllers, and self-heal by removing or replacing malfunctioning body parts. This work takes us closer to robots that can autonomously change their size, form, and function. Robots that can self-assemble into different morphologies are desired to perform tasks that require different physical capabilities. Mathews et al. design robots whose bodies and control systems can merge and split to form new robots that retain full sensorimotor control and act as a single entity.
Intelligent mobility research for robotic locomotion in complex terrain
NASA Astrophysics Data System (ADS)
Trentini, Michael; Beckman, Blake; Digney, Bruce; Vincent, Isabelle; Ricard, Benoit
2006-05-01
The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating practical, cost effective, autonomous intelligent systems capable of completing military missions in complex operating environments." The mobility requirement for ground-based mobile systems operating in urban settings must increase significantly if robotic technology is to augment human efforts in these roles and environments. The intelligence required for autonomous systems to operate in complex environments demands advances in many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation, and navigation, but the problem of locomotion in complex terrain has largely been ignored. In order to achieve its objective, the Autonomous Intelligent Systems Section is pursuing research that explores the use of intelligent mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience. These algorithms seek to exploit available world representations of the environment and the inherent dexterity of the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The primary focus of the paper is to present the intelligent mobility research within the framework of the research methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical research gap. This research will create effective intelligence to improve the mobility of ground-based mobile systems operating in urban settings to assist the Canadian Forces in their future urban operations.
Sensory Motor Coordination in Robonaut
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2003-01-01
As a participant of the year 2000 NASA Summer Faculty Fellowship Program, I worked with the engineers of the Dexterous Robotics Laboratory at NASA Johnson Space Center on the Robonaut project. The Robonaut is an articulated torso with two dexterous arms, left and right five-fingered hands, and a head with cameras mounted on an articulated neck. This advanced space robot, now driven only teleoperatively using VR gloves, sensors and helmets, is to be upgraded to a thinking system that can find, interact with and assist humans autonomously, allowing the Crew to work with Robonaut as a (junior) member of their team. Thus, the work performed this summer was toward the goal of enabling Robonaut to operate autonomously as an intelligent assistant to astronauts. Our underlying hypothesis is that a robot can develop intelligence if it learns a set of basic behaviors (i.e., reflexes - actions tightly coupled to sensing) and through experience learns how to sequence these to solve problems or to accomplish higher-level tasks. We describe our approach to the automatic acquisition of basic behaviors as learning sensory-motor coordination (SMC). Although research in the ontogenesis of animals (i.e., development from the time of conception) supports the approach of learning SMC as the foundation for intelligent, autonomous behavior, we do not know whether it will prove viable for the development of autonomy in robots. The first step in testing the hypothesis is to determine if SMC can be learned by the robot. To do this, we have taken advantage of Robonaut's teleoperated control system. When a person teleoperates Robonaut, the person's own SMC causes the robot to act purposefully. If the sensory signals that the robot detects during teleoperation are recorded over several repetitions of the same task, it should be possible through signal analysis to identify the sensory-motor couplings that accompany purposeful motion.
In this report, reasons for suspecting SMC as the basis for intelligent behavior will be reviewed. A robot control system for autonomous behavior that uses learned SMC will be proposed. Techniques for the extraction of salient parameters from sensory and motor data will be discussed. Experiments with Robonaut will be discussed and preliminary data presented.
Artificial consciousness, artificial emotions, and autonomous robots.
Cardon, Alain
2006-12-01
Nowadays, the notion of behavior in robots is reduced to a simple factual concept at the level of movements. Consciousness, on the other hand, is a deeply cultural concept, founding the main property human beings ascribe to themselves. We propose to develop a computable transposition of consciousness concepts into artificial brains able to express emotions and consciousness facts. The production of such artificial brains allows intentional and genuinely adaptive behavior for autonomous robots. The system managing the robot's behavior is made of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts. The second achieves the representation of the first using morphologies in a dynamic geometrical way. The robot's body will be seen for itself as the morphologic apprehension of its material substrata. The model relies strictly on the notion of massive multi-agent organizations with morphologic control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thangavelautham, Jekanthan; Smith, Alexander; Abu El Samid, Nader
Automation of site preparation and resource utilization on the Moon with teams of autonomous robots holds considerable promise for establishing a lunar base. Such multirobot autonomous systems would require limited human support infrastructure, complement necessary manned operations, and reduce overall mission risk. We present an Artificial Neural Tissue (ANT) architecture as a control system for autonomous multirobot excavation tasks. An ANT approach requires much less human supervision and pre-programmed human expertise than previous techniques. Only a single global fitness function and a set of allowable basis behaviors need be specified. An evolutionary (Darwinian) selection process is used to 'breed' controllers for the task at hand in simulation, and the fittest controllers are transferred onto hardware for further validation and testing. ANT facilitates 'machine creativity', with the emergence of novel functionality through a process of self-organized task decomposition of mission goals. ANT-based controllers are shown to exhibit self-organization, employ stigmergy (communication mediated through the environment), and make use of templates (unlabeled environmental cues). With lunar in-situ resource utilization (ISRU) efforts in mind, ANT controllers have been tested on a multirobot excavation task in which teams of robots with no explicit supervision can successfully avoid obstacles, interpret excavation blueprints, perform layered digging, avoid burying or trapping other robots, and clear/maintain digging routes.
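The Darwinian "breeding" loop described above, where only a global fitness function is specified and controllers are selected in simulation, can be sketched as a minimal genetic algorithm. The bit-string genome, truncation selection, and one-max fitness below are toy assumptions standing in for the actual Artificial Neural Tissue encoding and excavation fitness.

```python
import random

# Minimal sketch of evolutionary controller selection: specify only a
# global fitness function, then breed a population of candidate
# controllers. Genome and fitness are toy stand-ins for the ANT encoding.

def evolve(fitness, genome_len=16, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = elite + children                    # elites survive unmutated
    return max(pop, key=fitness)

# Toy fitness: "terrain excavated" is just the number of 1-bits.
best = evolve(fitness=sum)
```

The fittest genome from the final population would then be decoded into a controller and, as in the paper's workflow, transferred to hardware for validation.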
Sample Return Robot Centennial Challenge
2012-06-16
A judge for the NASA-WPI Sample Return Robot Centennial Challenge follows a robot on the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks
NASA Technical Reports Server (NTRS)
Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia
2017-01-01
Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication as posed in the DARPA Robotics Challenge, or robot operations on spacecraft a large distance from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. This framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage to do any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
ODYSSEUS autonomous walking robot: The leg/arm design
NASA Technical Reports Server (NTRS)
Bourbakis, N. G.; Maas, M.; Tascillo, A.; Vandewinckel, C.
1994-01-01
ODYSSEUS is an autonomous walking robot, which makes use of three wheels and three legs for its movement in the free navigation space. More specifically, it makes use of its autonomous wheels to move around in an environment where the surface is smooth and not uneven. However, in the case that there are small-height obstacles, stairs, or small-height unevenness in the navigation environment, the robot makes use of both wheels and legs to travel efficiently. In this paper we present the detailed hardware design and the simulated behavior of the extended leg/arm part of the robot, since it plays a very significant role in the robot's actions (movements, selection of objects, etc.). In particular, the leg/arm consists of three major parts: The first part is a pipe attached to the robot base with a flexible 3-D joint. This pipe has a rotated bar as an extended part, which terminates in a 3-D flexible joint. The second part of the leg/arm is also a pipe similar to the first. The extended bar of the second part ends at a 2-D joint. The last part of the leg/arm is a clip-hand. It is used for selecting several objects of small weight and size, and when it is in a 'closed' mode, it is used as a supporting part of the robot leg. The entire leg/arm part is controlled and synchronized by a microcontroller (68HC11) attached to the robot base.
Explanation Capabilities for Behavior-Based Robot Control
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L.
2012-01-01
A recent study that evaluated issues associated with remote interaction with an autonomous vehicle within the framework of grounding found that missing contextual information led to uncertainty in the interpretation of collected data, and so introduced errors into the command logic of the vehicle. As the vehicles became more autonomous through the activation of additional capabilities, more errors were made. This is an inefficient use of the platform, since the behavior of remotely located autonomous vehicles didn't coincide with the "mental models" of human operators. One of the conclusions of the study was that there should be a way for the autonomous vehicles to describe what action they choose and why. Robotic agents with enough self-awareness to dynamically adjust the information conveyed back to the Operations Center based on a detail level component analysis of requests could provide this description capability. One way to accomplish this is to map the behavior base of the robot into a formal mathematical framework called a cost-calculus. A cost-calculus uses composition operators to build up sequences of behaviors that can then be compared to what is observed using well-known inference mechanisms.
Mobile Robot Designed with Autonomous Navigation System
NASA Astrophysics Data System (ADS)
An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin
2017-10-01
With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place ever more requirements on them; one is that a robot be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the ground, and automatically find its charging station; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a new type of robot navigation scheme: SLAM, which can build a map of a totally unknown environment while simultaneously locating the robot's own position, so as to achieve autonomous navigation.
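The mapping half of the SLAM scheme sketched above can be illustrated with an occupancy grid: each range reading, taken at the current pose estimate, marks the cells along the beam as free and the endpoint cell as occupied via log-odds updates. A full SLAM system also corrects the pose itself; the grid resolution and sensor model below are assumptions for illustration only.

```python
import math
import numpy as np

# Toy occupancy-grid mapping step of SLAM: fold one range scan, taken at
# an estimated pose, into a log-odds grid. Resolution and the hit/miss
# increments are illustrative assumptions.

RES = 0.1                        # metres per cell
L_OCC, L_FREE = 0.85, -0.4       # log-odds increments for hit / pass-through

def integrate_scan(grid, pose, ranges, angles, max_range=5.0):
    """Update `grid` (log-odds) with one scan taken at pose=(x, y, theta)."""
    x, y, th = pose
    for r, a in zip(ranges, angles):
        hit = r < max_range
        steps = int(min(r, max_range) / RES)
        for k in range(steps + 1):
            d = k * RES
            ci = int((x + d * math.cos(th + a)) / RES)
            cj = int((y + d * math.sin(th + a)) / RES)
            if not (0 <= ci < grid.shape[0] and 0 <= cj < grid.shape[1]):
                break
            last = (k == steps)
            grid[ci, cj] += L_OCC if (hit and last) else L_FREE
    return grid
```

The localization half would, in turn, match each new scan against this accumulated grid (e.g., by scan matching or a particle filter) to correct the odometry-based pose before the next update.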
1992-10-29
These people try to make their robotic vehicle as intelligent and autonomous as possible with the current state of technology. The robot only interacts… Robotics: Peter J. Burt, David Sarnoff Research Center, Princeton, NJ 08543-5300, U.S.A. The ability of an operator to drive a remotely piloted vehicle depends… RESUPPLY - System which can rapidly and autonomously load and unload palletized ammunition. (18) AUTONOMOUS COMBAT EVACUATION VEHICLE - Robotic arms
2017-06-01
A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging, by Jake A. Jones, June 2017 (Master's thesis; thesis advisor record). Developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles. A new technique is developed and…
Control of complex physically simulated robot groups
NASA Astrophysics Data System (ADS)
Brogan, David C.
2001-10-01
Actuated systems such as robots take many forms and sizes but each requires solving the difficult task of utilizing available control inputs to accomplish desired system performance. Coordinated groups of robots provide the opportunity to accomplish more complex tasks, to adapt to changing environmental conditions, and to survive individual failures. Similarly, groups of simulated robots, represented as graphical characters, can test the design of experimental scenarios and provide autonomous interactive counterparts for video games. The complexity of writing control algorithms for these groups currently hinders their use. A combination of biologically inspired heuristics, search strategies, and optimization techniques serve to reduce the complexity of controlling these real and simulated characters and to provide computationally feasible solutions.
Autonomous navigation method for substation inspection robot based on travelling deviation
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting
2017-06-01
A new method of edge detection in the substation environment is proposed, which enables autonomous navigation of the substation inspection robot. First, the road image and information are obtained using an image acquisition device. Second, noise in a region of interest selected from the road image is removed with a digital image processing algorithm, road edges are extracted with the Canny operator, and road boundaries are extracted with the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travelling deviation is obtained. The robot's walking route is controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.
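The final control step above, steering by travel deviation against a preset threshold, can be sketched once the left and right boundary positions are available (from the Canny/Hough stage in the paper). The boundary inputs, threshold value, and bang-bang command set below are illustrative assumptions.

```python
# Sketch of deviation-based route control: given the detected left/right
# road boundary x-positions, steer back toward the centreline only when
# the deviation exceeds a preset threshold. Values are assumptions.

def travel_deviation(left_x, right_x, robot_x):
    """Signed offset of the robot from the centre of the detected road.
    Positive means the robot is to the right of the centreline."""
    centre = (left_x + right_x) / 2.0
    return robot_x - centre

def steering_command(deviation, threshold=0.05):
    """Bang-bang correction with a dead band of +/- threshold."""
    if deviation > threshold:
        return "steer_left"
    if deviation < -threshold:
        return "steer_right"
    return "straight"
```

The dead band keeps the robot from oscillating around the centreline on small measurement noise, which is consistent with the paper's use of a preset threshold.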
Soft Ultrathin Electronics Innervated Adaptive Fully Soft Robots.
Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Kim, Hojin; Rao, Zhoulyu; Li, Yuhang; Chen, Weiqiu; Song, Jizhou; Verduzco, Rafael; Yu, Cunjiang
2018-03-01
Soft robots outperform conventional hard robots in safety, adaptability, and the complexity of motions they can achieve. The development of fully soft robots, especially ones built entirely from smart soft materials to mimic soft animals, is still nascent. In addition, to date, existing soft robots cannot adapt themselves to the surrounding environment, i.e., sensing and adaptive motion or response, like animals. Here, fully soft robots innervated with compliant ultrathin sensing and actuating electronics, which can sense the environment and perform soft-bodied crawling adaptively, mimicking an inchworm, are reported. The soft robots are constructed with actuators of open-mesh-shaped ultrathin deformable heaters, sensors of single-crystal Si optoelectronic photodetectors, and a thermally responsive artificial muscle of carbon-black-doped liquid-crystal elastomer (LCE-CB) nanocomposite. The results demonstrate that adaptive crawling locomotion can be realized through the conjugation of sensing and actuation, where the sensors sense the environment and the actuators respond correspondingly to control the locomotion autonomously through regulating the deformation of LCE-CB bimorphs and the locomotion of the robots. The strategy of innervating soft sensing and actuating electronics with artificial muscles paves the way for the development of smart autonomous soft robots. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Current challenges in autonomous vehicle development
NASA Astrophysics Data System (ADS)
Connelly, J.; Hong, W. S.; Mahoney, R. B., Jr.; Sparrow, D. A.
2006-05-01
The field of autonomous vehicles is a rapidly growing one, with significant interest from both government and industry sectors. Autonomous vehicles represent the intersection of artificial intelligence (AI) and robotics, combining decision-making with real-time control. Autonomous vehicles are desired for use in search and rescue, urban reconnaissance, mine detonation, supply convoys, and more. The general adage is to use robots for anything dull, dirty, dangerous or dumb. While a great deal of research has been done on autonomous systems, there are only a handful of fielded examples incorporating machine autonomy beyond the level of teleoperation, especially in outdoor/complex environments. In an attempt to assess and understand the current state of the art in autonomous vehicle development, a few areas where unsolved problems remain became clear. This paper outlines those areas and provides suggestions for the focus of science and technology research. The first step in evaluating the current state of autonomous vehicle development was to develop a definition of autonomy. A number of autonomy level classification systems were reviewed. The resulting working definitions and classification schemes used by the authors are summarized in the opening sections of the paper. The remainder of the report discusses current approaches and challenges in decision-making and real-time control for autonomous vehicles. Suggested research focus areas for near-, mid-, and long-term development are also presented.
Trusted Remote Operation of Proximate Emergency Robots (TROOPER): DARPA Robotics Challenge
2015-12-01
sensor in each of the robot's feet. Additionally, there is a 6-axis IMU that sits in the robot's pelvis cage. While testing before the Finals, the… Many of the controllers in the autonomic layer have overlapping requirements, such as filtered IMU and force torque data from the robot… the following services during the DRC: IMU Filtering; Force Torque Filtering; Joint State Publishing; TF (Transform) Broadcasting; Robot Pose
Precharged Pneumatic Soft Actuators and Their Applications to Untethered Soft Robots.
Li, Yunquan; Chen, Yonghua; Ren, Tao; Li, Yingtian; Choi, Shiu Hong
2018-06-20
The past decade has witnessed tremendous progress in soft robotics. Unlike most pneumatic-based methods, we present a new approach to soft robot design based on precharged pneumatics (PCP). We propose a PCP soft bending actuator, which is actuated by precharged air pressure and retracted by inextensible tendons. By pulling or releasing the tendons, the air pressure in the soft actuator is modulated, and hence, its bending angle. The tendons serve in a way similar to pressure-regulating valves that are used in typical pneumatic systems. The linear motion of tendons is transduced into complex motion via the prepressurized bent soft actuator. Furthermore, since a PCP actuator does not need any gas supply, complicated pneumatic control systems used in traditional soft robotics are eliminated. This facilitates the development of compact untethered autonomous soft robots for various applications. Both theoretical modeling and experimental validation have been conducted on a sample PCP soft actuator design. A fully untethered autonomous quadrupedal soft robot and a soft gripper have been developed to demonstrate the superiority of the proposed approach over traditional pneumatic-driven soft robots.
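The precharged-pneumatic principle above can be made concrete with a back-of-envelope model: the sealed chamber holds a fixed amount of gas, so shortening it with the tendon raises the internal pressure roughly per Boyle's law, and the higher gauge pressure drives a larger bend. The geometry and the linear pressure-to-angle map below are assumptions for illustration, not the paper's actuator model.

```python
# Back-of-envelope sketch of the PCP principle: no air supply, so the
# sealed chamber obeys Boyle's law (isothermal); the tendon modulates
# chamber volume and hence pressure and bending. The linear
# pressure-to-angle map is an assumption, not the paper's model.

def chamber_pressure(p0, v0, v):
    """Isothermal ideal gas: p0 * v0 = p * v. Pressures in kPa (absolute)."""
    return p0 * v0 / v

def bending_angle(p, p_atm=101.3, k=2.0):
    """Assumed linear map from gauge pressure (kPa) to bend angle (deg)."""
    return k * max(p - p_atm, 0.0)
```

Releasing the tendon lets the chamber re-expand, dropping the pressure back toward the precharge value, which is how the tendons play the role of the pressure-regulating valves in a conventional pneumatic system.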
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
NASA Astrophysics Data System (ADS)
Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki
We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. We consider developing a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the aged, as an application of the above-mentioned techniques. This paper presents a recognition algorithm, which finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, from binocular images obtained by a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm, with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas which exist in the way of a person with the guide robot.
Tele-assistance for semi-autonomous robots
NASA Technical Reports Server (NTRS)
Rogers, Erika; Murphy, Robin R.
1994-01-01
This paper describes a new approach to semi-autonomous mobile robots. In this approach the robot has sufficient computerized intelligence to function autonomously under a certain set of conditions, while the local system is a cooperative decision-making unit that combines human and machine intelligence. Communication is then allowed to take place in a common mode and in a common language. A number of exception-handling scenarios constructed as a result of experiments with actual sensor data collected from two mobile robots are presented.
NASA Astrophysics Data System (ADS)
Endo, Yoichiro; Balloch, Jonathan C.; Grushin, Alexander; Lee, Mun Wai; Handelman, David
2016-05-01
Control of current tactical unmanned ground vehicles (UGVs) is typically accomplished through two alternative modes of operation, namely, low-level manual control using joysticks and high-level planning-based autonomous control. Each mode has its own merits as well as inherent mission-critical disadvantages. Low-level joystick control is vulnerable to communication delay and degradation, and high-level navigation often depends on uninterrupted GPS signals and/or energy-emissive (non-stealth) range sensors such as LIDAR for localization and mapping. To address these problems, we have developed a mid-level control technique where the operator semi-autonomously drives the robot relative to visible landmarks that are commonly recognizable by both humans and machines such as closed contours and structured lines. Our novel solution relies solely on optical and non-optical passive sensors and can be operated under GPS-denied, communication-degraded environments. To control the robot using these landmarks, we developed an interactive graphical user interface (GUI) that allows the operator to select landmarks in the robot's view and direct the robot relative to one or more of the landmarks. The integrated UGV control system was evaluated based on its ability to robustly navigate through indoor environments. The system was successfully field tested with QinetiQ North America's TALON UGV and Tactical Robot Controller (TRC), a ruggedized operator control unit (OCU). We found that the proposed system is indeed robust against communication delay and degradation, and provides the operator with steady and reliable control of the UGV in realistic tactical scenarios.
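The mid-level "drive relative to a landmark" behavior described above can be sketched as a simple controller: once the operator selects a landmark, the robot turns to hold its bearing and closes range to a stand-off distance. The gains, stand-off, and command format below are illustrative assumptions, not the TALON/TRC implementation.

```python
import math

# Sketch of landmark-relative driving: turn toward the selected
# landmark's bearing and approach to a stand-off distance.
# Gains and tolerances are assumptions for illustration.

def relative_drive_cmd(robot_pose, landmark_xy, standoff=1.0,
                       k_turn=1.0, k_drive=0.5):
    """Return (linear, angular) velocity commands toward `landmark_xy`.
    robot_pose = (x, y, heading); stops within `standoff` metres."""
    x, y, th = robot_pose
    lx, ly = landmark_xy
    rng = math.hypot(lx - x, ly - y)
    bearing = math.atan2(ly - y, lx - x) - th
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    linear = k_drive * max(rng - standoff, 0.0)
    return linear, k_turn * bearing
```

Because the landmark is tracked in the robot's own camera view, this loop needs no GPS and no active range sensor, which is what makes the approach viable in GPS-denied, communication-degraded settings.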
Sample Return Robot Centennial Challenge
2012-06-15
University of Waterloo (Canada) Robotics Team members test their robot on the practice field one day prior to the NASA-WPI Sample Return Robot Centennial Challenge, Friday, June 15, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-14
A University of Waterloo Robotics Team member tests their robot on the practice field two days prior to the NASA-WPI Sample Return Robot Centennial Challenge, Thursday, June 14, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Experiences with the JPL telerobot testbed: Issues and insights
NASA Technical Reports Server (NTRS)
Stone, Henry W.; Balaram, Bob; Beahan, John
1989-01-01
The Jet Propulsion Laboratory's (JPL) Telerobot Testbed is an integrated robotic testbed used to develop, implement, and evaluate the performance of advanced concepts in autonomous, tele-autonomous, and tele-operated control of robotic manipulators. Using the Telerobot Testbed, researchers demonstrated several of the capabilities and technological advances in the control and integration of robotic systems which have been under development at JPL for several years. In particular, the Telerobot Testbed was recently employed to perform a nearly completely automated, end-to-end, satellite grapple and repair sequence. The task of integrating existing as well as new concepts in robot control into the Telerobot Testbed has been a very difficult and time-consuming one. Now that researchers have completed the first major milestone (i.e., the end-to-end demonstration), it is important to reflect on these experiences and to collect the knowledge that has been gained so that improvements can be made to the existing system. These experiences are also believed to be of value to others in the robotics community. Therefore, the primary objective here will be to use the Telerobot Testbed as a case study to identify real problems and technological gaps which exist in the areas of robotics and, in particular, systems integration. Such problems have surely hindered the development of what could be reasonably called an intelligent robot. In addition to identifying such problems, researchers briefly discuss what approaches have been taken to resolve them or, in several cases, to circumvent them until better approaches can be developed.
Autonomous surgical robotics using 3-D ultrasound guidance: feasibility study.
Whitman, John; Fronheiser, Matthew P; Ivancevich, Nikolas M; Smith, Stephen W
2007-10-01
The goal of this study was to test the feasibility of using a real-time 3D (RT3D) ultrasound scanner with a transthoracic matrix array transducer probe to guide an autonomous surgical robot. Employing a fiducial alignment mark on the transducer to orient the robot's frame of reference and using simple thresholding algorithms to segment the 3D images, we tested the accuracy of using the scanner to automatically direct a robot arm that touched two needle tips together within a water tank. RMS measurement error was 3.8% or 1.58 mm for an average path length of 41 mm. Using these same techniques, the autonomous robot also performed simulated needle biopsies of a cyst-like lesion in a tissue phantom. This feasibility study shows the potential for 3D ultrasound guidance of an autonomous surgical robot for simple interventional tasks, including lesion biopsy and foreign body removal.
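The "simple thresholding" segmentation step described above can be illustrated as follows. A minimal sketch with hypothetical names, using a synthetic volume in place of RT3D scanner data: voxels above an intensity threshold are treated as the target, and their centroid gives the tip location in voxel coordinates.

```python
import numpy as np

def locate_tip(volume, threshold):
    """Segment a bright target (e.g. a needle tip) in a 3-D ultrasound volume
    by intensity thresholding, then return the centroid (in voxel coordinates)
    of all voxels above the threshold, or None if nothing exceeds it."""
    mask = volume > threshold
    if not mask.any():
        return None
    coords = np.argwhere(mask)      # (N, 3) voxel indices above threshold
    return coords.mean(axis=0)      # centroid in voxel coordinates

# synthetic 20x20x20 volume with one bright cluster standing in for the tip
vol = np.zeros((20, 20, 20))
vol[5:8, 10:13, 4:7] = 200.0
tip = locate_tip(vol, threshold=100.0)   # ~ [6, 11, 5]
```

In the study, a voxel centroid like this would still have to be mapped through the transducer's fiducial alignment into the robot's frame before commanding the arm.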
Controlling Herds of Cooperative Robots
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.
2006-01-01
A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.
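The potential-fields approach proposed above for guiding a herd member is the classic attractive/repulsive formulation: the goal pulls, nearby obstacles (or crowded herd-mates) push. A minimal sketch; the gains and ranges are illustrative assumptions, not values from the document:

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rep_range=2.0):
    """One guidance step from a classic potential field: attractive pull toward
    the goal plus repulsive push away from obstacles inside rep_range.
    Returns a 2-D velocity command for a single herd member."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    v = k_att * (goal - pos)                        # attractive term
    for obs in obstacles:
        d = pos - np.asarray(obs, float)
        dist = np.linalg.norm(d)
        if 1e-9 < dist < rep_range:                 # repulsion only when close
            v += k_rep * (1.0 / dist - 1.0 / rep_range) * d / dist**2
    return v
```

Treating other robots as soft obstacles in the same field is one way such a scheme keeps the herd dispersed while the leader's goal assignments pull each member toward its target.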
Sample Return Robot Centennial Challenge
2012-06-15
Intrepid Systems robot, foreground, and the University of Waterloo (Canada) robot, take to the practice field on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Robot teams will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Development of a mobile robot for the 1995 AUVS competition
NASA Astrophysics Data System (ADS)
Matthews, Bradley O.; Ruthemeyer, Michael A.; Perdue, David; Hall, Ernest L.
1995-12-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system, where even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected through a commercial tracking device, which communicates the X,Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results, showing that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.
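The micro-controller's job, turning six ultrasonic range readings into a single steering-angle correction, can be sketched as follows. The transducer bearings, gains, and sign conventions here are assumptions, not the paper's calibration:

```python
import math

# assumed bearings (radians) of the six ultrasonic transducers, left to right;
# negative = left of the vehicle centerline, positive = right
BEARINGS = [-1.2, -0.7, -0.25, 0.25, 0.7, 1.2]

def steering_correction(ranges_m, safe_dist=2.0, gain=0.4):
    """Turn each short range reading into a push away from that transducer's
    bearing and sum the pushes into one steering-angle correction (radians,
    positive = steer left, i.e. away from an obstacle on the right)."""
    correction = 0.0
    for bearing, r in zip(BEARINGS, ranges_m):
        if r < safe_dist:   # obstacle inside the safety bubble
            correction += math.copysign(gain * (safe_dist - r) / safe_dist, bearing)
    return correction
```

Sending only this single correction over the serial line, rather than raw echo timings, is what makes the avoidance module independent of the supervising computer.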
SAMURAI: Polar AUV-Based Autonomous Dexterous Sampling
NASA Astrophysics Data System (ADS)
Akin, D. L.; Roberts, B. J.; Smith, W.; Roderick, S.; Reves-Sohn, R.; Singh, H.
2006-12-01
While autonomous undersea vehicles are increasingly being used for surveying and mapping missions, as of yet there has been little concerted effort to create a system capable of performing physical sampling or other manipulation of the local environment. This type of activity has typically been performed under teleoperated control from ROVs, which provides high-bandwidth real-time human direction of the manipulation activities. Manipulation from an AUV will require a completely autonomous sampling system, which implies not only advanced technologies such as machine vision and autonomous target designation, but also dexterous robot manipulators to perform the actual sampling without human intervention. As part of the NASA Astrobiology Science and Technology for Exploring the Planets (ASTEP) program, the University of Maryland Space Systems Laboratory has been adapting and extending robotics technologies developed for spacecraft assembly and maintenance to the problem of autonomous sampling of biologicals and soil samples around hydrothermal vents. The Sub-polar ice Advanced Manipulator for Universal Sampling and Autonomous Intervention (SAMURAI) system comprises a 6000-meter-capable six-degree-of-freedom dexterous manipulator, along with an autonomous vision system, multi-level control system, and sampling end effectors and storage mechanisms to allow collection of samples from vent fields. SAMURAI will be integrated onto the Woods Hole Oceanographic Institute (WHOI) Jaguar AUV, and used in the Arctic during the fall of 2007 for autonomous vent field sampling on the Gakkel Ridge. Under the current operations concept, the JAGUAR and PUMA AUVs will survey the water column and localize on hydrothermal vents. Early mapping missions will create photomosaics of the vents and local surroundings, allowing scientists on the mission to designate desirable sampling targets.
Based on physical characteristics such as size, shape, and coloration, the targets will be loaded into the SAMURAI control system, and JAGUAR (with SAMURAI mounted to the lower forward hull) will return to the designated target areas. Once on site, vehicle control will be turned over to the SAMURAI controller, which will perform vision-based guidance to the sampling site and will then ground the AUV to the sea bottom for stability. The SAMURAI manipulator will collect samples, such as sessile biologicals, geological samples, and (potentially) vent fluids, and store the samples for the return trip. After several hours of sampling operations on one or several sites, JAGUAR control will be returned to the WHOI onboard controller for the return to the support ship. (Operational details of AUV operations on the Gakkel Ridge mission are presented in other papers at this conference.) Between sorties, SAMURAI end effectors can be changed out on the surface for specific targets, such as push cores or larger biologicals such as tube worms. In addition to the obvious challenges in autonomous vision-based manipulator control from a free-flying support vehicle, significant development challenges have been the design of a highly capable robotic arm within the mass limitations (both wet and dry) of the JAGUAR vehicle, the development of a highly robust manipulator with modular maintenance units for extended polar operations, and the creation of a robot-based sample collection and holding system for multiple heterogeneous samples on a single extended sortie.
NASA Astrophysics Data System (ADS)
Murata, Naoya; Katsura, Seiichiro
Acquisition of information about the environment around a mobile robot is important for purposes such as controlling the robot from a remote location and for situations in which the robot is running autonomously. In much existing research, audiovisual information is used. However, acquisition of information about force sensation, which is part of the environmental information, has not been well studied. The mobile-hapto, a remote control system with force information, has been proposed, but the robot used in that system can acquire only the horizontal component of forces. For this reason, in this research, a three-wheeled mobile robot consisting of seven actuators was developed and its control system was constructed. It can obtain information on horizontal and vertical forces without using force sensors. By using this robot, detailed information on the forces in the environment can be acquired, and the operability of the robot and its capability to adjust to the environment are expected to improve.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quintana, John P.
This paper reports on progress toward creating semi-autonomous motion control platforms for beamline applications using the iRobot Create platform. The goal is to create beamline research instrumentation whose motion paths are based on the local environment rather than positions commanded from a control system, and which has low integration costs and is also scalable and easily maintainable.
Mathematical Modeling Of The Terrain Around A Robot
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1992-01-01
In a conceptual system for modeling the terrain around an autonomous mobile robot, the representation of terrain used for control is separated from the representation provided by sensors. The concept takes the motion-planning system out from under the constraints imposed by the discrete spatial intervals of square terrain grids. The separation allows the sensing and motion-controlling systems to operate asynchronously, facilitating the integration of new map and sensor data into the planning of motions.
A neural network-based exploratory learning and motor planning system for co-robots
Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano
2015-01-01
Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640
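The "learning by doing" scheme above, babbling motor commands and then inverting the learned sensorimotor mapping to select actions, can be illustrated with a toy forward model standing in for the Calliope. Everything here (the plant dynamics, names, and a nearest-neighbor inverse model) is a hypothetical sketch, not the paper's neural network:

```python
import random

def simulate_plant(cmd):
    """Stand-in for the robot's unknown forward dynamics: maps a (left, right)
    wheel command to a sensed (forward, turn) displacement."""
    left, right = cmd
    return (0.5 * (left + right), 0.1 * (right - left))

def babble(n=500, seed=0):
    """Exploratory learning phase: try random commands, remember what each did."""
    rng = random.Random(seed)
    memory = []
    for _ in range(n):
        cmd = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        memory.append((simulate_plant(cmd), cmd))
    return memory

def inverse_model(memory, desired):
    """Pick the remembered command whose observed outcome is nearest the goal."""
    return min(memory,
               key=lambda m: (m[0][0] - desired[0])**2 + (m[0][1] - desired[1])**2)[1]

mem = babble()
cmd = inverse_model(mem, desired=(0.4, 0.0))   # want forward motion, no turn
```

The point of the sketch is the structure, not the model class: babbling builds an action-to-outcome memory without supervision, and control is then outcome-to-action lookup, which the paper realizes with an adaptive neural network rather than nearest neighbors.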
Variants of guided self-organization for robot control.
Martius, Georg; Herrmann, J Michael
2012-09-01
Autonomous robots can generate exploratory behavior by self-organization of the sensorimotor loop. We show that the behavioral manifold that is covered in this way can be modified in a goal-dependent way without reducing the self-induced activity of the robot. We present three strategies for guided self-organization, namely by using external rewards, a problem-specific error function, or assumptions about the symmetries of the desired behavior. The strategies are analyzed for two different robots in a physically realistic simulation.
Intelligent Autonomy for Unmanned Surface and Underwater Vehicles
NASA Technical Reports Server (NTRS)
Huntsberger, Terry; Woodward, Gail
2011-01-01
As the Autonomous Underwater Vehicle (AUV) and Autonomous Surface Vehicle (ASV) platforms mature in endurance and reliability, a natural evolution will occur towards longer, more remote autonomous missions. This evolution will require the development of key capabilities that allow these robotic systems to perform a high level of on-board decision making, which would otherwise be performed by human operators. With more decision-making capabilities, less a priori knowledge of the area of operations would be required, as these systems would be able to sense and adapt to changing environmental conditions, such as unknown topography, currents, obstructions, bays, harbors, islands, and river channels. Existing vehicle sensors would be dual-use: they would be utilized for the primary mission, which may be mapping or hydrographic reconnaissance, as well as for autonomous hazard avoidance, route planning, and bathymetric-based navigation. This paper describes a tightly integrated instantiation of an autonomous agent called CARACaS (Control Architecture for Robotic Agent Command and Sensing), developed at JPL (Jet Propulsion Laboratory), that was designed to address many of the issues for survivable ASV/AUV control and to provide adaptive mission capabilities. The results of some on-water tests with US Navy technology test platforms are also presented.
Telerobot local-remote control architecture for space flight program applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John
1993-01-01
The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.
Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph
2017-09-26
Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for their autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.
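The path-planning component described above, generating collision-free routes and replanning whenever the detected obstacle map changes, can be sketched with a breadth-first search over an occupancy grid. This is a generic stand-in for route generation, not the authors' planner:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search for a shortest collision-free route on an occupancy
    grid (0 = free, 1 = obstacle). Returns the list of cells from start to
    goal, or None if unreachable. Replanning for a dynamically changing scene
    is simply calling this again with the updated grid."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
route = plan_route(grid, (0, 0), (2, 0))   # detours around the wall in row 1
```

In the microvehicle system, the grid would come from the CCD camera's real-time obstacle detection, and the electromagnetic torque would steer the Janus particle along the returned waypoints.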
Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements
Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.
2012-01-01
This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. Besides, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962
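The OOS problem itself is easy to state: a measurement arrives after later-stamped measurements have already been fused. The simplest correct (but expensive) remedy, keeping the history and re-running the filter in timestamp order, can be sketched with a 1-D Kalman filter. This illustrates the baseline that the surveyed algorithms improve on by avoiding full reprocessing; all names and noise parameters here are assumptions:

```python
class OOSFilter:
    """Rollback strategy for out-of-sequence measurements: keep the initial
    state and the measurement history, and re-run a 1-D constant-position
    Kalman filter in timestamp order whenever any measurement arrives."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x0, self.p0, self.q, self.r = x0, p0, q, r
        self.history = []                     # (timestamp, measurement)

    def add(self, t, z):
        """Insert a (possibly delayed) measurement and return the new estimate."""
        self.history.append((t, z))
        self.history.sort(key=lambda m: m[0])  # restore timestamp order
        return self._refilter()

    def _refilter(self):
        x, p = self.x0, self.p0
        for _, z in self.history:
            p += self.q                        # predict
            k = p / (p + self.r)               # Kalman gain
            x += k * (z - x)                   # update
            p *= (1.0 - k)
        return x

f = OOSFilter()
f.add(1.0, 1.0)
f.add(3.0, 1.2)
est = f.add(2.0, 1.1)    # delayed (out-of-sequence) measurement
```

A key property, which the test below checks, is order invariance: the estimate after all three measurements is identical whether the t=2 measurement arrived on time or late. The algorithms surveyed in the paper achieve (approximately) this result at far lower computational and memory cost.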
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
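The artificial-evolution approach above, a genetic algorithm searching over controller parameters with fitness measured on the behavioral task, can be illustrated on a toy light-following task. The single linear unit, fitness cases, and GA settings here are hypothetical stand-ins for the authors' recurrent dynamical networks and extended GA:

```python
import random

def controller(weights, sensors):
    """Tiny 'neural' controller: one linear unit mapping two sensor readings
    to a motor signal."""
    return weights[0] * sensors[0] + weights[1] * sensors[1] + weights[2]

def fitness(weights):
    """Reward controllers that turn toward the brighter side: the desired
    motor signal is the left/right sensor difference."""
    cases = [((1.0, 0.0), 1.0), ((0.0, 1.0), -1.0), ((0.5, 0.5), 0.0)]
    return -sum((controller(weights, s) - target) ** 2 for s, target in cases)

def evolve(generations=200, pop_size=30, seed=1):
    """Elitist GA: keep the better half each generation, refill the population
    with Gaussian-mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        pop = parents + [[w + rng.gauss(0, 0.1) for w in rng.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```

Note that, as in the paper's argument, nothing in the fitness function names "vision processing": the controller is rewarded only for behavior, and exploiting the sensors is something the evolutionary search discovers on its own.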
A three-finger multisensory hand for dexterous space robotic tasks
NASA Technical Reports Server (NTRS)
Murase, Yuichi; Komada, Satoru; Uchiyama, Takashi; Machida, Kazuo; Akita, Kenzo
1994-01-01
The National Space Development Agency of Japan will launch ETS-7 in 1997 as a test bed for next-generation space technologies of rendezvous and docking (RV&D) and space robotics. MITI has been developing a three-finger multisensory hand for complex space robotic tasks. The hand can be operated under remote control or autonomously. This paper describes the design and development of the hand and the performance of a breadboard model.
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver, left, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver, right, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
NASA Astrophysics Data System (ADS)
Fornas, D.; Sales, J.; Peñalver, A.; Pérez, J.; Fernández, J. J.; Marín, R.; Sanz, P. J.
2016-03-01
This article presents research on the subject of autonomous underwater robot manipulation. Ongoing research in underwater robotics intends to increase the autonomy of intervention operations that require physical interaction in order to achieve social benefits in fields such as archaeology or biology that cannot afford the expenses of costly underwater operations using remote operated vehicles. Autonomous grasping is still a very challenging skill, especially in underwater environments, with highly unstructured scenarios, limited availability of sensors and adverse conditions that affect the robot perception and control systems. To tackle these issues, we propose the use of vision and segmentation techniques that aim to improve the specification of grasping operations on underwater primitive shaped objects. Several sources of stereo information are used to gather 3D information in order to obtain a model of the object. Using a RANSAC segmentation algorithm, the model parameters are estimated and a set of feasible grasps are computed. This approach is validated in both simulated and real underwater scenarios.
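The RANSAC segmentation step above, estimating primitive-shape parameters from noisy 3D stereo data in the presence of outliers, can be sketched for the simplest primitive, a plane. Thresholds, iteration counts, and the synthetic scene are illustrative assumptions:

```python
import random
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Estimate a plane (unit normal n and offset d, with n.p + d = 0) from
    noisy 3-D points by RANSAC: repeatedly fit a candidate plane to 3 random
    points and keep the candidate with the most inliers within tol."""
    rng = random.Random(seed)
    pts = np.asarray(points, float)
    best_n, best_d, best_count = None, None, -1
    for _ in range(iters):
        sample = pts[rng.sample(range(len(pts)), 3)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n.dot(sample[0])
        count = int(np.sum(np.abs(pts @ n + d) < tol))
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d, best_count

# synthetic scene: a slightly noisy z = 0 plane plus scattered outliers
gen = np.random.default_rng(0)
plane_pts = np.column_stack([gen.uniform(-1, 1, 100), gen.uniform(-1, 1, 100),
                             gen.normal(0, 0.01, 100)])
outliers = gen.uniform(-1, 1, (20, 3))
n, d, inliers = ransac_plane(np.vstack([plane_pts, outliers]))
```

In the grasping pipeline, the same consensus idea is applied to richer primitives (e.g. cylinders or spheres), and the recovered parameters then define the set of feasible grasps on the object.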
Mobile app for human-interaction with sitter robots
NASA Astrophysics Data System (ADS)
Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.
2017-05-01
Human environments are often unstructured and unpredictable, making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits for the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation of robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and the human each perform the tasks at which they are expert. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure-mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions.
In this paper, we present the software and hardware framework that enable a patient sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.
An Intention-Driven Semi-autonomous Intelligent Robotic System for Drinking.
Zhang, Zhijun; Huang, Yongqian; Chen, Siyuan; Qu, Jun; Pan, Xin; Yu, Tianyou; Li, Yuanqing
2017-01-01
In this study, an intention-driven semi-autonomous intelligent robotic (ID-SIR) system is designed and developed to assist severely disabled patients to live independently. The system mainly consists of a non-invasive brain-machine interface (BMI) subsystem, a robot manipulator, and a visual detection and localization subsystem. Unlike most existing systems, which are remotely controlled by joystick, head tracking, or eye tracking, the proposed ID-SIR system acquires the intention directly from the user's brain. Compared with state-of-the-art systems that work only for a specific object in a fixed place, the designed ID-SIR system can grasp any desired object in a random place chosen by a user and deliver it to his/her mouth automatically. As one of the main advantages of the ID-SIR system, the patient is only required to send one intention command for one drinking task, and the autonomous robot finishes the rest of the specific control tasks, which greatly eases the burden on patients. Eight healthy subjects attended our experiment, which contained 10 tasks for each subject. In each task, the proposed ID-SIR system delivered the desired beverage container to the mouth of the subject and then put it back in its original position. The mean accuracy over the eight subjects was 97.5%, which demonstrated the effectiveness of the ID-SIR system.
Sample Return Robot Centennial Challenge
2012-06-15
Intrepid Systems robot "MXR - Mark's Exploration Robot" takes to the practice field and tries to capture the white object in the foreground on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Intrepid Systems' robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
Children visiting the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event try to catch basketballs being thrown by a robot from FIRST Robotics at Burncoat High School (Mass.) on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Autonomous Motion Learning for Intra-Vehicular Activity Space Robot
NASA Astrophysics Data System (ADS)
Watanabe, Yutaka; Yairi, Takehisa; Machida, Kazuo
Space robots will be needed in future space missions. Many types of space robots have been developed so far, but Intra-Vehicular Activity (IVA) space robots, which support human activities, should in particular be developed to reduce risks to humans in space. In this paper, we study a motion learning method for an IVA space robot with a multi-link mechanism. The advantage is that this space robot moves using the reaction forces of the multi-link mechanism and contact forces from the wall, as in an astronaut's space walk, rather than using propulsion. The control approach is based on reinforcement learning with the actor-critic algorithm. We demonstrate the effectiveness of this approach using a 5-link space robot model in simulation. First, we simulate a space robot learning motion control, including a contact phase, in the two-dimensional case. Next, we simulate a space robot learning motion control while changing its base attitude in the three-dimensional case.
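The actor-critic approach named in the abstract can be illustrated with a minimal tabular sketch. The example below is not the paper's controller (which learns multi-link contact motions); it is a generic actor-critic loop on a hypothetical 1-D chain task, with the learning rates, episode count, and task layout chosen purely for illustration.

```python
import math
import random

def softmax(prefs):
    """Convert action preferences into a probability distribution."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train(episodes=500, n=5, alpha=0.1, beta=0.1, gamma=0.95, seed=0):
    """Tabular actor-critic on a 1-D chain: start at state 0, reward +1 at state n.
    Actor: softmax over per-state action preferences (left/right).
    Critic: state values updated from the TD error."""
    rng = random.Random(seed)
    V = [0.0] * (n + 1)                     # critic: state values
    H = [[0.0, 0.0] for _ in range(n + 1)]  # actor: preferences [left, right]
    for _ in range(episodes):
        s = 0
        while s < n:
            probs = softmax(H[s])
            a = 0 if rng.random() < probs[0] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n else 0.0
            # TD error; the terminal state bootstraps with value 0
            delta = r + (gamma * V[s2] if s2 < n else 0.0) - V[s]
            V[s] += alpha * delta                          # critic update
            H[s][a] += beta * delta * (1 - probs[a])       # actor update
            H[s][1 - a] -= beta * delta * probs[1 - a]     # (softmax policy gradient)
            s = s2
    return V, H
```

After training, the preference for moving toward the goal dominates in every state, which is the same learn-by-interaction loop the paper applies to joint motions.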
NASA Technical Reports Server (NTRS)
Montemerlo, Melvin
1988-01-01
The Autonomous Systems focus on the automation of control systems for the Space Station and mission operations. Telerobotics focuses on automation for in-space servicing, assembly, and repair. The Autonomous Systems and Telerobotics each have a planned sequence of integrated demonstrations showing the evolutionary advance of the state-of-the-art. Progress is briefly described for each area of concern.
Design of a Prototype Autonomous Amphibious WHEGS(Trademark) Robot for Surf-Zone Operations
2005-06-01
[List-of-figures excerpt: a control loop diagram; Figure 7, Physical Layout (without GPS bracket); Figure 8, Side View showing GPS Bracket; followed by the start of a "Body Construction" section.]
Obstacle avoidance system with sonar sensing and fuzzy logic
NASA Astrophysics Data System (ADS)
Chiang, Wen-chuan; Kelkar, Nikhal; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of an obstacle avoidance system using sonar sensors for a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. The obstacle avoidance system is based on a micro-controller interfaced with multiple ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a distance measurement back to the computer via the serial line. This design yields a portable independent system. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous obstacle avoidance controller applicable for any mobile vehicle with only minor adaptations.
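As an illustration of how sonar ranges can drive a fuzzy avoidance rule, here is a minimal two-rule sketch. It is not the controller described in the paper; the membership shape, the 150 cm range cutoff, and the ±30° output are hypothetical values.

```python
def near(d_cm, far_cm=150.0):
    """Fuzzy membership of 'obstacle is near': 1.0 at contact,
    ramping linearly down to 0.0 at far_cm and beyond."""
    return max(0.0, min(1.0, 1.0 - d_cm / far_cm))

def steer_correction(left_cm, right_cm, max_turn_deg=30.0):
    """Two-rule fuzzy avoidance from a pair of sonar ranges.
    Rule 1: IF left is near THEN steer right (+max_turn_deg).
    Rule 2: IF right is near THEN steer left (-max_turn_deg).
    Defuzzified as a weighted average of the rule outputs."""
    w_left, w_right = near(left_cm), near(right_cm)
    if w_left + w_right == 0.0:
        return 0.0  # clear path: no steering correction
    return (w_left * max_turn_deg + w_right * -max_turn_deg) / (w_left + w_right)
```

A real controller would add more linguistic terms (e.g., "very near", "far") and rules, but the pattern of fuzzification, rule firing, and defuzzification is the same.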
Spatial abstraction for autonomous robot navigation.
Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon
2015-09-01
Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.
Towards Principled Experimental Study of Autonomous Mobile Robots
NASA Technical Reports Server (NTRS)
Gat, Erann
1995-01-01
We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed; 2) the probability distributions of these performance metrics are exponential rather than normal; and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.
Cooperative path following control of multiple nonholonomic mobile robots.
Cao, Ke-Cai; Jiang, Bin; Yue, Dong
2017-11-01
The cooperative path following control problem for multiple nonholonomic mobile robots is considered in this paper. Based on a decomposition framework, the cooperative path following problem is transformed into a path following problem and a cooperative control problem; the cascade theory of non-autonomous systems is then employed in the design of the controllers, without resorting to feedback linearization. A time-varying coordinate transformation based on dilation is introduced to solve the uncontrollability problem of nonholonomic robots when the whole group's reference converges to a stationary point. Cooperative path following controllers for nonholonomic robots are proposed for a persistent reference and for a reference target that converges to a stationary point, respectively. Simulation results in Matlab illustrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Elfes, Alberto; Podnar, Gregg W.; Dolan, John M.; Stancliff, Stephen; Lin, Ellie; Hosler, Jeffrey C.; Ames, Troy J.; Higinbotham, John; Moisan, John R.; Moisan, Tiffany A.;
2008-01-01
Earth science research must bridge the gap between the atmosphere and the ocean to foster understanding of Earth's climate and ecology. Ocean sensing is typically done with satellites, buoys, and crewed research ships. The limitations of these systems include the fact that satellites are often blocked by cloud cover, and buoys and ships have spatial coverage limitations. This paper describes a multi-robot science exploration software architecture and system called the Telesupervised Adaptive Ocean Sensor Fleet (TAOSF). TAOSF supervises and coordinates a group of robotic boats, the OASIS platforms, to enable in-situ study of phenomena in the ocean/atmosphere interface, as well as on the ocean surface and sub-surface. The OASIS platforms are extended deployment autonomous ocean surface vehicles, whose development is funded separately by the National Oceanic and Atmospheric Administration (NOAA). TAOSF allows a human operator to effectively supervise and coordinate multiple robotic assets using a sliding autonomy control architecture, where the operating mode of the vessels ranges from autonomous control to teleoperated human control. TAOSF increases data-gathering effectiveness and science return while reducing demands on scientists for robotic asset tasking, control, and monitoring. The first field application chosen for TAOSF is the characterization of Harmful Algal Blooms (HABs). We discuss the overall TAOSF architecture, describe field tests conducted under controlled conditions using rhodamine dye as a HAB simulant, present initial results from these tests, and outline the next steps in the development of TAOSF.
Context recognition and situation assessment in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Yavnai, Arie
1993-05-01
The capability to recognize the operating context and to assess the situation in real time is needed if a high-functionality autonomous mobile robot is to react properly and effectively to continuously changing situations and events, either external or internal, while performing its assigned tasks. A new approach and architecture for a context recognition and situation assessment module (CORSA) is presented in this paper. CORSA is a multi-level information processing module which consists of adaptive decision and classification algorithms. It performs dynamic mapping from the data space to the context space and dynamically decides on the context class. A learning mechanism is employed to update the decision variables so as to minimize the probability of misclassification. CORSA is embedded within the Mission Manager module of the intelligent autonomous hyper-controller (IAHC) of the mobile robot. The information regarding operating context, events, and situation is then communicated to other modules of the IAHC, where it is used to: (a) select the appropriate action strategy; (b) support the processes of arbitration and conflict resolution between reflexive behaviors and reasoning-driven behaviors; (c) predict future events and situations; and (d) determine criteria and priorities for planning, replanning, and decision making.
Sample Return Robot Centennial Challenge
2012-06-16
"Harry" a Goldendoodle is seen wearing a NASA backpack during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
Team members of "Survey" drive their robot around the campus on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Survey team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas
2013-08-01
This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Applications of CLIPS, NETS, and fuzzy control to robot navigation are also of interest.
Design of a Vision-Based Sensor for Autonomous Pig House Cleaning
NASA Astrophysics Data System (ADS)
Braithwaite, Ian; Blanke, Mogens; Zhang, Guo-Qiang; Carstensen, Jens Michael
2005-12-01
Current pig house cleaning procedures are hazardous to the health of farm workers, and yet necessary if the spread of disease between batches of animals is to be satisfactorily controlled. Autonomous cleaning using robot technology offers salient benefits. This paper addresses the feasibility of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning.
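The Bayesian discriminator mentioned can be sketched as a two-class decision on a single intensity feature. This is a simplified illustration, not the paper's classifier (which is built on measured spectral properties of real surfaces); the Gaussian class models, their parameters, and the prior are hypothetical placeholders that would be fitted to labeled clean/dirty samples.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a 1-D normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def classify_pixel(intensity, clean_mean, clean_std, dirty_mean, dirty_std,
                   p_dirty=0.3):
    """Two-class Bayesian discriminator: label a pixel 'dirty' when
    P(dirty) * p(intensity | dirty) exceeds P(clean) * p(intensity | clean),
    which is equivalent to comparing the two posteriors."""
    score_dirty = gaussian_pdf(intensity, dirty_mean, dirty_std) * p_dirty
    score_clean = gaussian_pdf(intensity, clean_mean, clean_std) * (1 - p_dirty)
    return "dirty" if score_dirty > score_clean else "clean"
```

With illumination designed so the two class distributions barely overlap (the paper's point), this threshold rule achieves a low probability of misclassification.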
An Analysis of Navigation Algorithms for Smartphones Using J2ME
NASA Astrophysics Data System (ADS)
Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.
Embedded systems are considered one of the most promising areas for future innovation. Two embedded fields that will almost certainly take a primary role in future innovations are mobile robotics and mobile computing. Mobile robots and smartphones are growing in number and functionality, becoming a presence in our daily lives. In this paper, we study the current feasibility of executing navigation algorithms on a smartphone. As a test case, we use a smartphone to control an autonomous mobile robot. We tested three navigation problems: mapping, localization, and path planning. For each of these problems, an algorithm was chosen, developed in J2ME, and tested in the field. Results show the current capacity of mobile Java for executing computationally demanding algorithms and reveal the real possibility of using smartphones for autonomous navigation.
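Of the three navigation problems tested, path planning is the easiest to sketch compactly. The example below shows a shortest-path search on an occupancy grid, written in Python rather than the paper's J2ME; the grid encoding and 4-connected neighborhood are assumptions for illustration, not the algorithm the authors chose.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle).
    Cells are (row, col) tuples. Returns the list of cells from start to goal,
    or None if the goal is unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}       # visited set doubling as a back-pointer map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:       # reconstruct the path by walking back-pointers
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        y, x = cell
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 \
                    and (ny, nx) not in prev:
                prev[(ny, nx)] = cell
                queue.append((ny, nx))
    return None
```

Breadth-first search is memory-hungry on large maps, which is exactly the kind of resource constraint the paper evaluates on smartphone hardware.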
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel-distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world, then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation be time-realistic. The parallel-distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot, and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
Combined virtual and real robotic test-bed for single operator control of multiple robots
NASA Astrophysics Data System (ADS)
Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash
2010-04-01
Teams of heterogeneous robots with different dynamics or capabilities can perform a variety of tasks such as multipoint surveillance, cooperative transport, and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which links every real robot to its virtual counterpart. A novel virtual interface, integrated with Augmented Reality, can monitor the position and sensory information (from video feeds) of ground and aerial robots in the 3D virtual environment and improve user situational awareness. An operator can efficiently control the real robots using the Drag-to-Move method on the virtual robots. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, the image guidance and tracking feature is able to reduce operator workload.
Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois
2017-07-01
A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video, and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What is missing to make them effective remote telepresence systems for home care assistance are the capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors or, in other words, to minimize cognitive load and maximize situation awareness. This paper describes our approach to adding navigation, artificial audition, and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.
Sample Return Robot Centennial Challenge
2012-06-15
Wunderkammer Laboratory Team leader Jim Rothrock, left, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "Cerberus" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Rothrock's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
Self-organization, embodiment, and biologically inspired robotics.
Pfeifer, Rolf; Lungarella, Max; Iida, Fumiya
2007-11-16
Robotics researchers increasingly agree that ideas from biology and self-organization can strongly benefit the design of autonomous robots. Biological organisms have evolved to perform and survive in a world characterized by rapid changes, high uncertainty, indefinite richness, and limited availability of information. Industrial robots, in contrast, operate in highly controlled environments with no or very little uncertainty. Although many challenges remain, concepts from biologically inspired (bio-inspired) robotics will eventually enable researchers to engineer machines for the real world that possess at least some of the desirable properties of biological organisms, such as adaptivity, robustness, versatility, and agility.
2007-06-01
[Table-of-contents excerpt from the thesis, listing sections on thesis organization, validation, XSLT, X3D Earth, and using AUVW for simulation.]
Sample Return Robot Centennial Challenge
2012-06-16
Intrepid Systems Team member Mark Curry, left, talks with NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck, right, about his robot named "MXR - Mark's Exploration Robot" on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-15
Intrepid Systems Team member Mark Curry, right, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "MXR - Mark's Exploration Robot" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles
NASA Technical Reports Server (NTRS)
Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.
2001-01-01
The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged naturally between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter, and an advanced user interface gives the user and the helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.
NASA Astrophysics Data System (ADS)
Shatravin, V.; Shashev, D. V.
2018-05-01
Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. The results of various global research efforts prove the efficiency of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
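The grayscale morphological operations named in the abstract reduce, per pixel, to a minimum (erosion) or maximum (dilation) over a neighborhood, which is what makes them attractive for parallel reconfigurable hardware. Below is a plain-Python reference sketch, not the reconfigurable-environment implementation; the square structuring element and clamped border handling are assumptions.

```python
def erode(img, k=1):
    """Grayscale erosion with a (2k+1)x(2k+1) square structuring element:
    each pixel becomes the minimum over its neighborhood (clamped at borders)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(img[yy][xx]
                            for yy in range(max(0, y - k), min(h, y + k + 1))
                            for xx in range(max(0, x - k), min(w, x + k + 1)))
    return out

def dilate(img, k=1):
    """Grayscale dilation: each pixel becomes the maximum over its neighborhood."""
    h, w = len(img), len(img[0])
    return [[max(img[yy][xx]
                 for yy in range(max(0, y - k), min(h, y + k + 1))
                 for xx in range(max(0, x - k), min(w, x + k + 1)))
             for x in range(w)] for y in range(h)]
```

Composing them gives the usual filters: `dilate(erode(img))` (opening) suppresses isolated bright speckles, a typical pre-processing step before real-time analysis.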
Robotic control and inspection verification
NASA Technical Reports Server (NTRS)
Davis, Virgil Leon
1991-01-01
Three areas of possible commercialization involving robots at the Kennedy Space Center (KSC) are discussed: a six degree-of-freedom target tracking system for remote umbilical operations; an intelligent torque-sensing end effector for operating hand valves in hazardous locations; and an automatic radiator inspection device, a 13 by 65 foot robotic mechanism involving completely redundant motors, drives, and controls. Aspects of the first two innovations can be integrated to enable robots or teleoperators to perform tasks involving orientation and panel actuation operations with existing technology, rather than waiting for telerobots to incorporate artificial intelligence (AI) to perform 'smart' autonomous operations. The third robot involves the application of complete control hardware redundancy to enable performance of work over and near expensive Space Shuttle hardware. The consumer marketplace may wish to explore commercialization of similar component redundancy techniques for applications where a robot would not normally be used because of reliability concerns.
Women Warriors: Why the Robotics Revolution Changes the Combat Equation
2016-03-01
[Excerpt from PRISM 6, no. 1, "Women Warriors: Why the Robotics Revolution Changes the Combat Equation," by Linell A. Letendre: an underappreciated factor is poised to alter the women-in-combat debate, namely the revolution in robotics and autonomous systems; the article examines developing robotic and autonomous systems and their potential impact on the future of combat.]
Sample Return Robot Centennial Challenge
2012-06-16
Posters for the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event are seen posted around the campus on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
Panoramic of some of the exhibits available on the campus of the Worcester Polytechnic Institute (WPI) during their "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Anthony Shrout)
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition for an autonomous robot utilizing a remote knowledge database. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch, and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects to understand them, including their physical characteristics.
An Integrated Framework for Human-Robot Collaborative Manipulation.
Sheng, Weihua; Thobbi, Anand; Gu, Ye
2015-10-01
This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purpose. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
Cooperative Three-Robot System for Traversing Steep Slopes
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terrance; Aghazarian, Hrand; Younse, Paulo; Garrett, Michael
2009-01-01
Teamed Robots for Exploration and Science in Steep Areas (TRESSA) is a system of three autonomous mobile robots that cooperate with each other to enable scientific exploration of steep terrain (slope angles up to 90°). Originally intended for use in exploring steep slopes on Mars that are not accessible to lone wheeled robots (Mars Exploration Rovers), TRESSA and systems like it could also be used on Earth for performing rescues on steep slopes and for exploring steep slopes that are too remote or too dangerous to be explored by humans. TRESSA is modeled on safe human climbing of steep slopes, two key features of which are teamwork and safety tethers. Two of the autonomous robots, denoted Anchorbots, remain at the top of a slope; the third robot, denoted the Cliffbot, traverses the slope. The Cliffbot drives over the cliff edge supported by tethers, which are paid out from the Anchorbots (see figure). The Anchorbots autonomously control the tension in the tethers to counter the gravitational force on the Cliffbot. The tethers are paid out and reeled in as needed, keeping the body of the Cliffbot oriented approximately parallel to the local terrain surface and preventing wheel slip by controlling the speed of descent or ascent, thereby enabling the Cliffbot to drive freely up, down, or across the slope. Because of the interactive nature of the three-robot system, the robots must be very tightly coupled. To provide this tight coupling, the TRESSA software architecture is built on a combination of (1) the multi-robot layered behavior-coordination architecture reported in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 10 (October 2004), page 65, and (2) the real-time control architecture reported in "Robot Electronics Architecture" (NPO-41784), NASA Tech Briefs, Vol. 32, No. 1 (January 2008), page 28.
The combination architecture makes it possible to keep the three robots synchronized and coordinated, to use data from all three robots for decision-making at each step, and to control the physical connections among the robots. In addition, TRESSA (as in prior systems that have utilized this architecture) incorporates a capability for deterministic response to unanticipated situations from yet another architecture, reported in "Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40. Tether tension control is a major consideration in the design and operation of TRESSA. Tension is measured by force sensors connected to each tether at the Cliffbot. The direction of the tension (both azimuth and elevation) is also measured. The tension controller combines a controller to counter gravitational force and an optional velocity controller that anticipates the motion of the Cliffbot. The gravity controller estimates the slope angle from the inclination of the tethers. This angle and the weight of the Cliffbot determine the total tension needed to counteract the weight of the Cliffbot. The total needed tension is broken into components for each Anchorbot. The difference between this needed tension and the tension measured at the Cliffbot constitutes an error signal that is provided to the gravity controller. The velocity controller computes the tether speed needed to produce the desired motion of the Cliffbot. Another major consideration in the design and operation of TRESSA is detection of faults. Each robot in the TRESSA system monitors its own performance and the performance of its teammates in order to detect any system faults and prevent unsafe conditions. At startup, communication links are tested, and if any robot is not communicating, the system refuses to execute any motion commands.
Prior to motion, the Anchorbots attempt to set tensions in the tethers at optimal levels for counteracting the weight of the Cliffbot; if either Anchorbot fails to reach its optimal tension level within a specified time, it sends a message to the other robots and the commanded motion is not executed. If any mechanical error (e.g., stalling of a motor) is detected, the affected robot sends a message triggering stoppage of the current motion. Lastly, messages are passed among the robots at each time step (10 Hz) to share sensor information during operations. If messages from any robot cease for more than an allowable time interval, the other robots detect the communication loss and initiate stoppage.
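The gravity-control loop described above (estimate the slope from tether inclination, compute the total tension needed to counteract the Cliffbot's weight, split it between the Anchorbots, and form an error signal against the measured tension) can be sketched as follows. This is a hypothetical illustration based only on this description: the equal split between Anchorbots, the proportional gain, and all names are assumptions, not the flight code.

```python
import math

def gravity_controller(weight_n, slope_rad, measured_a_n, measured_b_n, kp=0.1):
    """Toy sketch of the gravity-countering tension loop described above.

    The slope angle (estimated from tether inclination) and the Cliffbot's
    weight give the total tension needed along the slope; the total is split
    between the two Anchorbots (equal split is a simplifying assumption), and
    the difference between needed and measured tension is the error signal.
    Returns per-Anchorbot corrections, proportional control assumed.
    """
    total_needed = weight_n * math.sin(slope_rad)  # weight component along the slope
    needed_each = total_needed / 2.0               # assumed equal split
    err_a = needed_each - measured_a_n
    err_b = needed_each - measured_b_n
    return kp * err_a, kp * err_b

# On a vertical face (90 degrees), a 100 N Cliffbot needs 50 N per tether;
# a slack tether gets tightened, an over-tight one gets paid out.
corr_a, corr_b = gravity_controller(100.0, math.pi / 2, measured_a_n=40.0, measured_b_n=60.0)
```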
Autonomous Fault Detection for Performance Bugs in Component Based Robotic Systems
2016-12-01
platform performs a modified version of the restaurant task from the RoboCup@Home competition 2015 [20]. Here, an operator first guides the robot around a...Control. Berlin: Springer, 2008. DOI: 10.1007/ 978-3-540-76304-8. [18] H. Zou and T. Hastie, “Regularization and variable selection via the elastic net
Sambot II: A self-assembly modular swarm robot
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Wei, Hongxing; Yang, Bo; Jiang, Cancan
2018-04-01
The new-generation self-assembly modular swarm robot Sambot II, which is based on the original self-assembly modular swarm robot Sambot and adopts a laser and camera module for information collection, is introduced in this manuscript. The visual control algorithm of Sambot II is detailed, and the feasibility of the algorithm is verified by laser and camera experiments. At the end of this manuscript, autonomous docking experiments with two Sambot II robots are presented. The experimental results are shown and analyzed to verify the feasibility of the whole Sambot II scheme.
Survey of Command Execution Systems for NASA Spacecraft and Robots
NASA Technical Reports Server (NTRS)
Verma, Vandi; Jonsson, Ari; Simmons, Reid; Estlin, Tara; Levinson, Rich
2005-01-01
NASA spacecraft and robots operate at long distances from Earth. Command sequences generated manually, or by automated planners on Earth, must eventually be executed autonomously onboard the spacecraft or robot. Software systems that execute commands onboard are known variously as execution systems, virtual machines, or sequence engines. Every robotic system requires some sort of execution system, but the level of autonomy and the type of control they are designed for vary greatly. This paper presents a survey of execution systems with a focus on systems relevant to NASA missions.
Computational Mobility: An Overview
NASA Technical Reports Server (NTRS)
Suri, Niranjan
2005-01-01
This viewgraph presentation describes a framework for the autonomous control of robot swarms, which negotiate with each other, delegate authority to their peers, and cooperate in teams to accomplish tasks.
Ascending Stairway Modeling: A First Step Toward Autonomous Multi-Floor Exploration
2012-10-01
Many robotics platforms are capable of ascending stairways, but all existing approaches for autonomous stair climbing use stairway detection as a...the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor building. Our proposed solution to this problem is...over several steps. However, many ground robots are not capable of traversing tight spiral stairs, and so we do not focus on these types. The stairway is
Planning Flight Paths of Autonomous Aerobots
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Sharma, Shivanjli
2009-01-01
Algorithms for planning flight paths of autonomous aerobots (robotic blimps) to be deployed in scientific exploration of remote planets are undergoing development. These algorithms are also adaptable to terrestrial applications involving robotic submarines as well as aerobots and other autonomous aircraft used to acquire scientific data or to perform surveying or monitoring functions.
1998-03-01
34Numerical Recipes in C," second edition, Cambridge University Press, Cambridge England, 1992. Marco, David , "Autonomous Control of Underwater...in the viewer. -202- LIST OF REFERENCES Ames, Andrea L., Nadeau, David R., Moreland, John L., VRML 2.0 Sourcebook, Second edition, John Wiley...McGhee, Bob, "The Phoenix Autonomous Underwater Vehicle," AI-Based Mobile Robots, editors David Kortenkamp, Pete Bonasso and Robin Murphy, MJT/AAAI
Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot
2015-04-01
Ave, Cambridge, MA USA 02139; bSpace and Naval Warfare (SPAWAR) Systems Center Pacific, San Diego, CA USA 92152 ABSTRACT A novel, multi-segmented...high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the...to be demonstrated.14 The Unmanned Systems Group at SPAWAR Systems Center Pacific has developed a multi-segment magnetic crawler robot (MSMR
Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela
2016-02-24
The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
NASA Astrophysics Data System (ADS)
Dağlarli, Evren; Temeltaş, Hakan
2007-04-01
This paper presents an autonomous robot control architecture based on an artificial emotional system. A hidden Markov model is developed as the mathematical background for stochastic emotional and behavioral transitions. The motivation module of the architecture is treated as a behavioral gain generator for achieving multi-objective robot tasks. According to the emotional and behavioral state transition probabilities, artificial emotions determine sequences of behaviors. The motivational gain effects of the proposed architecture can also be observed on the executing behaviors during simulation.
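The idea that transition probabilities drive behavior sequences can be illustrated with a minimal Markov-chain sketch. The states, the matrix values, and the function below are invented for illustration; the paper's actual emotion-conditioned HMM is not reproduced here.

```python
import random

# Hypothetical behavioral states and a row-stochastic transition matrix,
# standing in for the paper's emotion-dependent behavior transitions.
BEHAVIORS = ["explore", "avoid", "rest"]
TRANSITIONS = {
    "explore": [0.6, 0.3, 0.1],  # P(next behavior | current = explore)
    "avoid":   [0.5, 0.4, 0.1],
    "rest":    [0.8, 0.1, 0.1],
}

def sample_behavior_sequence(start, steps, rng):
    """Draw a sequence of behaviors by walking the Markov chain."""
    seq = [start]
    for _ in range(steps):
        probs = TRANSITIONS[seq[-1]]
        seq.append(rng.choices(BEHAVIORS, weights=probs, k=1)[0])
    return seq

# Seeded RNG makes the sampled sequence reproducible.
seq = sample_behavior_sequence("explore", 5, random.Random(42))
```

In the architecture described above, the emotional state would select or reshape the transition matrix, biasing which behaviors tend to follow one another.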
Using robotics construction kits as metacognitive tools: a research in an Italian primary school.
La Paglia, Filippo; Caci, Barbara; La Barbera, Daniele; Cardaci, Maurizio
2010-01-01
The present paper is aimed at analyzing the process of building and programming robots as a metacognitive tool. Quantitative data and qualitative observations from a research performed in a sample of children attending an Italian primary school are described in this work. Results showed that robotics activities may be intended as a new metacognitive environment that allows children to monitor themselves and control their learning actions in an autonomous and self-centered way.
Sample Return Robot Centennial Challenge
2012-06-16
Visitors, some with their dogs, line up to have their photo taken inside a space suit exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
2016-01-01
satisfying journeys in my life. I would like to thank Ryan for his guidance through the truly exciting world of mobile robotics and robotic perception. Thank...Multi-session and Multi-robot SLAM . . . . . . . . . . . . . . . 15 1.3.3 Robust Techniques for SLAM Backends . . . . . . . . . . . . . . 18 1.4 A...sonar. xv CHAPTER 1 Introduction 1.1 The Importance of SLAM in Autonomous Robotics Autonomous mobile robots are becoming a promising aid in a wide
Sample Return Robot Centennial Challenge
2012-06-16
The bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" is seen wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
How to make an autonomous robot as a partner with humans: design approach versus emergent approach.
Fujita, M
2007-01-15
In this paper, we discuss what factors are important to realize an autonomous robot as a partner with humans. We believe that it is important to interact with people without boring them, using verbal and non-verbal communication channels. We have already developed autonomous robots such as AIBO and QRIO, whose behaviours are manually programmed and designed. We realized, however, that this design approach has limitations; therefore we propose a new approach, intelligence dynamics, where interacting in a real-world environment using embodiment is considered very important. There are pioneering works related to this approach from brain science, cognitive science, robotics and artificial intelligence. We assert that it is important to study the emergence of entire sets of autonomous behaviours and present our approach towards this goal.
A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring
Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio
2016-01-01
This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State-of-the-art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regard to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505
Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
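Wang's analytical many-body strategy is not reproduced here, but the flavor of combining homing with robot-obstacle collision avoidance can be sketched with a standard potential-field rule over spherical obstacles. The gains and the repulsion form below are generic textbook choices (Khatib-style), not taken from the paper.

```python
import math

def nav_force(pos, goal, obstacles, k_home=1.0, k_rep=0.05, d0=1.0):
    """Sum an attractive homing force with short-range repulsion from obstacles.

    Each obstacle is ((x, y), radius); repulsion acts only within clearance d0
    of the obstacle surface, in the spirit of modeling robots and obstacles as
    spherical particles with pairwise interactions.
    """
    fx = k_home * (goal[0] - pos[0])
    fy = k_home * (goal[1] - pos[1])
    for (ox, oy), r in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        dist = math.hypot(dx, dy)
        d = dist - r                              # clearance to obstacle surface
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / dist                 # push directly away from obstacle
            fy += mag * dy / dist
    return fx, fy

# An obstacle between robot and goal reduces the net pull along that line.
f_free = nav_force((0.0, 0.0), (2.0, 0.0), obstacles=[])
f_obst = nav_force((0.0, 0.0), (2.0, 0.0), obstacles=[((0.5, 0.0), 0.1)])
```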
Improving Grasp Skills Using Schema Structured Learning
NASA Technical Reports Server (NTRS)
Platt, Robert; Grupen, Roderic A.; Fagg, Andrew H.
2006-01-01
In the control-based approach to robotics, complex behavior is created by sequencing and combining control primitives. While it is desirable for the robot to autonomously learn the correct control sequence, searching through the large number of potential solutions can be time consuming. This paper constrains this search to variations of a generalized solution encoded in a framework known as an action schema. A new algorithm, SCHEMA STRUCTURED LEARNING, is proposed that repeatedly executes variations of the generalized solution in search of instantiations that satisfy action schema objectives. This approach is tested in a grasping task where Dexter, the UMass humanoid robot, learns which reaching and grasping controllers maximize the probability of grasp success.
Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan
2017-01-01
In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964
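A minimal version of the EKF fusion step described above can be sketched as follows. The unicycle motion model, the noise values, and the assumption that the visual sensor yields a direct pose measurement are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate pose (x, y, theta) with a unicycle model driven by wheel odometry."""
    th = x[2]
    x_pred = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])              # Jacobian of the motion model
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the pose with a direct pose measurement (H = I), e.g. from vision."""
    S = P + R                                     # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)                      # Kalman gain
    x_new = x + K @ (z - x)
    return x_new, (np.eye(3) - K) @ P

# Drive 1 m straight by odometry, then correct with a (noisy) visual pose.
x, P = np.zeros(3), 0.1 * np.eye(3)
x, P = ekf_predict(x, P, v=1.0, w=0.0, dt=1.0, Q=0.01 * np.eye(3))
x, P = ekf_update(x, P, z=np.array([1.1, 0.0, 0.0]), R=0.1 * np.eye(3))
```

The corrected estimate lands between the odometry prediction (1.0 m) and the visual measurement (1.1 m), weighted by the two covariances.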
Research on Self-Reconfigurable Modular Robot System
NASA Astrophysics Data System (ADS)
Kamimura, Akiya; Murata, Satoshi; Yoshida, Eiichi; Kurokawa, Haruhisa; Tomita, Kohji; Kokaji, Shigeru
The growing complexity of artificial systems raises reliability and flexibility issues in large-system design. Robots are no exception, and many attempts have been made to realize reliable and flexible robot systems. Distributed modular composition of a robot is one of the most effective approaches to attaining such abilities, and it has the potential to adapt to its surroundings by changing its configuration autonomously according to information about the surroundings. In this paper, we propose a novel three-dimensional self-reconfigurable robotic module. Each module has a very simple structure consisting of two semi-cylindrical parts connected by a link. The modular system is capable not only of building static structures but also of generating dynamic robotic motion. We present details of the mechanical/electrical design of the developed module and its control system architecture. Experiments using ten modules with centralized control demonstrate robotic configuration change, crawling locomotion and three types of quadruped locomotion.
Sample Return Robot Centennial Challenge
2012-06-15
SpacePRIDE Team members Chris Williamson, right, and Rob Moore, second from right, answer questions from 8th grade Sullivan Middle School (Mass.) students about their robot on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. SpacePRIDE's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Members of team Mountaineers pose with officials from the 2014 NASA Centennial Challenges Sample Return Robot Challenge on Saturday, June 14, 2014 at Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Mountaineer was the only team to complete the level one challenge this year. Team Mountaineer members, from left (in blue shirts), are: Ryan Watson, Marvin Cheng, Scott Harper, Jarred Strader, Lucas Behrens, Yu Gu, Tanmay Mandal, Alexander Hypes, and Nick Ohi. Challenge judges and competition staff (in white and green polo shirts), from left, are: Sam Ortega, NASA Centennial Challenge program manager; Ken Stafford, challenge technical advisor, WPI; and Colleen Shaver, challenge event manager, WPI. During the competition, teams were required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge was to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Developing a Telescope Simulator Towards a Global Autonomous Robotic Telescope Network
NASA Astrophysics Data System (ADS)
Giakoumidis, N.; Ioannou, Z.; Dong, H.; Mavridis, N.
2013-05-01
A robotic telescope network is a system that integrates a number of telescopes to observe a variety of astronomical targets without being operated by a human. Such a system autonomously selects and observes targets, dynamically allocating telescope resources depending on the observation requests, the specifications of the telescopes, target visibility, meteorological conditions, daylight, location restrictions and availability, and many other factors. In this paper, we introduce a telescope simulator, which can drive a telescope to a desired position in order to observe a specific object. The system includes a Client Module, a Server Module, and a Dynamic Scheduler module. We use and integrate a number of open-source software packages to simulate the movement of a robotic telescope, the telescope characteristics, the observational data and the weather conditions in order to test and optimize our system.
Autonomous Robotic Weapons: US Army Innovation for Ground Combat in the Twenty-First Century
2015-05-21
2013, accessed March 29, 2015, http://www.bbc.com/news/magazine-21576376?print=true. 113 Steven Kotler, “Say Hello to Comrade Terminator: Russia’s... hello -to-comrade-terminator-russias-army-of- killer-robots/. 114 David Hambling, “Russia Wants Autonomous Fighting Robots, and Lots of Them: Putin’s...how-humans-respond-to- robots-knight/HumanRobot-PartnershipsR2.pdf?la=en. Kotler, Steven. “Say Hello to Comrade Terminator: Russia’s Army of
Sample Return Robot Centennial Challenge
2012-06-16
A visitor to the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event helps demonstrate how a NASA rover design enables the rover to climb over obstacles higher than its own body on Saturday, June 16, 2012 at WPI in Worcester, Mass. The event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Towards Autonomous Inspection of Space Systems Using Mobile Robotic Sensor Platforms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Saad, Ashraf; Litt, Jonathan S.
2007-01-01
The space transportation systems required to support NASA's Exploration Initiative will demand a high degree of reliability to ensure mission success. This reliability can be realized through autonomous fault/damage detection and repair capabilities. It is crucial that such capabilities are incorporated into these systems since it will be impractical to rely upon Extra-Vehicular Activity (EVA), visual inspection or tele-operation due to the costly, labor-intensive and time-consuming nature of these methods. One approach to achieving this capability is through the use of an autonomous inspection system comprised of miniature mobile sensor platforms that will cooperatively perform high confidence inspection of space vehicles and habitats. This paper will discuss the efforts to develop a small scale demonstration test-bed to investigate the feasibility of using autonomous mobile sensor platforms to perform inspection operations. Progress will be discussed in technology areas including: the hardware implementation and demonstration of robotic sensor platforms, the implementation of a hardware test-bed facility, and the investigation of collaborative control algorithms.
Simultaneous Planning and Control for Autonomous Ground Vehicles
2009-02-01
these applications is called A* (A-star), and it was originally developed by Hart, Nilsson, and Raphael [HAR68]. Their research presented the formal...sequence, rather than a dynamic programming approach. A* search is a technique originally developed for Artificial Intelligence applications ... developed at the Center for Intelligent Machines and Robotics, serves as a platform for the implementation and testing discussed. autonomous
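The A* algorithm named above can be sketched on an occupancy grid. This is a generic textbook implementation of the Hart-Nilsson-Raphael method, not the report's planner; the grid and cost model are illustrative assumptions.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = blocked).

    Uses Manhattan distance as the admissible heuristic. Returns a
    start-to-goal path as a list of (row, col) cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]            # entries: (f = g + h, g, cell)
    came_from, g = {}, {start: 0}
    while open_heap:
        _, gc, cur = heapq.heappop(open_heap)
        if cur == goal:                           # reconstruct path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = gc + 1                       # unit cost per move
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

# A wall forces the planner around the right side of the grid.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```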
Insect-Based Vision for Autonomous Vehicles: A Feasibility Study
NASA Technical Reports Server (NTRS)
Srinivasan, Mandyam V.
1999-01-01
The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.
NASA Technical Reports Server (NTRS)
Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.
2003-01-01
Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high-resolution still-image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.
Immune systems are not just for making you feel better: they are for controlling autonomous robots
NASA Astrophysics Data System (ADS)
Rosenblum, Mark
2005-05-01
The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.
2015 Marine Corps Security Environment Forecast: Futures 2030-2045
2015-01-01
The technologies that make the iPhone “smart” were publically funded—the Internet, wireless networks, the global positioning system, microelectronics...Energy Revolution (63 percent); Internet of Things (ubiquitous sensors embedded in interconnected computing devices) (50 percent); “Sci-Fi...Neuroscience & artificial intelligence - Sensors /control systems -Power & energy -Human-robot interaction Robots/autonomous systems will become part of the
Controlling the autonomy of a reconnaissance robot
NASA Astrophysics Data System (ADS)
Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David
2004-09-01
In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded, and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions, such as movement detection, and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing these ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, and waypoint and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
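Dynamic swapping between manual, safeguarded, and behavior-based modes can be sketched as a small dispatcher. This is a minimal illustration with invented thresholds and method names, not the HARPIC architecture's actual interface:

```python
def safeguarded(cmd, min_range, stop_dist=0.5):
    """Pass the operator's speed command through, but stop short of obstacles."""
    return 0.0 if min_range < stop_dist else cmd

class ModeController:
    """Dispatch between manual, safeguarded, and behavior-based control."""
    def __init__(self):
        self.mode = "manual"

    def command(self, operator_speed, min_obstacle_range):
        if self.mode == "manual":
            return operator_speed          # operator has full authority
        if self.mode == "safeguarded":
            return safeguarded(operator_speed, min_obstacle_range)
        # behavior-based: ignore the operator; creep forward, stop near obstacles
        return 0.3 if min_obstacle_range > 1.0 else 0.0
```

Because all modes share one command interface, the operator (or a supervisor process) can switch `mode` at any time without interrupting map building or localization running alongside.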
A power-autonomous self-rolling wheel using ionic and capacitive actuators
NASA Astrophysics Data System (ADS)
Must, Indrek; Kaasik, Toomas; Baranova, Inna; Johanson, Urmas; Punning, Andres; Aabloo, Alvo
2015-04-01
Ionic electroactive polymer (IEAP) laminates are often considered a promising actuator technology for mobile robotic appliances; however, only a few real proof-of-concept-stage robots have been built previously, the majority of which depend on an off-board power supply. In this work, a power-autonomous robot, propelled by four IEAP actuators with carbonaceous electrodes, is constructed. The robot consists of a light outer section in the form of a hollow cylinder and a heavy inner section, referred to as the rim and the hub, respectively. The hub is connected to the rim using IEAP actuators, which form `spokes' of variable length. The effective length of the spokes is changed via charging and discharging of the capacitive IEAP actuators, and a change in the effective lengths of the spokes results in a rolling motion of the robot. The constructed IEAP robot takes advantage of the distinctive properties of the IEAP actuators: they transform the geometry of the whole robot while remaining soft and compliant. The low-voltage IEAP actuators in the robot are powered directly from an embedded single-cell lithium-ion battery, with no voltage regulation required; instead, only the input current is regulated. The charging of the actuators is switched according to the robot's instantaneous position using on-board control electronics. The constructed robot is able to roll for an extended period on a smooth surface. The locomotion of the IEAP robot is analyzed using video recognition.
Human-Robot Teaming: From Space Robotics to Self-Driving Cars
NASA Technical Reports Server (NTRS)
Fong, Terry
2017-01-01
In this talk, I describe how NASA Ames has been developing and testing robots for space exploration. In our research, we have focused on studying how human-robot teams can increase the performance, reduce the cost, and increase the success of space missions. A key tenet of our work is that humans and robots should support one another in order to compensate for limitations of manual control and autonomy. This principle has broad applicability beyond space exploration. Thus, I will conclude by discussing how we have worked with Nissan to apply our methods to self-driving cars, enabling humans to support autonomous vehicles operating in unpredictable and difficult situations.
Fernandez-Leon, Jose A; Acosta, Gerardo G; Rozenfeld, Alejandro
2014-10-01
Researchers in diverse fields, such as neuroscience, systems biology, and autonomous robotics, have been intrigued by the origin and mechanisms of biological robustness. Darwinian evolution suggests that adaptive mechanisms, as a way of reaching robustness, could evolve by natural selection acting successively on numerous heritable variations. However, is this understanding enough to explain how biological systems remain robust during their interactions with the surroundings? Here, we describe selected studies of bio-inspired systems that show behavioral robustness. From neurorobotics, cognitive, self-organizing, and artificial immune system perspectives, our discussion focuses mainly on how robust behaviors evolve or emerge in systems that have the capacity to interact with their surroundings. These descriptions are twofold. Initially, we introduce examples from autonomous robotics to illustrate how the process of designing robust control can be idealized in complex environments for autonomous navigation by terrain and underwater vehicles. We also include descriptions of bio-inspired self-organizing systems. Then, we introduce other studies that contextualize experimental evolution with simulated organisms and physical robots to exemplify how the process of natural selection can lead to the evolution of robustness by means of adaptive behaviors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Knowledge-based control for robot self-localization
NASA Technical Reports Server (NTRS)
Bennett, Bonnie Kathleen Holte
1993-01-01
Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.
Model learning for robot control: a survey.
Nguyen-Tuong, Duy; Peters, Jan
2011-11-01
Models are among the most essential tools in robotics: for example, kinematics and dynamics models of the robot's own body and of controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control at both the kinematic and dynamic levels. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we study the different possible model learning architectures for robotics. Second, we discuss what kinds of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions for real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.
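As a toy instance of model learning on the dynamics level, one can fit the parameters of a one-joint inverse dynamics model, tau = m*qdd + c*qd, from recorded motion data by least squares. This sketch uses invented noise-free data and solves the 2x2 normal equations directly:

```python
import random

def fit_inverse_dynamics(samples):
    """Least-squares fit of tau = m*qdd + c*qd via the 2x2 normal equations."""
    saa = sav = svv = sat = svt = 0.0
    for qdd, qd, tau in samples:
        saa += qdd * qdd
        sav += qdd * qd
        svv += qd * qd
        sat += qdd * tau
        svt += qd * tau
    det = saa * svv - sav * sav
    m = (sat * svv - svt * sav) / det   # inertia-like parameter
    c = (svt * saa - sat * sav) / det   # damping-like parameter
    return m, c

# Synthetic motion data from a known model (m = 2.0, c = 0.5), no noise.
rng = random.Random(0)
data = [(a, v, 2.0 * a + 0.5 * v)
        for a, v in ((rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200))]
m, c = fit_inverse_dynamics(data)
```

With noisy data from a real robot, the same formulation recovers the parameters only approximately, and the survey's richer nonparametric methods (e.g., Gaussian process regression) become relevant.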
Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve
NASA Technical Reports Server (NTRS)
Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.
2013-01-01
This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.
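A gesture-to-command mapping with variable autonomy can be sketched as a lookup table plus a continuous channel. The gesture names, commands, and autonomy levels below are hypothetical, not the BioSleeve's actual command set:

```python
# Hypothetical gesture-to-command table; not the actual BioSleeve command set.
GESTURE_COMMANDS = {
    "fist":      ("stop", "full"),              # robot halts autonomously
    "point":     ("go_to_goal", "supervisory"), # point-to-goal supervisory command
    "open_hand": ("teleop_joystick", "guarded"),# virtual joystick, guarded teleop
}

def decode(gesture, forearm_pitch_deg):
    """Map a discrete gesture plus continuous forearm orientation to a command."""
    cmd, autonomy = GESTURE_COMMANDS.get(gesture, ("idle", "none"))
    if cmd == "teleop_joystick":
        # the continuous forearm angle becomes a normalized velocity setpoint
        return cmd, autonomy, round(forearm_pitch_deg / 90.0, 2)
    return cmd, autonomy, None
```

The split between a discrete table and a continuous channel mirrors the paper's design: gestures select the level of autonomy, while the IMU's orientation estimate supplies the continuous input when guarded teleoperation is active.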
A power autonomous monopedal robot
NASA Astrophysics Data System (ADS)
Krupp, Benjamin T.; Pratt, Jerry E.
2006-05-01
We present the design and initial results of a power-autonomous planar monopedal robot. The robot is a gasoline powered, two degree of freedom robot that runs in a circle, constrained by a boom. The robot uses hydraulic Series Elastic Actuators, force-controllable actuators which provide high force fidelity, moderate bandwidth, and low impedance. The actuators are mounted in the body of the robot, with cable drives transmitting power to the hip and knee joints of the leg. A two-stroke, gasoline engine drives a constant displacement pump which pressurizes an accumulator. Absolute position and spring deflection of each of the Series Elastic Actuators are measured using linear encoders. The spring deflection is translated into force output and compared to desired force in a closed loop force-control algorithm implemented in software. The output signal of each force controller drives high performance servo valves which control flow to each of the pistons of the actuators. In designing the robot, we used a simulation-based iterative design approach. Preliminary estimates of the robot's physical parameters were based on past experience and used to create a physically realistic simulation model of the robot. Next, a control algorithm was implemented in simulation to produce planar hopping. Using the joint power requirements and range of motions from simulation, we worked backward specifying pulley diameter, piston diameter and stroke, hydraulic pressure and flow, servo valve flow and bandwidth, gear pump flow, and engine power requirements. Components that meet or exceed these specifications were chosen and integrated into the robot design. Using CAD software, we calculated the physical parameters of the robot design, replaced the original estimates with the CAD estimates, and produced new joint power requirements. We iterated on this process, resulting in a design which was prototyped and tested. 
The Monopod currently runs at approximately 1.2 m/s while carrying the weight of all the power-generating components, but powered from an off-board pump. On a test stand, the on-board power system generates enough pressure and flow to meet the requirements of these runs, and we are currently integrating it into the robot. When operated from an off-board system without carrying the weight of the power-generating components, the robot runs at approximately 2.25 m/s. Ongoing work is focused on integrating the power system into the robot, improving the control algorithm, and investigating methods for improving efficiency.
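The force-control loop described above, in which measured spring deflection is translated into force and compared with a desired force, can be sketched as follows. The stiffness, gains, and timestep are illustrative values, not the robot's actual parameters:

```python
def sea_force(spring_deflection_m, stiffness_n_per_m=60000.0):
    """Series Elastic Actuator: spring deflection maps directly to output force."""
    return stiffness_n_per_m * spring_deflection_m

class ForceController:
    """PI loop on force error; the output drives the servo valve (toy gains)."""
    def __init__(self, kp=0.002, ki=0.0005, dt=0.001):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, desired_force, deflection):
        error = desired_force - sea_force(deflection)   # force error from encoders
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral  # valve command
```

Because force is inferred from the linear-encoder deflection measurement rather than a load cell, the spring itself acts as the force sensor, which is what gives Series Elastic Actuators their force fidelity and low impedance.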
NASA Technical Reports Server (NTRS)
Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.
2012-01-01
A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon", is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation, and control functions, integrates the dynamics, and issues motion commands to the Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.
Wei, Kun; Ren, Bingyin
2018-02-13
In a future intelligent factory, a robotic manipulator must work efficiently and safely in a human-robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue to be resolved in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly exploring Random Tree (RRT) algorithm, based on random sampling, has been widely applied to dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, good expansion, and fast exploration speed compared with other planning methods. However, the existing RRT algorithm has limitations in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle-avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method uses target-directed node extension, which dramatically increases the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum curvature constraint is presented to generate a smooth, curvature-continuous executable path for the robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation and a Robot Operating System (ROS) dynamic simulation environment, as well as a real autonomous obstacle-avoidance experiment in a dynamic unstructured environment. The proposed method not only has practical engineering significance for robotic manipulator obstacle avoidance in an intelligent factory, but also provides a theoretical reference for the path planning of other types of robots.
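For reference, the baseline RRT that S-RRT improves upon fits in a few lines; goal-biased sampling is one simple form of the target-directed extension idea. This is a generic 2D sketch with assumed workspace bounds and parameters, not the paper's S-RRT:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.2, max_iters=5000, seed=1):
    """Minimal 2D RRT: goal-biased random sampling with fixed-step extension."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        # with probability goal_bias, steer toward the goal instead of a random point
        sample = goal if rng.random() < goal_bias else (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):                 # collision check on the new node
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:      # goal reached: backtrack the path
            path, j = [goal], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

S-RRT's contributions sit on top of this skeleton: stronger directional extension in place of the simple goal bias, and a post-processing pass that smooths the returned polyline under a maximum-curvature constraint.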
2011-06-01
effective way-point navigation algorithm that interfaced with a Java based graphical user interface (GUI), written by Uzun, for a robot named Bender [2...the angular acceleration, θ̈, or angular rate, θ̇. When considering a joint driven by an electric motor, the inertia and friction can be divided into...interactive simulations that can receive input from user controls, scripts, and other applications, such as Excel and MATLAB. One drawback is that the
Development of Live-working Robot for Power Transmission Lines
NASA Astrophysics Data System (ADS)
Yan, Yu; Liu, Xiaqing; Ren, Chengxian; Li, Jinliang; Li, Hui
2017-07-01
Dream-I, the first reconfigurable live-working robot for power transmission lines successfully developed in China, has the functions of autonomous line walking and accurate positioning. This paper first describes the operation tasks and objects of the robot; then presents the design of a general platform, an insulator-replacement end, and a drainage-plate bolt-fastening end, together with the robot's control system, and performs simulation analysis of the robot's operation plan; and finally reports electric-field withstand-voltage tests in a high-voltage hall as well as online tests and trials on actual lines. Experimental results show that, by replacing the manipulator ends, the robot can fulfill the tasks of live replacement of suspension insulators and live drainage-plate bolt fastening.
Vision-based semi-autonomous outdoor robot system to reduce soldier workload
NASA Astrophysics Data System (ADS)
Richardson, Al; Rodgers, Michael H.
2001-09-01
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
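The teach-and-repeat pattern described here can be sketched as waypoint recording with a minimum spacing, followed by playback of the stored route. The spacing value and method names are assumptions for illustration:

```python
import math

class TeachAndRepeat:
    """Teach phase: record waypoints while following the leader.
    Repeat phase: hand the stored route back to the local planner."""
    def __init__(self, spacing=1.0):
        self.spacing = spacing   # minimum distance between stored waypoints (m)
        self.route = []

    def observe(self, pose):
        """Append a waypoint whenever the robot has moved far enough."""
        if not self.route or math.dist(pose, self.route[-1]) >= self.spacing:
            self.route.append(pose)

    def replay(self):
        """Return the taught route for autonomous retracing."""
        return list(self.route)
```

During replay, the robot still needs its short-range 3D perception to negotiate local obstacles between waypoints; the taught route only removes the global navigation burden.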
Metrics of a Paradigm for Intelligent Control
NASA Technical Reports Server (NTRS)
Hexmoor, Henry
1999-01-01
We present metrics for quantifying organizational structures of complex control systems intended for controlling long-lived robotic or other autonomous applications commonly found in space applications. Such advanced control systems are often called integration platforms or agent architectures. Reported metrics span concerns about time, resources, software engineering, and complexities in the world.
Man-Robot Symbiosis: A Framework For Cooperative Intelligence And Control
NASA Astrophysics Data System (ADS)
Parker, Lynne E.; Pin, Francois G.
1988-10-01
The man-robot symbiosis concept has the fundamental objective of bridging the gap between fully human-controlled and fully autonomous systems to achieve true man-robot cooperative control and intelligence. Such a system would allow improved speed, accuracy, and efficiency of task execution, while retaining the man in the loop for innovative reasoning and decision-making. The symbiont would have capabilities for supervised and unsupervised learning, allowing an increase of expertise in a wide task domain. This paper describes a robotic system architecture facilitating the symbiotic integration of teleoperative and automated modes of task execution. The architecture reflects a unique blend of many disciplines of artificial intelligence into a working system, including job or mission planning, dynamic task allocation, man-robot communication, automated monitoring, and machine learning. These disciplines are embodied in five major components of the symbiotic framework: the Job Planner, the Dynamic Task Allocator, the Presenter/Interpreter, the Automated Monitor, and the Learning System.
Material handling robot system for flow-through storage applications
NASA Astrophysics Data System (ADS)
Dill, James F.; Candiloro, Brian; Downer, James; Wiesman, Richard; Fallin, Larry; Smith, Ron
1999-01-01
This paper describes the design, development and planned implementation of a system of mobile robots for use in flow through storage applications. The robots are being designed with on-board embedded controls so that they can perform their tasks as semi-autonomous workers distributed within a centrally controlled network. On the storage input side, boxes will be identified by bar-codes and placed into preassigned flow through bins. On the shipping side, orders will be forwarded to the robots from a central order processing station and boxes will be picked from designated storage bins following proper sequencing to permit direct loading into trucks for shipping. Because of the need to maintain high system availability, a distributed control strategy has been selected. When completed, the system will permit robots to be dynamically reassigned responsibilities if an individual unit fails. On-board health diagnostics and condition monitoring will be used to maintain high reliability of the units.
Multiple-Agent Air/Ground Autonomous Exploration Systems
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.
2007-01-01
Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
Spectrally Queued Feature Selection for Robotic Visual Odometry
2010-11-23
in these systems has yet to be defined. 1. INTRODUCTION 1.1 Uses of Autonomous Vehicles Autonomous vehicles have a wide range of possible...applications. In military situations, autonomous vehicles are valued for their ability to keep Soldiers far away from danger. A robot can inspect and disarm...just a glimpse of what engineers are hoping for in the future. 1.2 Biological Influence Autonomous vehicles are becoming more of a possibility in
Advances in Robotic, Human, and Autonomous Systems for Missions of Space Exploration
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Briggs, Geoffrey A.; Glass, Brian J.; Pedersen, Liam; Kortenkamp, David M.; Wettergreen, David S.; Nourbakhsh, I.; Clancy, Daniel J.; Zornetzer, Steven (Technical Monitor)
2002-01-01
Space exploration missions are evolving toward more complex architectures involving more capable robotic systems, new levels of human and robotic interaction, and increasingly autonomous systems. How this evolving mix of advanced capabilities will be utilized in the design of new missions is a subject of much current interest. Cost and risk constraints also play a key role, resulting in a complex interplay of a broad range of factors in mission development and planning. This paper will discuss how human, robotic, and autonomous systems could be used in advanced space exploration missions. In particular, a recently completed survey of the state of the art and the potential future of robotic systems will be described, as well as new experiments utilizing human and robotic approaches. Finally, there will be a discussion of how best to utilize these various approaches to meet space exploration goals.
Development of autonomous grasping and navigating robot
NASA Astrophysics Data System (ADS)
Kudoh, Hiroyuki; Fujimoto, Keisuke; Nakayama, Yasuichi
2015-01-01
The ability to find and grasp target items in an unknown environment is important for working robots. We developed an autonomous navigating and grasping robot. The operations are locating a requested item, moving to where the item is placed, finding the item on a shelf or table, and picking the item up from the shelf or table. To achieve these operations, we designed the robot with three functions: an autonomous navigation function that generates a map and a route in an unknown environment, an item-position recognition function, and a grasping function. We tested this robot in an unknown environment, where it achieved a series of operations: moving to a destination, recognizing the positions of items on a shelf, picking up an item, placing it on a cart with its hand, and returning to the starting location. The results of this experiment show the potential of robots to reduce human workloads.
Apparatus for multiprocessor-based control of a multiagent robot
NASA Technical Reports Server (NTRS)
Peters, II, Richard Alan (Inventor)
2009-01-01
An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a DBAM that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.
Bayón, C; Lerma, S; Ramírez, O; Serrano, J I; Del Castillo, M D; Raya, R; Belda-Lois, J M; Martínez, I; Rocon, E
2016-11-14
Cerebral palsy (CP) is a disorder of posture and movement due to a defect in the immature brain. The use of robotic devices as an alternative treatment to improve gait function in patients with CP has increased. Nevertheless, current gait trainers focus on controlling complete joint trajectories, neglecting postural control and the adaptation of the therapy to a specific patient. This paper presents the applicability of a new robotic platform called CPWalker in children with spastic diplegia. CPWalker consists of a smart walker with body-weight and autonomous-locomotion support and an exoskeleton for joint motion support. Likewise, CPWalker enables strategies to improve postural control during walking. The integrated robotic platform provides means for testing novel gait rehabilitation therapies in subjects with CP and similar motor disorders. Patient-tailored therapies were programmed in the device for its evaluation in three children with spastic diplegia over 5 weeks. After ten sessions of personalized training with CPWalker, the children improved their mean velocity (51.94 ± 41.97%), cadence (29.19 ± 33.36%), and step length (26.49 ± 19.58%) in each leg. Post-3D gait assessments provided kinematic outcomes closer to normal values than pre-3D assessments. The results show the potential of the novel robotic platform to serve as a rehabilitation tool. The autonomous locomotion and impedance control enhanced the children's participation during therapies. Moreover, participants' postural control was substantially improved, which indicates the usefulness of the approach based on promoting the patient's trunk control while the locomotion therapy is executed. Although the results are promising, further studies with larger sample sizes are required.
Supervising Remote Humanoids Across Intermediate Time Delay
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Rabe, Kenneth; Allan, Mark
2006-01-01
The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will be required to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading the plan for a day seems excessive. An approach for controlling humanoids under intermediate time delay is presented. This approach uses software running within a ground control cockpit to predict an immersed robot supervisor's motions which the remote humanoid autonomously executes. Initial results are presented.
Autonomy in robots and other agents.
Smithers, T
1997-06-01
The word "autonomous" has become widely used in artificial intelligence, robotics, and, more recently, artificial life, and is typically used to qualify types of systems, agents, or robots: we see terms like "autonomous systems," "autonomous agents," and "autonomous robots." Its use in these fields is, however, both weak, with no distinctions being made that are not better and more precisely made with other existing terms, and varied, with no single underlying concept being involved. This ill-disciplined usage contrasts strongly with the use of the same term in other fields such as biology, philosophy, ethics, law, and human rights, for example. In all these quite different areas the concept of autonomy is essentially the same, though the language used and the aspects and issues of concern, of course, differ. In all these cases the underlying notion is one of self-law making and the closely related concept of self-identity. In this paper I argue that the loose and varied use of the term autonomous in artificial intelligence, robotics, and artificial life has effectively robbed these fields of an important concept: a concept essentially the same as we find in biology, philosophy, ethics, and law, and one that is needed to distinguish a particular kind of agent or robot from those developed and built so far. I suggest that robots and other agents will have to be autonomous, i.e., self-law making, not just self-regulating, if they are to deal effectively with the kinds of environments in which we live and work: environments which have significant large-scale spatial and temporal invariant structure, but which also have large amounts of local spatial and temporal dynamic variation and unpredictability, and which lead to the frequent occurrence of previously unexperienced situations for the agents that interact with them.
NASA Astrophysics Data System (ADS)
Belyakov, Vladimir; Makarov, Vladimir; Zezyulin, Denis; Kurkin, Andrey; Pelinovsky, Efim
2015-04-01
Hazardous phenomena in the coastal zone lead to topographic changes that are difficult to inspect by traditional methods, which is why autonomous robots are used for the collection of nearshore topographic and hydrodynamic measurements. The robot RTS-Hanna is well known (Wubbold, F., Hentschel, M., Vousdoukas, M., and Wagner, B. Application of an autonomous robot for the collection of nearshore topographic and hydrodynamic measurements. Coastal Engineering Proceedings, 2012, vol. 33, Paper 53). We describe here several constructions of mobile systems developed at the Laboratory "Transported Machines and Transported Complexes", Nizhny Novgorod State Technical University. They can be used in field surveys and in the monitoring of nearshore wave regimes.
Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits
NASA Astrophysics Data System (ADS)
Snider, Ross K.; Arathorn, David W.
2006-05-01
A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles, avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics, and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling and intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation through an interaction between terrain discovery, in which the multi-articulated linear robot builds a local terrain map, and exploitation of that growing map to solve the robot's propulsion problem.
The effect of collision avoidance for autonomous robot team formation
NASA Astrophysics Data System (ADS)
Seidman, Mark H.; Yang, Shanchieh J.
2007-04-01
As technology and research advance toward the era of cooperative robots, many autonomous robot team algorithms have emerged. Shape formation is a common and critical task in many cooperative robot applications. While theoretical studies of robot team formation have shown success, it is unclear whether such algorithms will perform well in a real-world environment. This work examines the effect of collision avoidance schemes on an ideal circle formation algorithm, which behaves similarly whether or not robot-to-robot communications are in place. Our findings reveal that robots with basic collision avoidance capabilities are still able to form into a circle under most conditions. Moreover, the robot sizes, sensing ranges, and other critical physical parameters are examined to determine their effects on the algorithm's performance.
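A toy sketch can make the setting concrete. This is not the paper's algorithm: here point robots are each assigned an evenly spaced slot on a circle and move toward it, with a short-range inverse-square repulsion standing in for a basic collision-avoidance capability with a limited sensing range. All gains, ranges, and robot counts are assumptions for illustration.

```python
import numpy as np

# Toy circle-formation sketch (assumed parameters, not the paper's algorithm):
# robots are attracted to assigned slots on a circle and repel nearby robots.
def form_circle(n=8, radius=5.0, steps=3000, dt=0.01, sense=1.0):
    rng = np.random.default_rng(1)
    pos = rng.uniform(-2.0, 2.0, (n, 2))            # scattered initial positions
    ang = 2 * np.pi * np.arange(n) / n
    slots = radius * np.c_[np.cos(ang), np.sin(ang)]  # target formation slots
    for _ in range(steps):
        v = slots - pos                              # attraction toward own slot
        for i in range(n):
            d = pos[i] - pos                         # vectors from the other robots
            r = np.hypot(d[:, 0], d[:, 1])
            near = (r > 1e-9) & (r < sense)          # only robots inside sensing range
            if near.any():
                v[i] += (d[near] / r[near, None] ** 2).sum(axis=0)  # repulsion
        pos += dt * v
    return pos, radius

pos, radius = form_circle()
```

Because the equilibrium spacing between adjacent slots exceeds the sensing range, the repulsion term is inactive once the circle is formed, so the robots settle exactly on the circle despite interfering with one another during the transient.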
Harnessing bistability for directional propulsion of soft, untethered robots.
Chen, Tian; Bilal, Osama R; Shea, Kristina; Daraio, Chiara
2018-05-29
In most macroscale robotic systems, propulsion and controls are enabled through a physical tether or complex onboard electronics and batteries. A tether simplifies the design process but limits the range of motion of the robot, while onboard controls and power supplies are heavy and complicate the design process. Here, we present a simple design principle for an untethered, soft swimming robot with preprogrammed, directional propulsion without a battery or onboard electronics. Locomotion is achieved by using actuators that harness the large displacements of bistable elements triggered by surrounding temperature changes. Powered by shape memory polymer (SMP) muscles, the bistable elements in turn actuate the robot's fins. Our robots are fabricated using a commercially available 3D printer in a single print. As a proof of concept, we show the ability to program a vessel, which can autonomously deliver a cargo and navigate back to the deployment point.
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe a preliminary experiment with a system that they designed and constructed.
Experiments in autonomous robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, W.R.
1987-01-01
The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.
Sample Return Robot Centennial Challenge
2012-06-16
NASA Program Manager for Centennial Challenges Sam Ortega helps show a young visitor how to drive a rover as part of the interactive NASA Mars rover exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck stop to look at the bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" that is wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Perception system and functions for autonomous navigation in a natural environment
NASA Technical Reports Server (NTRS)
Chatila, Raja; Devy, Michel; Lacroix, Simon; Herrb, Matthieu
1994-01-01
This paper presents the approach, algorithms, and processes we developed for the perception system of a cross-country autonomous robot. After a presentation of the tele-programming context we favor for intervention robots, we introduce an adaptive navigation approach, well suited to the characteristics of complex natural environments. This approach led us to develop a heterogeneous perception system that manages several different terrain representations. The perception functionalities required during navigation are listed, along with the corresponding representations we consider. The main perception processes we developed are presented. They are integrated within an on-board control architecture we developed. First results of an ambitious experiment currently underway at LAAS are then presented.
Evolutionary online behaviour learning and adaptation in real robots.
Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne
2017-07-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
A task control architecture for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid; Mitchell, Tom
1990-01-01
An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
Autonomous Legged Hill and Stairwell Ascent
2011-11-01
environments with little burden to a human operator. Keywords: autonomous robot, hill climbing, stair climbing, sequential composition, hexapod, self-... X-RHex robot on a set of stairs with laser scanner, IMU, wireless repeater, and handle payloads, making them useful for both climbing hills and ... reconciliation into that more powerful (but restrictive) framework. 1) The Stair Climbing Behavior: RHex robots have been climbing single-flight stairs
Evolutionary Developmental Robotics: Improving Morphology and Control of Physical Robots.
Vujovic, Vuk; Rosendo, Andre; Brodbeck, Luzius; Iida, Fumiya
2017-01-01
Evolutionary algorithms have previously been applied to the design of morphology and control of robots. The design space for such tasks can be very complex, which can prevent evolution from efficiently discovering fit solutions. In this article we introduce an evolutionary-developmental (evo-devo) experiment with real-world robots. It allows robots to grow their leg size to simulate ontogenetic morphological changes, and this is the first time that such an experiment has been performed in the physical world. To test diverse robot morphologies, robot legs of variable shapes were generated during the evolutionary process and autonomously built using additive fabrication. We present two cases with evo-devo experiments and one with evolution, and we hypothesize that the addition of a developmental stage can be used within robotics to improve performance. Moreover, our results show that a nonlinear system-environment interaction exists, which explains the nontrivial locomotion patterns observed. In the future, robots will be present in our daily lives, and this work introduces for the first time physical robots that evolve and grow while interacting with the environment.
Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide
2018-05-01
In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism, with the ability to adapt both towards the user and the robotic morphology. The aim is for a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their way of preference, moving away from the case where the user has to read and understand an operation manual, or it has to learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole. Inherent properties of the architecture are presented and explained. At the same time, emergent properties are presented and investigated. As a whole, this paradigm of control is found to highlight the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human in the loop systems.
Concurrent planning and execution for a walking robot
NASA Astrophysics Data System (ADS)
Simmons, Reid
1990-07-01
The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance improved by more than 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.
Path optimisation of a mobile robot using an artificial neural network controller
NASA Astrophysics Data System (ADS)
Singh, M. K.; Parhi, D. R.
2011-01-01
This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right, and front obstacle distances with respect to the robot's position, and the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which involves cognitive tasks such as learning, adaptation, generalisation, and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results and show very good agreement. The training of the neural nets and the control performance analysis were done in a real experimental setup.
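The controller structure described in the abstract can be sketched as follows. This is a hedged illustration, not the authors' implementation: a four-layer feed-forward network mapping [left distance, right distance, front distance, target angle] to a steering angle, trained by plain back-propagation. The hidden-layer sizes, learning rate, and toy training target are all assumptions.

```python
import numpy as np

# Sketch of a 4-layer feed-forward steering controller trained by back-propagation.
# Inputs: left/right/front obstacle distances + target angle; output: steering angle.
rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    return rng.normal(0.0, (1.0 / n_in) ** 0.5, (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(4, 8)
W2, b2 = init_layer(8, 8)
W3, b3 = init_layer(8, 1)

def forward(x):
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h1, h2, h2 @ W3 + b3              # linear output: steering angle

def train_step(x, t, lr=0.05):
    h1, h2, y = forward(x)
    err = y - t                              # dL/dy for L = 0.5*(y - t)^2
    dW3, db3 = h2.T @ err, err.sum(0)
    dh2 = (err @ W3.T) * (1.0 - h2 ** 2)     # back-prop through tanh
    dW2, db2 = h1.T @ dh2, dh2.sum(0)
    dh1 = (dh2 @ W2.T) * (1.0 - h1 ** 2)
    dW1, db1 = x.T @ dh1, dh1.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2), (W3, dW3), (b3, db3)):
        p -= lr * g / len(x)                 # in-place, batch-averaged gradient step
    return float(0.5 * np.mean(err ** 2))

# Toy target: steer toward the target angle, biased away from the nearer side obstacle.
X = rng.uniform([0.1, 0.1, 0.1, -1.0], [2.0, 2.0, 2.0, 1.0], (256, 4))
T = (X[:, 3] + 0.3 * (X[:, 0] - X[:, 1]) / (X[:, 0] + X[:, 1]))[:, None]
losses = [train_step(X, T) for _ in range(2000)]
```

The tanh hidden layers and linear output match the common choice for regressing a continuous steering command; the training loop drives the mean squared error down over 2000 full-batch updates.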
Three-Dimensional Images For Robot Vision
NASA Astrophysics Data System (ADS)
McFarland, William D.
1983-12-01
Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economical niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements successfully, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with more emphasis on laser-radar-type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.
Training a Network of Electronic Neurons for Control of a Mobile Robot
NASA Astrophysics Data System (ADS)
Vromen, T. G. M.; Steur, E.; Nijmeijer, H.
An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated, and results are presented from an experiment in which a mobile robot drives completely autonomously in an unknown environment with obstacles.
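The core idea, adapting diffusive coupling weights until nonidentical units track a predefined response, can be illustrated with a much simpler stand-in than the electronic-neuron circuit. In this sketch, four mismatched first-order units are diffusively coupled to a reference response, and each weight grows with the squared tracking error. All dynamics and parameter values here are assumptions for the demonstration, not the paper's.

```python
import numpy as np

# Adaptive diffusive coupling sketch (assumed toy dynamics): mismatched units
# x_i' = -a_i*x_i + u + k_i*(x_r - x_i), with adaptation k_i' = gamma*(x_r - x_i)^2,
# synchronize to the reference response x_r despite their parameter spread.
def simulate(steps=20000, dt=1e-3, gamma=500.0):
    a = np.array([0.8, 1.0, 1.3, 1.7])    # nonidentical unit parameters
    x = np.zeros_like(a)                  # unit states
    k = np.zeros_like(a)                  # adaptive diffusion weights
    xr = 0.0                              # reference ("predefined") response, a_ref = 1
    early, late = [], []
    for n in range(steps):
        u = np.sin(2 * np.pi * 0.2 * n * dt)   # common drive signal
        xr += dt * (-xr + u)
        e = xr - x                             # per-unit tracking error
        x += dt * (-a * x + u + k * e)         # diffusive coupling to the reference
        k += dt * gamma * e ** 2               # adaptation: weight grows with error
        if 1000 <= n < 5000:
            early.append(np.ptp(x))            # spread across units, early window
        elif n >= steps - 4000:
            late.append(np.ptp(x))             # spread across units, after adaptation
    return float(np.mean(early)), float(np.mean(late))

early_spread, late_spread = simulate()
```

Early in the run the units respond with visibly different amplitudes and phases; once the weights have grown, all units hug the reference and the spread across the cluster shrinks, mirroring the synchronization property the training procedure enforces.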
Bio-inspired Computing for Robots
NASA Technical Reports Server (NTRS)
Laufenberg, Larry
2003-01-01
Living creatures may provide algorithms to enable active sensing/control systems in robots. Active sensing could enable planetary rovers to feel their way in unknown environments. The surface of Jupiter's moon Europa consists of fractured ice over a liquid sea that may contain microbes similar to those on Earth. To explore such extreme environments, NASA needs robots that autonomously survive, navigate, and gather scientific data. They will be too far away for guidance from Earth. They must sense their environment and control their own movements to avoid obstacles or investigate a science opportunity. To meet this challenge, CICT's Information Technology Strategic Research (ITSR) Project is funding neurobiologists at NASA's Jet Propulsion Laboratory (JPL) and selected universities to search for biologically inspired algorithms that enable robust active sensing and control for exploratory robots. Sources for these algorithms are living creatures, including rats and electric fish.
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
A prototype home robot with an ambient facial interface to improve drug compliance.
Takacs, Barnabas; Hanak, David
2008-01-01
We have developed a prototype home robot to improve drug compliance. The robot is a small mobile device, capable of autonomous behaviour, as well as remotely controlled operation via a wireless datalink. The robot is capable of face detection and also has a display screen to provide facial feedback to help motivate patients and thus increase their level of compliance. An RFID reader can identify tags attached to different objects, such as bottles, for fluid intake monitoring. A tablet dispenser allows drug compliance monitoring. Despite some limitations, experience with the prototype suggests that simple and low-cost robots may soon become feasible for care of people living alone or in isolation.
Soft Dielectric Elastomer Oscillators Driving Bioinspired Robots.
Henke, E-F Markus; Schlatter, Samuel; Anderson, Iain A
2017-12-01
Entirely soft robots with animal-like behavior and integrated artificial nervous systems will open up totally new perspectives and applications. To produce them, we must integrate control and actuation in the same soft structure. Soft actuators (e.g., pneumatic and hydraulic) exist but electronics are hard and stiff and remotely located. We present novel soft, electronics-free dielectric elastomer oscillators, which are able to drive bioinspired robots. As a demonstrator, we present a robot that mimics the crawling motion of the caterpillar, with an integrated artificial nervous system, soft actuators and without any conventional stiff electronic parts. Supplied with an external DC voltage, the robot autonomously generates all signals that are necessary to drive its dielectric elastomer actuators, and it translates an in-plane electromechanical oscillation into a crawling locomotion movement. Therefore, all functional and supporting parts are made of polymer materials and carbon. Besides the basic design of this first electronic-free, biomimetic robot, we present prospects to control the general behavior of such robots. The absence of conventional stiff electronics and the exclusive use of polymeric materials will provide a large step toward real animal-like robots, compliant human machine interfaces, and a new class of distributed, neuron-like internal control for robotic systems.
Target Trailing With Safe Navigation for Maritime Autonomous Surface Vehicles
NASA Technical Reports Server (NTRS)
Wolf, Michael; Kuwata, Yoshiaki; Zarzhitsky, Dimitri V.
2013-01-01
This software implements a motion-planning module for a maritime autonomous surface vehicle (ASV). The module trails a given target while also avoiding static and dynamic surface hazards. When surface hazards are other moving boats, the motion planner must apply International Regulations for Avoiding Collisions at Sea (COLREGS). A key subset of these rules has been implemented in the software. In case contact with the target is lost, the software can receive and follow a "reacquisition route," provided by a complementary system, until the target is reacquired. The programmatic intention is that the trailed target is a submarine, although any mobile naval platform could serve as the target. The algorithmic approach to combining motion with a (possibly moving) goal location, while avoiding local hazards, may be applicable to robotic rovers, automated landing systems, and autonomous airships. The software operates in JPL's CARACaS (Control Architecture for Robotic Agent Command and Sensing) software architecture and relies on other modules for environmental perception data and information on the predicted detectability of the target, as well as the low-level interface to the boat controls.
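To give a flavor of what applying a COLREGS subset means for a planner, here is a deliberately simplified illustration. It is not the CARACaS implementation: a real planner must also use range, closing speed, and the full rule text. The sketch classifies an encounter from the contact's relative bearing and the heading difference, and names the maneuver a planner would bias its candidate trajectories toward.

```python
# Simplified COLREGS encounter classification (illustrative only; sector
# boundaries follow the conventional 22.5-degree abaft-the-beam geometry).
def colregs_encounter(own_heading, contact_bearing, contact_heading):
    """All angles in degrees; bearings are absolute (true)."""
    rel = (contact_bearing - own_heading) % 360            # bearing relative to own bow
    head_diff = abs(((contact_heading - own_heading) % 360) - 180)
    if head_diff < 22.5 and (rel < 22.5 or rel > 337.5):
        return "head-on: alter course to starboard"
    if 112.5 <= rel <= 247.5:
        return "contact overtaking from astern: stand on"
    if rel < 112.5:
        return "crossing with contact to starboard: give way"
    return "crossing with contact to port: stand on"

print(colregs_encounter(0, 5, 185))    # near-reciprocal headings, contact ahead
print(colregs_encounter(0, 45, 270))   # contact crossing from starboard
```

In a sampling-based planner, the returned label would typically be turned into a cost term that penalizes candidate trajectories violating the indicated maneuver.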
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Nicewarner, Keith
2006-01-01
We present a multi-agent model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user with situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as to control these agents at various levels of autonomy. This interface also permits the agents to query the user or request the user to perform tasks to help achieve the commanded goals. We conclude by describing a scenario where these two agents and a human interact to cooperatively detect, diagnose, and recover from a simulated spacecraft fault.
NASA Astrophysics Data System (ADS)
Hanford, Scott D.
Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. 
Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
Navigation strategies for multiple autonomous mobile robots moving in formation
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1991-01-01
The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
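A minimal sketch of the nearest-neighbor-tracking idea follows. The chain ordering, gains, and formation offset are assumptions for illustration, not the paper's values: each follower, modeled as a point, tracks its nearest preceding robot at a fixed offset while the leader moves on a straight course, and the fleet settles into an echelon.

```python
import numpy as np

# Nearest-neighbour-tracking formation sketch (assumed gains and offsets):
# each follower servos toward (nearest preceding robot + fixed offset) with a
# feed-forward of the leader velocity.
def simulate(steps=4000, dt=0.01, gain=2.0):
    offset = np.array([-1.0, 1.0])                           # desired formation offset
    pos = np.array([[0.0, 0.0], [-3.0, 4.0], [-6.0, -2.0]])  # leader + two scattered followers
    leader_v = np.array([1.0, 0.0])
    for _ in range(steps):
        pos[0] += dt * leader_v                              # leader: straight-line motion
        for i in (1, 2):
            d = pos[:i] - pos[i]                             # robots ahead in the chain
            j = int(np.argmin(np.hypot(d[:, 0], d[:, 1])))   # nearest preceding robot
            target = pos[j] + offset                         # hold the formation offset
            pos[i] += dt * (gain * (target - pos[i]) + leader_v)
    return pos

final = simulate()
```

With the leader-velocity feed-forward, each follower's tracking error decays exponentially at the proportional gain, so from scattered starting positions the fleet converges to a stable staircase formation behind the leader.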
Semi-autonomous exploration of multi-floor buildings with a legged robot
NASA Astrophysics Data System (ADS)
Wenger, Garrett J.; Johnson, Aaron M.; Taylor, Camillo J.; Koditschek, Daniel E.
2015-05-01
This paper presents preliminary results of a semi-autonomous building exploration behavior using the hexapedal robot RHex. Stairwells are used in virtually all multi-floor buildings, and so in order for a mobile robot to effectively explore, map, clear, monitor, or patrol such buildings it must be able to ascend and descend stairwells. However most conventional mobile robots based on a wheeled platform are unable to traverse stairwells, motivating use of the more mobile legged machine. This semi-autonomous behavior uses a human driver to provide steering input to the robot, as would be the case in, e.g., a tele-operated building exploration mission. The gait selection and transitions between the walking and stair climbing gaits are entirely autonomous. This implementation uses an RGBD camera for stair acquisition, which offers several advantages over a previously documented detector based on a laser range finder, including significantly reduced acquisition time. The sensor package used here also allows for considerable expansion of this behavior. For example, complete automation of the building exploration task driven by a mapping algorithm and higher level planner is presently under development.
Behavior-based multi-robot collaboration for autonomous construction tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for the autonomous construction of a structure through the assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are also applicable to terrestrial construction tasks.
Steering of an automated vehicle in an unstructured environment
NASA Astrophysics Data System (ADS)
Kanakaraju, Sampath; Shanmugasundaram, Sathish K.; Thyagarajan, Ramesh; Hall, Ernest L.
1999-08-01
The purpose of this paper is to describe a high-level path planning logic which processes the data from a vision system and an ultrasonic obstacle avoidance system and steers an autonomous mobile robot between obstacles. The test bed was an autonomous robot built at the University of Cincinnati, and this logic was tested and debugged on this machine. Attempts have already been made to incorporate a fuzzy system on a similar robot, and this paper extends them to take advantage of the robot's ZTR capability. Using the integrated vision system, the vehicle senses its location and orientation. A rotating ultrasonic sensor is used to map the location and size of possible obstacles. With these inputs, the fuzzy logic controls the speed and steering decisions of the robot. With the incorporation of this logic, Bearcat II has been very successful in avoiding obstacles. This was demonstrated at the Ground Robotics Competition conducted by the AUVS in June 1999, where it travelled a distance of 154 feet along a 10 ft. wide path ridden with obstacles. This logic proved to be a significant contributing factor in this feat of Bearcat II.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The NASA Centennial Challenges prize, level one, is presented to team Mountaineers for successfully completing level one of the NASA 2014 Sample Return Robot Challenge, from left, Ryan Watson, Team Mountaineers; Lucas Behrens, Team Mountaineers; Jarred Strader, Team Mountaineers; Yu Gu, Team Mountaineers; Scott Harper, Team Mountaineers; Dorothy Rasco, NASA Deputy Associate Administrator for the Space Technology Mission Directorate; Laurie Leshin, Worcester Polytechnic Institute (WPI) President; David Miller, NASA Chief Technologist; Alexander Hypes, Team Mountaineers; Nick Ohi, Team Mountaineers; Marvin Cheng, Team Mountaineers; Sam Ortega, NASA Program Manager for Centennial Challenges; and Tanmay Mandal, Team Mountaineers, Saturday, June 14, 2014, at Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Mountaineers was the only team to complete the level one challenge. During the competition, teams were required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge was to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The NASA Centennial Challenges prize, level one, is presented to team Mountaineers for successfully completing level one of the NASA 2014 Sample Return Robot Challenge, from left, Ken Stafford, WPI Challenge technical advisor; Colleen Shaver, WPI Challenge Manager; Ryan Watson, Team Mountaineers; Marvin Cheng, Team Mountaineers; Alexander Hypes, Team Mountaineers; Jarred Strader, Team Mountaineers; Lucas Behrens, Team Mountaineers; Yu Gu, Team Mountaineers; Nick Ohi, Team Mountaineers; Dorothy Rasco, NASA Deputy Associate Administrator for the Space Technology Mission Directorate; Scott Harper, Team Mountaineers; Tanmay Mandal, Team Mountaineers; David Miller, NASA Chief Technologist; Sam Ortega, NASA Program Manager for Centennial Challenges, Saturday, June 14, 2014, at Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Mountaineers was the only team to complete the level one challenge. During the competition, teams were required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge was to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
The Challenge of Planning and Execution for Spacecraft Mobile Robots
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Gawdiak, Yuri; Clancy, Daniel (Technical Monitor)
2002-01-01
The need for spacecraft mobile robots continues to grow. These robots offer the potential to increase the capability, productivity, and duration of space missions while decreasing mission risk and cost. Spacecraft Mobile Robots (SMRs) can serve a number of functions inside and outside of spacecraft from simpler tasks, such as performing visual diagnostics and crew support, to more complex tasks, such as performing maintenance and in-situ construction. One of the predominant challenges to deploying SMRs is to reduce the need for direct operator interaction. Teleoperation is often not practical due to the communication latencies incurred because of the distances involved and in many cases a crewmember would directly perform a task rather than teleoperate a robot to do it. By integrating a mixed-initiative constraint-based planner with an executive that supports adjustably autonomous control, we intend to demonstrate the feasibility of autonomous SMRs by deploying one inside the International Space Station (ISS) and demonstrate in simulation one that operates outside of the ISS. This paper discusses the progress made at NASA towards this end, the challenges ahead, and concludes with an invitation to the research community to participate.
An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant
NASA Astrophysics Data System (ADS)
Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter
2017-09-01
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first of all on the product structure, we inform the agent through a generic approach based on product models. The product model is then transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate “the problem of loosening a connection and the distribution of the work” as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs that carry out the assistance. The aim of this research is to equip robot systems with knowledge and skills that allow them to perform their assistance autonomously, ultimately improving the ergonomics of disassembly workstations.
Distributed cooperating processes in a mobile robot control system
NASA Technical Reports Server (NTRS)
Skillman, Thomas L., Jr.
1988-01-01
A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.
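The blackboard pattern this control architecture builds on is easy to sketch: knowledge sources fire opportunistically whenever the data they need appears on a shared store, which is how symbolic AI modules and numeric control modules can be loosely coupled. A minimal Python illustration (the source names and the speech-to-path chain are invented for the example, not the Flying Eye's actual modules):

```python
class Blackboard:
    """Minimal blackboard: knowledge sources watch for data they
    can consume and post their results back for other sources."""
    def __init__(self):
        self.data = {}
        self.sources = []  # (needed_keys, produced_key, fn)

    def add_source(self, needs, produces, fn):
        self.sources.append((needs, produces, fn))

    def post(self, key, value):
        self.data[key] = value

    def run(self):
        # Fire any source whose inputs are present and whose
        # output is still missing, until no source can progress.
        progressed = True
        while progressed:
            progressed = False
            for needs, produces, fn in self.sources:
                if produces not in self.data and all(k in self.data for k in needs):
                    self.data[produces] = fn(self.data)
                    progressed = True
        return self.data
```

A speech-understanding source could post a parsed command, which in turn triggers a path-planning source — no module calls another directly, matching the decoupling the abstract describes.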
Biologically-inspired adaptive obstacle negotiation behavior of hexapod robots
Goldschmidt, Dennis; Wörgötter, Florentin; Manoonpong, Poramate
2014-01-01
Neurobiological studies have shown that insects are able to adapt leg movements and posture for obstacle negotiation in changing environments. Moreover, the distance to an obstacle where an insect begins to climb is found to be a major parameter for successful obstacle negotiation. Inspired by these findings, we present an adaptive neural control mechanism for obstacle negotiation behavior in hexapod robots. It combines locomotion control, backbone joint control, local leg reflexes, and neural learning. While the first three components generate locomotion including walking and climbing, the neural learning mechanism allows the robot to adapt its behavior for obstacle negotiation with respect to changing conditions, e.g., variable obstacle heights and different walking gaits. By successfully learning the association of an early, predictive signal (conditioned stimulus, CS) and a late, reflex signal (unconditioned stimulus, UCS), both provided by ultrasonic sensors at the front of the robot, the robot can autonomously find an appropriate distance from an obstacle to initiate climbing. The adaptive neural control was developed and tested first on a physical robot simulation, and was then successfully transferred to a real hexapod robot, called AMOS II. The results show that the robot can efficiently negotiate obstacles with a height up to 85% of the robot's leg length in simulation and 75% in a real environment. PMID:24523694
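The CS/UCS association described above follows the classical-conditioning scheme: a weight on the early predictive sonar signal grows whenever that signal precedes the late reflex signal, until the predictive signal alone initiates climbing at a larger distance. A toy Python sketch (the distances, learning rate, and thresholds are illustrative, not AMOS II's actual parameters):

```python
def train_climb_trigger(trials=20, lr=0.3):
    """Correlation-based learning of a climb trigger: the weight on
    the early predictive sonar signal (CS) grows whenever its trace
    coincides with the late reflex signal (UCS)."""
    w = 0.0
    for _ in range(trials):
        cs_trace = 0.0
        for d in range(60, 0, -1):           # approach; distance in cm
            cs = 1.0 if d <= 30 else 0.0     # early, predictive sonar
            ucs = 1.0 if d <= 10 else 0.0    # late, reflexive sonar
            cs_trace = max(cs, 0.9 * cs_trace)
            w += lr * cs_trace * ucs         # strengthen CS->UCS link
            if w * cs + ucs >= 1.0:          # climbing initiated
                break
    return w

def trigger_distance(w):
    """Distance at which climbing is initiated for a given weight."""
    for d in range(60, 0, -1):
        cs = 1.0 if d <= 30 else 0.0
        ucs = 1.0 if d <= 10 else 0.0
        if w * cs + ucs >= 1.0:
            return d
    return 0
```

Before learning the robot only reacts at the reflex distance; after a few trials the learned weight lets the predictive signal trigger climbing much earlier.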
2012-09-01
away from the MOCU. The semi-autonomous mode was preferred over the teleoperated mode for multitasking, maintaining SA, avoiding obstacles, and... [the remainder of this excerpt is table residue from the source report: operator ratings of interface features such as software icons, pull-down menus, graphics/drawing features, email, navigating to the next waypoint or set of hash lines, and the ability to multitask (operate/monitor the robot while communicating on the radio)]
Methods and Apparatus for Autonomous Robotic Control
NASA Technical Reports Server (NTRS)
Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)
2017-01-01
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots.
Sherwin, Tyrone; Easte, Mikala; Chen, Andrew Tzer-Yeu; Wang, Kevin I-Kai; Dai, Wenbin
2018-02-14
Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.
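The direction-finding step can be illustrated compactly: compare RSS across equally spaced directional antennae, refine with a circular weighted mean of the strongest antenna and its neighbors, and damp fluctuation over time. The smoothing below is a simple exponential filter standing in for the paper's particle filter; the antenna count and gain are assumptions:

```python
import math

def rss_bearing(rss):
    """Most probable target bearing (radians) from N equally spaced
    directional antennae: take the strongest antenna and refine with
    a weighted circular mean over it and its two neighbors."""
    n = len(rss)
    i = max(range(n), key=rss.__getitem__)
    idxs = [(i - 1) % n, i, (i + 1) % n]
    x = sum(rss[j] * math.cos(2 * math.pi * j / n) for j in idxs)
    y = sum(rss[j] * math.sin(2 * math.pi * j / n) for j in idxs)
    return math.atan2(y, x) % (2 * math.pi)

def smooth_bearing(estimate, measurement, alpha=0.3):
    """Exponential smoothing on the circle to damp fluctuating
    real-time RSS data (a stand-in for the particle filter)."""
    diff = (measurement - estimate + math.pi) % (2 * math.pi) - math.pi
    return (estimate + alpha * diff) % (2 * math.pi)
```

The wrap-around arithmetic in `smooth_bearing` keeps the correction on the short way around the circle, which matters when the estimate straddles 0/2π.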
Efforts toward an autonomous wheelchair - biomed 2011.
Barrett, Steven; Streeter, Robert
2011-01-01
An autonomous wheelchair is in development to provide mobility to those with significant physical challenges. The overall goal of the project is to develop a wheelchair that is fully autonomous, with the ability to navigate about an environment and negotiate obstacles. As a starting point for the project, we have reverse engineered the joystick control system of an off-the-shelf, commercially available wheelchair. The joystick control has been replaced with a microcontroller-based system. The microcontroller has the capability to interface with a number of subsystems currently under development, including wheel odometers, obstacle avoidance sensors, and ultrasonic-based wall sensors. This paper will discuss the microcontroller-based system and provide a detailed system description. Results of this study may be adapted to commercial or military robot control.
Deployment of Shaped Charges by a Semi-Autonomous Ground Vehicle
2007-06-01
lives on a daily basis. BigFoot seeks to replace the local human component by deploying and remotely detonating shaped charges to destroy IEDs...robotic arm to deploy and remotely detonate shaped charges. BigFoot incorporates improved communication range over previous Autonomous Ground Vehicles...and an updated user interface that includes controls for the arm and camera by interfacing multiple microprocessors. BigFoot is capable of avoiding
Autonomous dexterous end-effectors for space robotics
NASA Technical Reports Server (NTRS)
Bekey, George A.; Iberall, Thea; Liu, Huan
1989-01-01
The development of a knowledge-based controller is summarized for the Belgrade/USC robot hand, a five-fingered end effector, designed for maximum autonomy. The biological principles of the hand and its architecture are presented. The conceptual and software aspects of the grasp selection system are discussed, including both the effects of the geometry of the target object and the task to be performed. Some current research issues are presented.
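At its core, a knowledge-based grasp selector of this kind reduces to rules over object geometry and the task to be performed. The Python sketch below is purely illustrative (the grasp classes, thresholds, and rule order are invented, not taken from the Belgrade/USC controller):

```python
def select_grasp(object_width_cm, object_shape, task):
    """Toy rule-based grasp selection: both the geometry of the
    target object and the task determine the grasp class."""
    if task == "precision" and object_width_cm < 2:
        return "pinch"       # fingertip opposition for fine work
    if object_shape == "cylinder":
        return "wrap"        # fingers curl around the long axis
    if object_shape == "sphere" and object_width_cm < 8:
        return "spherical"   # fingers spread around the ball
    return "power"           # default whole-hand grasp
```

Note how the task can override geometry: a small cylinder handled for precision work gets a pinch grasp, not a wrap, mirroring the abstract's point that task and geometry both matter.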
Natural Language Direction Following for Robots in Unstructured Unknown Environments
2015-01-15
Looking back, I can safely say my most fruitful research was the result of these collaborations. Seeing peers learn and struggle alongside me has been...performance gains on such diverse problems as autonomous driving, playing Super Mario, handwriting recogni- tion, helicopter control, and image...similarity metric between what the direction says and what the robot sees. These are useful to describe the landmark field of the Spatial Description
Evaluating the Dynamics of Agent-Environment Interaction
2001-05-01
a color sensor in the gripper, a radio transmitter/receiver for communication and data gathering, and an ultrasound/radio triangulation system for...Cooperative Mobile Robot Control', Autonomous Robots 4(4), 387-403. Vaughan, R. T., Sty, K., Sukhatme, G. S. & Mataric, M. J. (2000), Whistling in the Dark...
Evolution of a radio communication relay system
NASA Astrophysics Data System (ADS)
Nguyen, Hoa G.; Pezeshkian, Narek; Hart, Abraham; Burmeister, Aaron; Holz, Kevin; Neff, Joseph; Roth, Leif
2013-05-01
Providing long-distance non-line-of-sight control for unmanned ground robots has long been recognized as a problem, considering the nature of the required high-bandwidth radio links. In the early 2000s, the DARPA Mobile Autonomous Robot Software (MARS) program funded the Space and Naval Warfare Systems Center (SSC) Pacific to demonstrate a capability for autonomous mobile communication relaying on a number of Pioneer laboratory robots. This effort also resulted in the development of ad hoc networking radios and software that were later leveraged in the development of a more practical and logistically simpler system, the Automatically Deployed Communication Relays (ADCR). Funded by the Joint Ground Robotics Enterprise and internally by SSC Pacific, several generations of ADCR systems introduced increasingly more capable hardware and software for automatic maintenance of communication links through deployment of static relay nodes from mobile robots. This capability was finally tapped in 2010 to fulfill an urgent need from theater. 243 kits of ruggedized, robot-deployable communication relays were produced and sent to Afghanistan to extend the range of EOD and tactical ground robots in 2012. This paper provides a summary of the evolution of the radio relay technology at SSC Pacific, and then focuses on the latest two stages, the Manually-Deployed Communication Relays and the latest effort to automate the deployment of these ruggedized and fielded relay nodes.
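The core automatic-deployment decision is a link-quality policy: drop a static relay when the signal back to the last node in the chain degrades past a threshold, after which the new relay becomes the robot's peer. A minimal Python sketch (the threshold, relay budget, and relative-RSSI bookkeeping are assumptions for illustration, not the ADCR design):

```python
def relay_drop_points(rssi_trace, threshold_dbm=-80.0, budget=3):
    """Scan link-quality samples along the robot's path and mark
    where a relay should be deployed: whenever the RSSI to the last
    node in the chain falls to the threshold. Deploying a relay
    resets the reference, since the link is now measured from it."""
    drops, reference = [], 0.0  # reference: RSSI at the last relay
    for i, rssi in enumerate(rssi_trace):
        if budget and rssi - reference <= threshold_dbm:
            drops.append(i)
            budget -= 1
            reference = rssi    # new relay heads the link chain
    return drops
```

Running this over a trace of steadily weakening samples yields one drop each time the link from the most recent node decays by the threshold amount, until the relay budget is exhausted.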
Algorithms of walking and stability for an anthropomorphic robot
NASA Astrophysics Data System (ADS)
Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.
2017-09-01
Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement, so-called patterns, each of which can be considered as an agent of some multi-agent system [1]. To control the AP-601 robot, an information and communication infrastructure has been created that represents a multi-agent system, allowing algorithms for individual movement patterns to be developed and run in the system as a set of independently executed and interacting agents. The algorithms for lateral movement of the AP-601 series anthropomorphic robot, with active stability provided by the stability pattern, are presented.
The role of robotics in computer controlled polishing of large and small optics
NASA Astrophysics Data System (ADS)
Walker, David; Dunn, Christina; Yu, Guoyu; Bibby, Matt; Zheng, Xiao; Wu, Hsing Yu; Li, Hongyu; Lu, Chunlian
2015-08-01
Following formal acceptance by ESO of three 1.4m hexagonal off-axis prototype mirror segments, one circular segment, and certification of our optical test facility, we turn our attention to the challenge of segment mass-production. In this paper, we focus on the role of industrial robots, highlighting complementarity with Zeeko CNC polishing machines, and presenting results using robots to provide intermediate processing between CNC grinding and polishing. We also describe the marriage of robots and Zeeko machines to automate currently manual operations; steps towards our ultimate vision of fully autonomous manufacturing cells, with impact throughout the optical manufacturing community and beyond.
Architecture of autonomous systems
NASA Technical Reports Server (NTRS)
Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.
1986-01-01
Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.
Architecture of autonomous systems
NASA Technical Reports Server (NTRS)
Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.
1989-01-01
Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for assisting the navigation of a robotic wheelchair is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface provides two navigation modes: non-autonomous and autonomous. Non-autonomous driving of the robotic wheelchair is done by means of a hand joystick, which directs the motion of the vehicle within the environment. Autonomous driving is performed when the user of the wheelchair has to turn (90, −90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results of the interface are also shown in this work.
Autonomous biomorphic robots as platforms for sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tilden, M.; Hasslacher, B.; Mainieri, R.
1996-10-01
The idea of building autonomous robots that can carry out complex and nonrepetitive tasks is an old one, so far unrealized in any meaningful hardware. Tilden has shown recently that there are simple, processor-free solutions to building autonomous mobile machines that continuously adapt to unknown and hostile environments, are designed primarily to survive, and are extremely resistant to damage. These devices use smart mechanics and simple (low component count) electronic neuron control structures having the functionality of biological organisms from simple invertebrates to sophisticated members of the insect and crab family. These devices are paradigms for the development of autonomous machines that can carry out directed goals. The machine then becomes a robust survivalist platform that can carry sensors or instruments. These autonomous roving machines, now in an early stage of development (several proof-of-concept prototype walkers have been built), can be developed so that they are inexpensive, robust, and versatile carriers for a variety of instrument packages. Applications are immediate and many, in areas as diverse as prosthetics, medicine, space, construction, nanoscience, defense, remote sensing, environmental cleanup, and biotechnology.
The trade-off between morphology and control in the co-optimized design of robots.
Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya
2017-01-01
Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques.
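The morphology-control trade-off can be made concrete with a toy objective in which the best control setting depends on the morphology. The sketch below replaces the paper's Bayesian optimization with a deterministic grid search purely for illustration; the "locomotion score" and parameter ranges are invented:

```python
def performance(leg_length, gait_freq):
    """Toy locomotion score: top speed requires matching the gait
    frequency to the leg length, and a mid-range leg is inherently
    best. Purely illustrative, not the paper's objective."""
    return -(leg_length * gait_freq - 1.0) ** 2 \
           - 0.1 * (leg_length - 0.5) ** 2

def best_score(co_optimize, fixed_leg=0.9):
    """MC co-optimization searches morphology AND control; the
    control-only (C) process tunes control for a fixed, possibly
    poor, morphology."""
    legs = [k / 10 for k in range(1, 11)] if co_optimize else [fixed_leg]
    freqs = [k / 2 for k in range(1, 11)]
    return max(performance(l, f) for l in legs for f in freqs)
```

Even with the best control, the fixed morphology cannot reach the score that jointly optimizing both parameter sets attains — the point the abstract makes about C-only optimization being outperformed.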
NASA Astrophysics Data System (ADS)
Kelkar, Nikhal; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic approach for steering and speed control, a neuro-fuzzy approach for ultrasound sensing (not discussed in this paper) and an overall expert system. The advantages of a modular system are related to portability and transportability, i.e. any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The speed and steering fuzzy logic controller is supervised by a 486 computer through a multi-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system in which high speed computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected by a vision tracking device that transmits the X, Y coordinates of the lane marker to the control computer. Simulation and testing of these systems yielded promising results. This design, in its modularity, creates a portable autonomous fuzzy logic controller applicable to any mobile vehicle with only minor adaptations.
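A fuzzy steering rule base of the sort described reduces to membership functions plus weighted-average defuzzification. A minimal Sugeno-style sketch in Python (the membership ranges, rule outputs, and single lane-error input are illustrative, not the Bearcat controller's tuning):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(lane_error):
    """Fuzzy steering: lane_error in metres (negative = left of the
    lane center) -> steering angle in degrees. Each rule pairs a
    membership degree with a crisp output; the result is the
    weighted average (Sugeno defuzzification)."""
    rules = [
        (tri(lane_error, -2.0, -1.0, 0.0), 20.0),   # far left  -> steer right
        (tri(lane_error, -1.0,  0.0, 1.0),  0.0),   # centered  -> straight
        (tri(lane_error,  0.0,  1.0, 2.0), -20.0),  # far right -> steer left
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because the memberships overlap, the output varies smoothly between rules: halfway between "centered" and "far left" the command blends the two rule outputs.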
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design ideas for an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life, with dependability and interaction in mind. The developed robot has three objectives: 1. develop an autonomous robot; 2. design the robot for mobility and robustness; 3. develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering aesthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for the remote-connection service and the educational purpose.
Research state-of-the-art of mobile robots in China
NASA Astrophysics Data System (ADS)
Wu, Lin; Zhao, Jinglun; Zhang, Peng; Li, Shiqing
1991-03-01
Several newly developed mobile robots in China are described in this paper, including a master-slave telerobot, a six-legged robot, a biped walking robot, a remote inspection robot, a crawler-type mobile robot, and an autonomous mobile vehicle. Some relevant technologies are also described.
Cooperative crossing of traffic intersections in a distributed robot system
NASA Astrophysics Data System (ADS)
Rausch, Alexander; Oswald, Norbert; Levi, Paul
1995-09-01
In traffic scenarios a distributed robot system has to cope with problems such as resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing an intersection as a shared resource forces the robot to coordinate its actions with those of other robots, e.g. by means of negotiation. We discuss the influence of cooperation issues on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot.
Kitson, Philip J; Glatzel, Stefan; Cronin, Leroy
2016-01-01
An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach and subsequently make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the Python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic 'programs' which can run on similar low cost, user-constructed robotic platforms towards an 'open-source' regime in the area of chemical synthesis.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been fitted with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from the visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
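Integrating a gyroscope with an absolute visual fix, as described above, is commonly done with a complementary filter; the sketch below is a generic illustration of that idea (the gain k, time step, and data layout are assumptions, not the authors' algorithm). The gyro rate is integrated for smooth short-term heading, while an absolute heading from the ceiling target, whenever one is available, pulls the estimate back to cancel drift.

```python
def track_heading(gyro_rates, vision_fixes, dt=0.1, k=0.98):
    """Estimate heading over time by dead-reckoning the gyro rate (rad/s)
    and blending in absolute visual fixes with a complementary filter.
    vision_fixes[i] is an absolute heading (rad) or None when the ceiling
    target is not visible at step i."""
    heading = 0.0
    out = []
    for rate, fix in zip(gyro_rates, vision_fixes):
        heading += rate * dt               # short-term: integrate gyro
        if fix is not None:                # long-term: correct drift
            heading = k * heading + (1 - k) * fix
        out.append(heading)
    return out
```

With k close to 1 the estimate follows the gyro between fixes and converges slowly toward the visual reference, which is the usual trade-off between gyro noise rejection and drift correction speed.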
Autonomous learning based on cost assumptions: theoretical studies and experiments in robot control.
Ribeiro, C H; Hemerly, E M
2000-02-01
Autonomous learning techniques are based on experience acquisition. In most realistic applications, experience is time-consuming: it implies sensor reading, actuator control and algorithmic updates, constrained by the learning system dynamics. The crudeness of the information upon which classical learning algorithms operate makes such problems too difficult and unrealistic. Nonetheless, additional information for facilitating the learning process ideally should be embedded in such a way that the structural, well-studied characteristics of these fundamental algorithms are maintained. We investigate in this article a more general formulation of the Q-learning method that allows for a spreading of information derived from single updates towards a neighbourhood of the instantly visited state and converges to optimality. We show how this new formulation can be used as a mechanism to safely embed prior knowledge about the structure of the state space, and demonstrate it in a modified implementation of a reinforcement learning algorithm in a real robot navigation task.
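The generalised Q-learning formulation described above spreads each update towards a neighbourhood of the visited state. A minimal sketch of that idea follows; the tabular layout, the single attenuation constant sigma, and the neighbourhood function are simplifying assumptions for illustration, not the authors' exact spreading mechanism.

```python
def spread_q_update(Q, neighbours, s, a, r, s_next, alpha=0.5, gamma=0.9, sigma=0.5):
    """One Q-learning step that also applies an attenuated version of the
    temporal-difference update to neighbours of the visited state s.
    Q is a dict-of-dicts table: Q[state][action] -> value."""
    target = r + gamma * max(Q[s_next].values())
    td = target - Q[s][a]
    Q[s][a] += alpha * td                  # classical update at the visited state
    for z in neighbours(s):                # spread: attenuated update nearby
        Q[z][a] += alpha * sigma * td
    return Q
```

The spread term is where prior knowledge about state-space structure enters: states known to be "close" (e.g. adjacent grid cells in a navigation task) share each experience, so fewer real-world trials are needed before the table becomes useful.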
Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation
NASA Technical Reports Server (NTRS)
Lacaze, Alberto; Meystel, Michael; Meystel, Alex
1994-01-01
This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR) which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experiences of actions within a particular environment (we will call it an Astro-baby). The learning techniques are rooted in the recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.
2006-01-01
The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an airbearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supplying views of EVA operations to IVA and/or ground crews monitoring the EVA, and carrying out independent visual inspections of areas of interest around the spacecraft.
To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest, and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.
Fusing Laser Reflectance and Image Data for Terrain Classification for Small Autonomous Robots
2014-12-01
limit us to low-power, lightweight sensors, and a maximum range of approximately 5 meters. Contrast these robot characteristics to typical terrain ... classification work which uses large autonomous ground vehicles with sensors mounted high above the ground. Terrain classification for small autonomous ... into predefined classes [10], [11]. However, wheeled vehicles offer the ability to use non-traditional sensors such as vibration sensors [12] and
NASA Technical Reports Server (NTRS)
Martin-Alvarez, A.; Hayati, S.; Volpe, R.; Petras, R.
1999-01-01
An advanced design and implementation of a Control Architecture for Long Range Autonomous Planetary Rovers is presented using hierarchical top-down task decomposition, and the common structure of each design is presented based on feedback control theory. Graphical programming is presented as a common intuitive language for the design when a large design team is composed of managers, architecture designers, engineers, programmers, and maintenance personnel. The whole design of the control architecture consists of the classic control concepts of cyclic data processing and event-driven reaction to achieve all the reasoning and behaviors needed. For this purpose, a commercial graphical tool is presented that includes the mentioned control capabilities. Message queues are used for inter-communication among control functions, allowing Artificial Intelligence (AI) reasoning techniques based on queue manipulation. Experimental results show a highly autonomous control system running in real time on top of the JPL micro-rover Rocky 7, controlling several robotic devices simultaneously. This paper validates the synergy between Artificial Intelligence and classic control concepts in achieving an advanced Control Architecture for Long Range Autonomous Planetary Rovers.
INL Autonomous Navigation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation, including obstacle avoidance, waypoint navigation, and path planning, in both indoor and outdoor environments.
Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2
2015-03-01
In the past, robot operation has been a high-cognitive ... increase performance and reduce perceived workload. The aids were overlays displaying what an autonomous robot perceived in the environment and the subsequent course of action planned by the robot. Eight active-duty US Army Soldiers completed 16 scenario missions using an operator interface
NASA Astrophysics Data System (ADS)
Schubert, Oliver J.; Tolle, Charles R.
2004-09-01
Over the last decade the world has seen numerous autonomous vehicle programs. Wheels and track designs are the basis for many of these vehicles. This is primarily due to four main reasons: a vast preexisting knowledge base for these designs, energy efficiency of power sources, scalability of actuators, and the lack of control systems technologies for handling alternate highly complex distributed systems. Though large efforts seek to improve the mobility of these vehicles, many limitations still exist for these systems within unstructured environments, e.g. limited mobility within industrial and nuclear accident sites where existing plant configurations have been extensively changed. These unstructured operational environments include missions for exploration, reconnaissance, and emergency recovery of objects within reconfigured or collapsed structures, e.g. bombed buildings. More importantly, these environments present a clear and present danger for direct human interactions during the initial phases of recovery operations. Clearly, the current classes of autonomous vehicles are incapable of performing in these environments. Thus the next generation of designs must include highly reconfigurable and flexible autonomous robotic platforms. This new breed of autonomous vehicles will be both highly flexible and environmentally adaptable. Presented in this paper is one of the most successful designs from nature, the snake-eel-worm (SEW). This design implements shape memory alloy (SMA) actuators which allow for scaling of the robotic SEW designs from sub-micron scale to heavy industrial implementations without major conceptual redesigns as required in traditional hydraulic, pneumatic, or motor driven systems. Autonomous vehicles based on the SEW design possess the ability to easily move between air-based environments and fluid-based environments with limited or no reconfiguration.
Under a SEW-designed vehicle, one not only achieves vastly improved maneuverability within a highly unstructured environment, but also gains robotic manipulation abilities, normally relegated as secondary add-ons within existing vehicles, all within one small condensed package. The prototype design presented includes a Beowulf-style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at the DOE's INEEL.
NASA Astrophysics Data System (ADS)
Leahy, M. B., Jr.; Cassiday, B. K.
1993-02-01
Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise to assist the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. Small batch sizes, feature uncertainty, and varying work load conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.
NASA Astrophysics Data System (ADS)
Leahy, Michael B., Jr.; Cassiday, Brian K.
1992-11-01
Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise to assist the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. The small batch sizes, feature uncertainty, and varying work load conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions from fully autonomous to teleoperated. Working with national laboratories and private industry we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.
From Autonomous Robots to Artificial Ecosystems
NASA Astrophysics Data System (ADS)
Mastrogiovanni, Fulvio; Sgorbissa, Antonio; Zaccaria, Renato
During the past few years, starting from the two mainstream fields of Ambient Intelligence [2] and Robotics [17], several authors recognized the benefits of the so-called Ubiquitous Robotics paradigm. According to this perspective, mobile robots are no longer autonomous, physically situated and embodied entities adapting themselves to a world tailored for humans: on the contrary, they are able to interact with devices distributed throughout the environment and exchange heterogeneous information by means of communication technologies. Information exchange, coupled with simple actuation capabilities, is meant to replace physical interaction between robots and their environment. Two benefits are evident: (i) smart environments overcome inherent limitations of mobile platforms, whereas (ii) mobile robots offer a mobility dimension unknown to smart environments.
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large scale distributed robot teams derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to timely address the system's perceived need(s). This method improves on initial developments in this area by including often overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
Control of a free-flying robot manipulator system
NASA Technical Reports Server (NTRS)
Alexander, H.; Cannon, R. H., Jr.
1985-01-01
The goal of the research is to develop and test control strategies for a self-contained, free flying space robot. Such a robot would perform operations in space similar to those currently handled by astronauts during extravehicular activity (EVA). The focus of the work is to develop and carry out a program of research with a series of physical Satellite Robot Simulator Vehicles (SRSVs), two-dimensionally freely mobile laboratory models of autonomous free-flying space robots such as might perform extravehicular functions associated with operation of a space station or repair of orbiting satellites. The development of the SRSV and of some of the controller subsystems are described. The two-link arm was fitted to the SRSV base, and researchers explored the open-loop characteristics of the arm and thruster actuators. Work began on building the software foundation necessary for use of the on-board computer, as well as hardware and software for a local vision system for target identification and tracking.
An integrated dexterous robotic testbed for space applications
NASA Technical Reports Server (NTRS)
Li, Larry C.; Nguyen, Hai; Sauer, Edward
1992-01-01
An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview is presented of the system hardware and software configurations, and implementation is discussed of subsystem functions.
Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD.
Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan
2015-12-01
Increasingly, researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and feedback. The system was designed to incorporate both a humanoid robot and a human examiner. We compared child performance within the system across these conditions in a sample of preschool children with ASD (n = 8) and a control sample of typically developing children (n = 8). The system was well-tolerated in the sample, children with ASD exhibited greater attention to the robotic system than the human administrator, and for children with ASD imitation performance appeared superior during the robotic interaction.
Robot Manipulator Technologies for Planetary Exploration
NASA Technical Reports Server (NTRS)
Das, H.; Bao, X.; Bar-Cohen, Y.; Bonitz, R.; Lindemann, R.; Maimone, M.; Nesnas, I.; Voorhees, C.
1999-01-01
NASA exploration missions to Mars, initiated by the Mars Pathfinder mission in July 1997, will continue over the next decade. The missions require challenging innovations in robot design and improvements in autonomy to meet ambitious objectives under tight budget and time constraints. The authors are developing design tools, component technologies and capabilities to address these needs for manipulation with robots for planetary exploration. The specific developments are: 1) a software analysis tool to reduce robot design iteration cycles and optimize design solutions, 2) new piezoelectric ultrasonic motors (USM) for light-weight and high torque actuation in planetary environments, 3) use of advanced materials and structures for strong and light-weight robot arms and 4) intelligent camera-image coordinated autonomous control of robot arms for instrument placement and sample acquisition from a rover vehicle.
NASA Technical Reports Server (NTRS)
Mann, R. C.; Fujimura, K.; Unseren, M. A.
1992-01-01
One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots, and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace.
Evolutionary online behaviour learning and adaptation in real robots
Correia, Luís; Christensen, Anders Lyhne
2017-01-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130
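The online evolution setup described above can be sketched as a (1+1) evolutionary loop over a controller's weight vector. This is a generic illustration only: the Gaussian mutation, replace-on-tie acceptance rule, and function names are assumptions, not the specific algorithm used in the article.

```python
import random

def online_evolve(evaluate, n_weights, generations=100, sigma=0.1, seed=0):
    """(1+1) online evolution sketch: repeatedly mutate the current
    controller's weight vector and keep the mutant if it performs at
    least as well. `evaluate` scores a weight vector on the robot/task
    (higher is better); on real hardware each call is a timed trial."""
    rng = random.Random(seed)
    champ = [rng.uniform(-1, 1) for _ in range(n_weights)]
    champ_fit = evaluate(champ)
    for _ in range(generations):
        mutant = [w + rng.gauss(0, sigma) for w in champ]
        fit = evaluate(mutant)
        if fit >= champ_fit:          # accept on ties or improvements
            champ, champ_fit = mutant, fit
    return champ, champ_fit
```

Seeding `champ` with a solution pre-evolved in simulation, rather than random weights, corresponds to the article's second condition and is one way to keep the wall-clock cost of hardware trials down.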
EVALUATING ROBOT TECHNOLOGIES AS TOOLS TO EXPLORE RADIOLOGICAL AND OTHER HAZARDOUS ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; David I. Gertman; David J. Bruemmer
2008-03-01
There is a general consensus that robots could be beneficial in performing tasks within hazardous radiological environments. Most control of robots in hazardous environments involves master-slave or teleoperation relationships between the human and the robot. While teleoperation-based solutions keep humans out of harm's way, they also change the training requirements to accomplish a task. In this paper we present a research methodology that allowed scientists at Idaho National Laboratory to identify, develop, and prove a semi-autonomous robot solution for search and characterization tasks within a hazardous environment. Two experiments are summarized that validated the use of semi-autonomy and show that robot autonomy can help mitigate some of the performance differences between operators who have different levels of robot experience, and can improve performance over teleoperated systems.
NASA Technical Reports Server (NTRS)
Fogel, L. J.; Calabrese, P. G.; Walsh, M. J.; Owens, A. J.
1982-01-01
Ways in which autonomous behavior of spacecraft can be extended to treat situations wherein closed-loop control by a human may not be appropriate or even possible are explored. Predictive models that minimize mean least squared error and arbitrary cost functions are discussed. A methodology for extracting cyclic components for an arbitrary environment with respect to usual and arbitrary criteria is developed. An approach to prediction and control based on evolutionary programming is outlined. A computer program capable of predicting time series is presented. A design of a control system for a robotic device with partially unknown physical properties is presented.
Distributed Planning and Control for Teams of Cooperating Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
2004-06-15
This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of the control approaches for distributed planning and cooperation in multi-robot teams.
NASA Technical Reports Server (NTRS)
Steffen, Chris
1990-01-01
An overview of the time-delay problem and the reliability problem that arise in trying to perform robotic construction operations at a remote space location is presented. The effects of the time delay upon the control system design are itemized. A high-level overview is given of a decentralized method of control, which is expected to perform better than the centralized approach in solving the time-delay problem. The lower-level, decentralized, autonomous Troter Move-Bar algorithm is also presented (Troters are coordinated independent robots). The solution of the reliability problem is connected to adding redundancy to the system; one method of adding redundancy is given.
Integrating Artificial Immune, Neural and Endocrine Systems in Autonomous Sailing Robots
2010-09-24
Development of an adaptive hormone system capable of changing operation and control of the neural network depending on changing environmental conditions … first basic design of the MOOP and a simple neural-endocrine based …
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.; Fujimura, K.; Unseren, M.A.
One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace. 15 refs., 3 figs.
Artificial evolution: a new path for artificial intelligence?
Husbands, P; Harvey, I; Cliff, D; Miller, G
1997-06-01
Recently there have been a number of proposals for the use of artificial evolution as a radically new approach to the development of control systems for autonomous robots. This paper explains the artificial evolution approach, using work at Sussex to illustrate it. The paper revolves around a case study on the concurrent evolution of control networks and visual sensor morphologies for a mobile robot. Wider intellectual issues surrounding the work are discussed, as is the use of more abstract evolutionary simulations as a new potentially useful tool in theoretical biology.
Adaptive walking of a quadrupedal robot based on layered biological reflexes
NASA Astrophysics Data System (ADS)
Zhang, Xiuli; Mingcheng, E.; Zeng, Xiangyu; Zheng, Haojun
2012-07-01
A multiple-legged robot is traditionally controlled by using its dynamic model, but the dynamic-model-based approach fails to achieve satisfactory performance when the robot faces rough terrain and unknown environments. Referring to animals' neural control mechanisms, a control model is built for a quadruped robot to walk adaptively. The basic rhythmic motion of the robot is controlled by a rhythmic motion controller (RMC) comprising a central pattern generator (CPG) for the hip joints and a rhythmic coupler (RC) for the knee joints; the CPG and RC are related by motion mapping and rhythmic coupling. Multiple sensory-motor models, abstracted from the neural reflexes of a cat, are employed. These reflex models are organized so that they interact with the CPG in three layers, to meet different requirements of complexity and response time to the tasks. On the basis of the RMC and layered biological reflexes, a quadruped robot is constructed which can clear obstacles, walk uphill and downhill autonomously, and make turns voluntarily in uncertain environments, interacting with the environment in a way similar to that of an animal. The paper provides a biologically inspired architecture with which a robot can walk adaptively in uncertain environments in a simple and effective way, achieving better performance.
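A hip-joint CPG of the kind described is often modelled as coupled phase oscillators. The sketch below is an illustrative assumption, not the authors' exact model: four oscillators are pulled toward the phase offsets of a trot gait, and each hip angle command would then be `amplitude * sin(phase_i)`.

```python
import math

def cpg_step(phases, dt=0.01, omega=2 * math.pi, coupling=1.0):
    """One Euler step of four coupled phase oscillators (one per hip).

    The desired offsets encode a trot gait: diagonal legs in phase,
    lateral pairs half a cycle apart.
    """
    offsets = [0.0, math.pi, math.pi, 0.0]  # LF, RF, LH, RH
    new_phases = []
    for i, ph in enumerate(phases):
        dphi = omega  # intrinsic stepping frequency
        for j, other in enumerate(phases):
            if i != j:
                # Pull each oscillator toward its desired relative phase.
                dphi += coupling * math.sin((other - offsets[j]) - (ph - offsets[i]))
        new_phases.append(ph + dt * dphi)
    return new_phases
```

Sensory reflexes, as in the paper's layered architecture, would enter as additional feedback terms in `dphi`.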
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
2007-09-01
… behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that … Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots … based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic …
AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program
NASA Astrophysics Data System (ADS)
Gothard, Benny M.
2002-02-01
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (see "A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation"). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec G4 PowerPC (PPC), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against previous Demo II Semi-autonomous Surrogate Vehicle processing architectures, along with desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
Robotic Lunar Rover Technologies and SEI Supporting Technologies at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Klarer, Paul R.
1992-01-01
Existing robotic rover technologies at Sandia National Laboratories (SNL) can be applied toward the realization of a robotic lunar rover mission in the near term. Recent activities at the SNL-RVR have demonstrated the utility of existing rover technologies for performing remote field geology tasks similar to those envisioned on a robotic lunar rover mission. Specific technologies demonstrated include low-data-rate teleoperation, multivehicle control, remote site and sample inspection, standard bandwidth stereo vision, and autonomous path following based on both internal dead reckoning and an external position location update system. These activities serve to support the use of robotic rovers for an early return to the lunar surface by demonstrating capabilities that are attainable with off-the-shelf technology and existing control techniques. The breadth of technical activities at SNL provides many supporting technology areas for robotic rover development. These range from core competency areas and microsensor fabrication facilities, to actual space qualification of flight components that are designed and fabricated in-house.
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
Inexpensive robots used to teach dc circuits and electronics
NASA Astrophysics Data System (ADS)
Sidebottom, David L.
2017-05-01
This article describes inexpensive, autonomous robots, built without microprocessors, used in a college-level introductory physics laboratory course to motivate student learning of dc circuits. Detailed circuit descriptions are provided as well as a week-by-week course plan that can guide students from elementary dc circuits, through Kirchhoff's laws, and into simple analog integrated circuits with the motivational incentive of building an autonomous robot that can compete with others in a public arena.
From wheels to wings with evolutionary spiking circuits.
Floreano, Dario; Zufferey, Jean-Christophe; Nicoud, Jean-Daniel
2005-01-01
We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots.
Robot Lies in Health Care: When Is Deception Morally Permissible?
Matthias, Andreas
2015-06-01
Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). This poses the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled in order for robot deception to be morally permissible, and in some cases even morally indicated.
Grasping with a soft glove: intrinsic impedance control in pneumatic actuators
2017-01-01
The interaction of a robotic manipulator with unknown soft objects represents a significant challenge for traditional robotic platforms because of the difficulty in controlling the grasping force between a soft object and a stiff manipulator. Soft robotic actuators inspired by elephant trunks, octopus limbs and muscular hydrostats are suggestive of ways to overcome this fundamental difficulty. In particular, the large intrinsic compliance of soft manipulators such as ‘pneu-nets’—pneumatically actuated elastomeric structures—makes them ideal for applications that require interactions with an uncertain mechanical and geometrical environment. Using a simple theoretical model, we show how the geometric and material nonlinearities inherent in the passive mechanical response of such devices can be used to grasp soft objects using force control, and stiff objects using position control, without any need for active sensing or feedback control. Our study is suggestive of a general principle for designing actuators with autonomous intrinsic impedance control. PMID:28250097
Supervised autonomous robotic soft tissue surgery.
Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W
2016-05-04
The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery, removing the surgeon's hands, promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis (including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses) between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques. Copyright © 2016, American Association for the Advancement of Science.
Real-Time Hazard Detection and Avoidance Demonstration for a Planetary Lander
NASA Technical Reports Server (NTRS)
Epp, Chirold D.; Robertson, Edward A.; Carson, John M., III
2014-01-01
The Autonomous Landing Hazard Avoidance Technology (ALHAT) Project is chartered to develop an autonomous system combining guidance, navigation, and control with terrain sensing and recognition functions for crewed, cargo, and robotic planetary landing vehicles, and to mature it to a Technology Readiness Level (TRL) of six. In addition to precision landing close to a pre-mission defined landing location, the ALHAT system must be capable of autonomously identifying and avoiding surface hazards in real time to enable a safe landing under any lighting conditions. This paper provides an overview of the recent ALHAT closed-loop hazard detection and avoidance flight demonstrations on the Morpheus Vertical Testbed (VTB) at the Kennedy Space Center, including results and lessons learned. This effort is also described in the context of a technology path in support of future crewed and robotic planetary exploration missions based upon the core sensing functions of the ALHAT system: Terrain Relative Navigation (TRN), Hazard Detection and Avoidance (HDA), and Hazard Relative Navigation (HRN).
Capturing Requirements for Autonomous Spacecraft with Autonomy Requirements Engineering
NASA Astrophysics Data System (ADS)
Vassev, Emil; Hinchey, Mike
2014-08-01
The Autonomy Requirements Engineering (ARE) approach has been developed by Lero - the Irish Software Engineering Research Center within the mandate of a joint project with ESA, the European Space Agency. The approach is intended to help engineers develop missions for unmanned exploration, often with limited or no human control. Such robotics space missions rely on the most recent advances in automation and robotic technologies where autonomy and autonomic computing principles drive the design and implementation of unmanned spacecraft [1]. To tackle the integration and promotion of autonomy in software-intensive systems, ARE combines generic autonomy requirements (GAR) with goal-oriented requirements engineering (GORE). Using this approach, software engineers can determine what autonomic features to develop for a particular system (e.g., a space mission) as well as what artifacts that process might generate (e.g., goals models, requirements specification, etc.). The inputs required by this approach are the mission goals and the domain-specific GAR reflecting specifics of the mission class (e.g., interplanetary missions).
2014-08-15
CAPE CANAVERAL, Fla. – Kennedy Space Center Director and former astronaut Bob Cabana talks to Florida middle school students and their teachers during the Zero Robotics finals competition at the center's Space Station Processing Facility in Florida. Students designed software to control Synchronized Position Hold Engage and Reorient Experimental Satellites, or SPHERES, and competed with other teams locally. Zero Robotics is a robotics programming competition in which the robots are SPHERES. The competition starts online, where teams program the SPHERES to solve an annual challenge. After several phases of virtual competition in a simulation environment that mimics the real SPHERES, finalists are selected to compete in a live championship aboard the space station. Students compete to win a technically challenging game by programming their strategies into the SPHERES satellites. The programs are autonomous and the students cannot control the satellites during the test. Photo credit: NASA/Daniel Casper
Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation
Masmoudi, Mohamed Slim; Masmoudi, Mohamed
2016-01-01
This paper describes the design and implementation of a trajectory tracking controller using fuzzy logic for a mobile robot navigating indoor environments. Most previous works used two independent controllers for navigation and obstacle avoidance; the main contribution of this paper is that a single fuzzy controller handles both navigation and obstacle avoidance. The mobile robot used is equipped with DC motors, nine infrared range (IR) sensors to measure distances to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories are used and simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation time and travelled path. PMID:27688748
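A single fuzzy controller of this kind can be reduced to a toy sketch like the following, where the membership shapes and two-rule base are illustrative assumptions rather than the paper's tuned controller:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(left_dist, right_dist):
    """Two-rule fuzzy obstacle avoidance (distances in metres).

    Rule 1: IF left obstacle is NEAR THEN steer right (+1).
    Rule 2: IF right obstacle is NEAR THEN steer left (-1).
    Output is a steering command in [-1, 1].
    """
    near_left = tri(left_dist, -0.5, 0.0, 1.0)
    near_right = tri(right_dist, -0.5, 0.0, 1.0)
    # Defuzzify by a weighted average of the rule consequents (singletons).
    num = near_left * 1.0 + near_right * (-1.0)
    den = near_left + near_right
    return num / den if den > 0 else 0.0
```

In a combined navigation-and-avoidance controller, a goal-heading term would simply join the rule base with its own membership functions.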
Experimental Verification of Fully Decentralized Control Inspired by Plasmodium of True Slime Mold
NASA Astrophysics Data System (ADS)
Umedachi, Takuya; Takeda, Koichi; Nakagaki, Toshiyuki; Kobayashi, Ryo; Ishiguro, Akio
This paper presents a fully decentralized control scheme inspired by the plasmodium of true slime mold, and validates it using a soft-bodied amoeboid robot. The notable features of this paper are twofold: (1) the robot has a truly soft and deformable body stemming from real-time tunable springs and a balloon, the former utilized as the outer skin of the body and the latter serving as protoplasm; and (2) a fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism is realized by exploiting the long-distance physical interaction between body parts induced by the law of conservation of protoplasmic mass. Experimental results show that this robot exhibits truly supple locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on design schemes for autonomous decentralized control systems.
Reinforcement learning for a biped robot based on a CPG-actor-critic method.
Nakamura, Yutaka; Mori, Takeshi; Sato, Masa-aki; Ishii, Shin
2007-08-01
Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs), which generate oscillatory signals. Motivated by this biological mechanism, studies have been conducted on the rhythmic movements controlled by CPG. As an autonomous learning framework for a CPG controller, we propose in this article a reinforcement learning method we call the "CPG-actor-critic" method. This method introduces a new architecture to the actor, and its training is roughly based on a stochastic policy gradient algorithm presented recently. We apply this method to an automatic acquisition problem of control for a biped robot. Computer simulations show that training of the CPG can be successfully performed by our method, thus allowing the biped robot to not only walk stably but also adapt to environmental changes.
Development of a neuromorphic control system for a lightweight humanoid robot
NASA Astrophysics Data System (ADS)
Folgheraiter, Michele; Keldibek, Amina; Aubakir, Bauyrzhan; Salakchinov, Shyngys; Gini, Giuseppina; Mauro Franchi, Alessio; Bana, Matteo
2017-03-01
A neuromorphic control system for a lightweight middle-size humanoid biped robot built using 3D printing techniques is proposed. The control architecture consists of different modules capable of learning and autonomously reproducing complex periodic trajectories. Each module is represented by a chaotic Recurrent Neural Network (RNN) with a core of dynamic neurons randomly and sparsely connected with fixed synapses. A set of read-out units with adaptable synapses realizes a linear combination of the neurons' output in order to reproduce the target signals. Different experiments were conducted to find the optimal initialization of the RNN's parameters. Simulation results, using normalized signals obtained from the robot model, showed that all instances of the control module can learn and reproduce the target trajectories with an average RMS error of 1.63 and variance of 0.74.
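The module structure (a fixed, sparse, random recurrent core plus trainable read-out synapses) is in the spirit of reservoir computing. The sketch below illustrates the idea under stated assumptions: network size, sparsity, the sinusoidal drive, and the LMS rule for adapting the read-out are all chosen here for illustration, not taken from the paper.

```python
import math
import random

def reservoir_states(n_neurons=50, steps=300, seed=0):
    """Random sparse recurrent tanh network driven by a periodic input.

    Internal synapses are fixed, as in the paper's RNN core; the scaling
    keeps the recurrent dynamics stable.
    """
    rng = random.Random(seed)
    W = [[rng.gauss(0.0, 1.0 / math.sqrt(n_neurons)) if rng.random() < 0.1 else 0.0
          for _ in range(n_neurons)] for _ in range(n_neurons)]
    x = [0.0] * n_neurons
    states = []
    for t in range(steps):
        u = math.sin(2 * math.pi * t / 50)  # periodic drive signal
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(n_neurons)) + u)
             for i in range(n_neurons)]
        states.append(x)
    return states

def lms_readout(states, target, lr=0.01, epochs=20):
    """Adapt the read-out synapses with the LMS rule to reproduce target(t)."""
    n = len(states[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(states, target):
            yhat = sum(wi * xi for wi, xi in zip(w, x))
            err = y - yhat
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w
```

Only the read-out weights `w` are trained; the recurrent matrix `W` stays fixed throughout, mirroring the fixed-synapse core described in the abstract.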
A biologically inspired meta-control navigation system for the Psikharpax rat robot.
Caluwaerts, K; Staffa, M; N'Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M
2012-06-01
A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations, but the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics.
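Learning per-context strategy preferences by reinforcement can be sketched as tabular value learning with epsilon-greedy selection. The context labels ("open", "maze") and strategy names below are hypothetical stand-ins for the paper's contexts and its planning/taxon strategies:

```python
import random

def select_strategy(q, context, strategies, epsilon, rng):
    """Epsilon-greedy choice among navigation strategies for a context."""
    if rng.random() < epsilon:
        return rng.choice(strategies)
    return max(strategies, key=lambda s: q[(context, s)])

def learn_preferences(reward_fn, contexts, strategies,
                      episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular learning of per-context strategy values from trial rewards."""
    rng = random.Random(seed)
    q = {(c, s): 0.0 for c in contexts for s in strategies}
    for _ in range(episodes):
        c = rng.choice(contexts)           # the detected context
        s = select_strategy(q, c, strategies, epsilon, rng)
        r = reward_fn(c, s)                # outcome of running strategy s
        q[(c, s)] += alpha * (r - q[(c, s)])  # incremental value update
    return q
```

Restoring preferences on re-entering a known context then amounts to reusing the stored `q` values for that context instead of relearning from scratch.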
NASA Astrophysics Data System (ADS)
Adamczyk, Peter G.; Gorsich, David J.; Hudas, Greg R.; Overholt, James
2003-09-01
The U.S. Army is seeking to develop autonomous off-road mobile robots to perform tasks in the field such as supply delivery and reconnaissance in dangerous territory. A key problem to be solved with these robots is off-road mobility, to ensure that the robots can accomplish their tasks without loss or damage. We have developed a computer model of one such concept robot, the small-scale "T-1" omnidirectional vehicle (ODV), to study the effects of different control strategies on the robot's mobility in off-road settings. We built the dynamic model in ADAMS/Car and the control system in Matlab/Simulink. This paper presents the template-based method used to construct the ADAMS model of the T-1 ODV. It discusses the strengths and weaknesses of ADAMS/Car software in such an application, and describes the benefits and challenges of the approach as a whole. The paper also addresses effective linking of ADAMS/Car and Matlab for complete control system development. Finally, this paper includes a section describing the extension of the T-1 templates to other similar ODV concepts for rapid development.
Plugin-docking system for autonomous charging using particle filter
NASA Astrophysics Data System (ADS)
Koyasu, Hiroshi; Wada, Masayoshi
2017-03-01
Autonomous charging of the robot battery is one of the key functions for expanding the working areas of robots. To realize it, most existing systems use custom docking stations or artificial markers; in other words, they can charge only at a few specific outlets. If this limitation is removed, the working areas of robots expand significantly. In this paper, we describe a plugin-docking system for autonomous charging that does not require any custom docking stations or artificial markers. A single camera is used to recognize the 3D position of an outlet socket. A particle filter-based image tracking algorithm, which is robust to illumination changes, is applied. The algorithm is implemented on a robot with an omnidirectional moving system. The experimental results show the effectiveness of our system.
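Particle filter-based image tracking of this kind follows a predict-weight-resample cycle. The following is a minimal one-dimensional sketch under assumed Gaussian noise models, tracking a single image coordinate of the socket; it illustrates the generic technique only, not the authors' algorithm.

```python
import math
import random

def particle_filter_step(particles, measurement, motion_noise=2.0, meas_std=5.0):
    """One predict-weight-resample cycle tracking a scalar image coordinate."""
    # Predict: diffuse particles to account for camera/robot motion.
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of each particle given the detection.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2)
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))

particles = [random.uniform(0, 640) for _ in range(500)]   # unknown position
for detection in [320, 322, 318, 321]:   # simulated detections (pixel column)
    particles = particle_filter_step(particles, detection)
estimate = sum(particles) / len(particles)
print(round(estimate))   # clusters near 320
```

Robustness to illumination change in practice comes from the choice of likelihood (e.g. comparing features rather than raw intensities); the Gaussian likelihood here is only a placeholder.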
Navigation system for autonomous mapper robots
NASA Astrophysics Data System (ADS)
Halbach, Marc; Baudoin, Yvan
1993-05-01
This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is taken into account. The ultrasonic sensors are controlled by the path planning module. The third part concerns path planning; a simulation of a wheeled robot is also presented.
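An obstacle/free-space probabilistic classification grid of this kind is commonly maintained in log-odds form, so that fusing repeated noisy ultrasonic readings reduces to addition. The sketch below is a generic illustration under an assumed inverse sensor model (P_HIT is a made-up value), not the paper's specific formulation.

```python
import math

P_HIT = 0.7   # assumed sensor model: P(classified "occupied" | cell occupied)

def logodds(p):
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    """Cells store log-odds of occupancy; Bayesian fusion of repeated
    readings reduces to adding a per-reading log-odds increment."""

    def __init__(self, width, height):
        self.l = [[0.0] * width for _ in range(height)]   # 0.0 => p = 0.5

    def update(self, x, y, occupied):
        delta = logodds(P_HIT) if occupied else logodds(1.0 - P_HIT)
        self.l[y][x] += delta

    def probability(self, x, y):
        return 1.0 - 1.0 / (1.0 + math.exp(self.l[y][x]))

grid = OccupancyGrid(10, 10)
for _ in range(3):
    grid.update(4, 2, occupied=True)    # three "obstacle" echoes at (4, 2)
grid.update(7, 7, occupied=False)       # one "free space" reading at (7, 7)
print(round(grid.probability(4, 2), 3), round(grid.probability(7, 7), 3))
# prints 0.927 0.3
```

Starting every cell at log-odds 0 encodes the uninformed prior p = 0.5, and a priori map knowledge could be injected simply by initializing cells away from zero.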
A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb
Sánchez-Herrera, P.; Balaguer, C.; Jardón, A.
2018-01-01
Robot-mediated neurorehabilitation is a growing field that seeks to incorporate advances in robotics, combined with neuroscience and rehabilitation, to define new methods for treating problems related to neurological diseases. In this paper, a systematic literature review is conducted to identify the contribution of robotics to upper limb neurorehabilitation, highlighting its relation with the rehabilitation cycle, and to clarify prospective research directions in the development of more autonomous rehabilitation processes. With this aim, first, a study and definition of a general rehabilitation process are made, and then it is particularized for the case of neurorehabilitation, identifying the components involved in the cycle and the degree of interaction between them. Next, this generic process is compared with the current literature in robotics focused on upper limb treatment, analyzing which components of the rehabilitation cycle are being investigated. Finally, the challenges and opportunities for obtaining more autonomous rehabilitation processes are discussed. In addition, based on this study, a series of technical requirements that should be taken into account when designing and implementing autonomous robotic systems for rehabilitation is presented and discussed. PMID:29707189
An Astronaut Assistant Rover for Martian Surface Exploration
NASA Astrophysics Data System (ADS)
1999-01-01
Lunar exploration, recent field tests, and even on-orbit operations suggest the need for a robotic assistant for an astronaut during extravehicular activity (EVA) tasks. The focus of this paper is the design of a 300-kg, 2-cubic-meter, semi-autonomous robotic rover to assist astronauts during Mars surface exploration. General uses of this rover include remote teleoperated control, local EVA astronaut control, and autonomous control. Rover size, speed, sample capacity, scientific payload, and dexterous fidelity were based on known Martian environmental parameters, established National Aeronautics and Space Administration (NASA) standards, the NASA Mars Exploration Reference Mission, and lessons learned from lunar and on-orbit sorties. An assumed protocol of a geological, two-astronaut EVA performed during daylight hours with a maximum duration of four hours dictated the following design requirements: (1) autonomously follow the EVA team over astronaut-traversable Martian terrain for four hours; (2) retrieve, catalog, and carry 12 kg of samples; (3) carry tools and minimal in-field scientific equipment; (4) provide contingency life support; (5) compile and store a detailed map of surrounding terrain and estimate current position with respect to base camp; (6) provide supplemental communications systems; and (7) carry and support the use of a 7-degree-of-freedom dexterous manipulator.
Resource allocation and supervisory control architecture for intelligent behavior generation
NASA Astrophysics Data System (ADS)
Shah, Hitesh K.; Bahl, Vikas; Moore, Kevin L.; Flann, Nicholas S.; Martin, Jason
2003-09-01
In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of our research, we presented the use of a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. As the number of available resources on the robot grew, so did the variety of the generated behaviors and the need for parallel execution of multiple behaviors to achieve reactivity. As a continuation of our past efforts, in this paper we discuss the parallel execution of behaviors and the management of utilized resources. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. The controlling agents (called behavior generating agents) generate behaviors to be executed via these services. The agents are prioritized and then, based on their priority and the availability of requested services, the Control Supervisor decides on a winner for the grant of access to services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.
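The supervisor's grant logic can be sketched minimally as follows. This is a hypothetical illustration with made-up agent and service names, not the CSOIS implementation: each behavior-generating agent requests a set of services, and access is granted in priority order only when all requested services are free.

```python
class ControlSupervisor:
    """Priority-based arbitration sketch: behavior-generating agents
    request services (wrapped resources); the highest-priority requester
    whose requested services are all free gets exclusive access."""

    def __init__(self, services):
        self.owner = {s: None for s in services}   # service -> holding agent

    def arbitrate(self, requests):
        """requests: list of (priority, agent, [services]); returns winners."""
        granted = []
        # Higher priority wins; ties keep request order (sort is stable).
        for priority, agent, services in sorted(requests, key=lambda r: -r[0]):
            if all(self.owner[s] is None for s in services):
                for s in services:
                    self.owner[s] = agent          # grant exclusive access
                granted.append(agent)
        return granted

sup = ControlSupervisor(["drive", "camera", "arm"])
requests = [
    (1, "patrol", ["drive", "camera"]),
    (5, "intruder_track", ["camera"]),
    (3, "manipulate", ["arm"]),
]
granted = sup.arbitrate(requests)
print(granted)   # prints ['intruder_track', 'manipulate']
```

Note that the low-priority "patrol" behavior is denied even though "drive" is free, because one of its requested services is already held; serializing access at the service layer is what makes parallel behavior execution safe.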
Towards Autonomous Operation of Robonaut 2
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Hart, Stephen W.; Yamokoski, J. D.
2011-01-01
The Robonaut 2 (R2) platform, as shown in Figure 1, was designed through a collaboration between NASA and General Motors to be a capable robotic assistant with the dexterity similar to a suited astronaut [1]. An R2 robot was sent to the International Space Station (ISS) in February 2011 and, in doing so, became the first humanoid robot in space. Its capabilities are presently being tested and expanded to increase its usefulness to the crew. Current work on R2 includes the addition of a mobility platform to allow the robot to complete tasks (such as cleaning, maintenance, or simple construction activities) both inside and outside of the ISS. To support these new activities, R2's software architecture is being developed to provide efficient ways of programming robust and autonomous behavior. In particular, a multi-tiered software architecture is proposed that combines principles of low-level feedback control with higher-level planners that accomplish behavioral goals at the task level given the run-time context, user constraints, the health of the system, and so on. The proposed architecture is shown in Figure 2. At the lowest-level, the resource level, there exists the various sensory and motor signals available to the system. The sensory signals for a robot such as R2 include multiple channels of force/torque data, joint or Cartesian positions calculated through the robot's proprioception, and signals derived from objects observable by its cameras.
Dynamic multisensor fusion for mobile robot navigation in an indoor environment
NASA Astrophysics Data System (ADS)
Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.
2001-10-01
This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion (sonar, CCD camera, and IR sensors) for map-building mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent books and review papers cover them thoroughly; instead we focus on the main results relevant to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. We first deal with the general principles of the navigation and guidance architecture, then with the detailed functions of environment recognition and updating, obstacle detection, and motion assessment, together with the first results from the simulation runs.
NASA Astrophysics Data System (ADS)
Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi
This paper describes an interactive experimental environment for autonomous soccer robots, which is a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize the positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that virtual objects are touchable in this system owing to projectors. We also show the portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. Our robots autonomously learn and acquire more beneficial strategies without human intervention in our augmented environment than those in a fully simulated environment.
NASA Astrophysics Data System (ADS)
Ionescu, Clara M.; Copot, Cosmin; Verellen, Dirk
2017-03-01
The purpose of this work is to integrate the concept of the patient-in-the-closed-loop application with tumour treatment of cancer-diagnosed patients in remote areas. The generic closed-loop control objective is effective synchronisation of the radiation focus with the movement of a lung tissue tumour during the actual breathing of the patient. This is facilitated by accurate repositioning of a robotic arm manipulator, i.e. we emulate the Cyberknife Robotic Radiosurgery system. Predictive control with a disturbance filter is used in this application in a minimalistic model design. Performance of the control structure is validated by means of simulation using real recorded breathing patterns from patients measured in 3D space. Latency in the communication protocol is taken into account, given that telerobotics involves autonomous operation of a robot interacting with a human being in a different location. Our results suggest that the proposed closed-loop control structure has practical potential to individualise the treatment and to improve accuracy by at least 15%.
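Bridging communication latency for a quasi-periodic breathing signal can be illustrated with a simple autoregressive predictor. This is a generic sketch under stated assumptions (a synthetic sinusoidal trace, illustrative 10 Hz sampling and 0.5 s delay), not the predictive-control-with-disturbance-filter design of the paper.

```python
import math

def ar2_coeffs(signal):
    """Least-squares fit of x[t] ~ a*x[t-1] + b*x[t-2] via normal equations."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(signal)):
        x1, x2, y = signal[t - 1], signal[t - 2], signal[t]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * y; r2 += x2 * y
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

def predict_ahead(history, a, b, steps):
    """Roll the AR(2) model forward to bridge the communication latency."""
    x_prev, x_curr = history[-2], history[-1]
    for _ in range(steps):
        x_prev, x_curr = x_curr, a * x_curr + b * x_prev
    return x_curr

# Synthetic breathing trace: 0.25 Hz sinusoid sampled at 10 Hz.
trace = [math.sin(2 * math.pi * 0.25 * t / 10) for t in range(100)]
a, b = ar2_coeffs(trace)
pred = predict_ahead(trace[:50], a, b, steps=5)   # bridge a 0.5 s delay
actual = trace[54]                                # ground truth 5 steps later
print(abs(pred - actual))   # error is ~0 for a noiseless sinusoid
```

An AR(2) model reproduces a pure sinusoid exactly (a = 2cos(omega), b = -1), which is why the prediction error here is negligible; real breathing patterns are irregular, which is what motivates the disturbance filtering used in the paper.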
Peer-to-peer model for the area coverage and cooperative control of mobile sensor networks
NASA Astrophysics Data System (ADS)
Tan, Jindong; Xi, Ning
2004-09-01
This paper presents a novel model and distributed algorithms for the cooperation and redeployment of mobile sensor networks. A mobile sensor network is composed of a collection of wirelessly connected mobile robots equipped with a variety of sensors. In such a sensor network, each mobile node has sensing, computation, communication, and locomotion capabilities. The locomotion ability enhances the autonomous deployment of the system. The system can be rapidly deployed in hostile environments, inaccessible terrain, or disaster relief operations. The mobile sensor network is essentially a cooperative multiple-robot system. This paper first presents a peer-to-peer model to define the relationship between neighboring communicating robots. Delaunay triangulation and Voronoi diagrams are used to define the geometrical relationship between sensor nodes. This distributed model allows formal analysis of the fusion of the spatio-temporal sensory information of the network. Based on the distributed model, this paper discusses a fault-tolerant algorithm for autonomous self-deployment of the mobile robots. The algorithm considers the environment constraints, the presence of obstacles, and the nonholonomic constraints of the robots. The distributed algorithm enables the system to reconfigure itself such that the area covered by the system can be enlarged. Simulation results have shown the effectiveness of the distributed model and deployment algorithms.
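The Delaunay-based neighbor relation between sensor nodes can be illustrated with a brute-force sketch: a triangle belongs to the Delaunay triangulation exactly when its circumcircle contains no other node, and two nodes are neighbors when they share a triangle edge. This naive O(n^4) construction is for illustration on small examples only; practical systems would use an incremental or distributed algorithm.

```python
import math

def circumcircle(a, b, c):
    """Center and radius of the circle through three points (None if collinear)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def delaunay_neighbors(points):
    """Naive O(n^4) Delaunay neighbors: keep a triangle iff its circumcircle
    contains no other point (points assumed in general position)."""
    n = len(points)
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                cc = circumcircle(points[i], points[j], points[k])
                if cc is None:
                    continue
                ux, uy, r = cc
                if all(math.hypot(px - ux, py - uy) >= r - 1e-9
                       for m, (px, py) in enumerate(points) if m not in (i, j, k)):
                    for u, v in ((i, j), (j, k), (i, k)):
                        neighbors[u].add(v)
                        neighbors[v].add(u)
    return neighbors

nodes = [(0, 0), (2, 0), (1, 2), (3, 2)]
print(delaunay_neighbors(nodes))   # nodes 0 and 3 are not neighbors
```

Restricting communication and coordination to Delaunay neighbors gives each node a small, geometrically meaningful peer set, which is what makes the peer-to-peer model amenable to distributed analysis.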
Autonomous Realtime Threat-Hunting Robot (ARTHR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
INL
2008-05-29
Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats.
NASA Astrophysics Data System (ADS)
Krotkov, Eric; Simmons, Reid; Whittaker, William
1992-02-01
This report describes progress in research on an autonomous robot for planetary exploration performed during 1991 at the Robotics Institute, Carnegie Mellon University. The report summarizes the achievements during calendar year 1991, and lists personnel and publications. In addition, it includes several papers resulting from the research. Research in 1991 focused on understanding the unique capabilities of the Ambler mechanism and on autonomous walking in rough, natural terrain. We also designed a sample acquisition system, and began to configure a successor to the Ambler.
Toward Autonomous Multi-floor Exploration: Ascending Stairway Localization and Modeling
2013-03-01
... robots have traditionally been restricted to single floors of a building or outdoor areas free of abrupt elevation changes such as curbs and stairs ... solution to this problem and is motivated by the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor ... parameters of the stairways, the robot could plan a path that traverses the stairs in order to explore the frontier at other elevations that were previously ...
Automation and robotics technology for intelligent mining systems
NASA Technical Reports Server (NTRS)
Welsh, Jeffrey H.
1989-01-01
The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.
Anderson, Patrick L; Mahoney, Arthur W; Webster, Robert J
2017-07-01
This paper examines shape sensing for a new class of surgical robot that consists of parallel flexible structures that can be reconfigured inside the human body. Known as CRISP robots, these devices provide access to the human body through needle-sized entry points, yet can be configured into truss-like structures capable of dexterous movement and large force application. They can also be reconfigured as needed during a surgical procedure. Since CRISP robots are elastic, they will deform when subjected to external forces or other perturbations. In this paper, we explore how to combine sensor information with mechanics-based models for CRISP robots to estimate their shapes under applied loads. The end result is a shape sensing framework for CRISP robots that will enable future research on control under applied loads, autonomous motion, force sensing, and other robot behaviors.
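Combining a mechanics-based model estimate with a sensor measurement is, in the simplest scalar case, a variance-weighted average. The sketch below illustrates only that general idea, applied to a single hypothetical shape parameter; it is not the CRISP shape sensing framework itself.

```python
def fuse(model_est, model_var, sensor_est, sensor_var):
    """Variance-weighted fusion of two scalar estimates of the same quantity.
    Lower-variance (more trusted) sources pull the result harder."""
    w = sensor_var / (model_var + sensor_var)       # weight on the model
    fused = w * model_est + (1.0 - w) * sensor_est
    fused_var = model_var * sensor_var / (model_var + sensor_var)
    return fused, fused_var

# Hypothetical single shape parameter (e.g. a tube tip deflection in mm):
est, var = fuse(model_est=10.0, model_var=4.0, sensor_est=12.0, sensor_var=1.0)
print(round(est, 3), var)   # prints 11.6 0.8
```

The fused variance is always smaller than either input variance, which is why adding sensor information to an elastic-mechanics model tightens the shape estimate under external loads.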