Cooperative Autonomous Robots for Reconnaissance
2009-03-06
Collaborating mobile robots equipped with WiFi transceivers are configured as a mobile ad-hoc network. Algorithms are developed to take advantage of the distributed processing ...
Long-Term Simultaneous Localization and Mapping in Dynamic Environments
2015-01-01
One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the ... and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory ... distributed stochastic neighbor embedding ...
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. It should also be able to autonomically acquire knowledge about the context in which jobs take place, and how the jobs are executed. This article describes a parallel distributed movable robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function which we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the movable robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. This is how the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed movable robot simulator was developed to simulate the space of a movable robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual movable robot and the virtual movable robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and material, the robot observes and handles the objects to understand them, including their physical characteristics.
Feasibility of Synergy-Based Exoskeleton Robot Control in Hemiplegia.
Hassan, Modar; Kadone, Hideki; Ueno, Tomoyuki; Hada, Yasushi; Sankai, Yoshiyuki; Suzuki, Kenji
2018-06-01
Here, we present a study on exoskeleton robot control based on inter-limb locomotor synergies using a robot control method developed to target hemiparesis. The robot control is based on inter-limb locomotor synergies and kinesiological information from the non-paretic leg and a walking aid cane to generate motion patterns for the assisted leg. The developed synergy-based system was tested against an autonomous robot control system in five patients with hemiparesis and varying locomotor abilities. Three of the participants were able to walk using the robot. Results from these participants showed an improved spatial symmetry ratio and more consistent step length with the synergy-based method compared with that for the autonomous method, while the increase in the range of motion for the assisted joints was larger with the autonomous system. The kinematic synergy distribution of the participants walking without the robot suggests a relationship between each participant's synergy distribution and his/her ability to control the robot: participants with two independent synergies accounting for approximately 80% of the data variability were able to walk with the robot. This observation was not consistently apparent with conventional clinical measures such as the Brunnstrom stages. This paper contributes to the field of robot-assisted locomotion therapy by introducing the concept of inter-limb synergies, demonstrating performance differences between synergy-based and autonomous robot control, and investigating the range of disability in which the system is usable.
Towards Principled Experimental Study of Autonomous Mobile Robots
NASA Technical Reports Server (NTRS)
Gat, Erann
1995-01-01
We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) Two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed. 2) The probability distributions of these performance metrics are exponential rather than normal, and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.
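The exponential-versus-normal claim for the time and distance metrics can be checked on any set of logged trial outcomes by comparing maximum-likelihood fits. The sketch below does this with synthetic completion times and plain NumPy; the data, sample size, and scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

def loglik_exponential(x):
    """MLE exponential fit: rate = 1/mean; returns the log-likelihood at the MLE."""
    lam = 1.0 / x.mean()
    return x.size * np.log(lam) - lam * x.sum()

def loglik_normal(x):
    """MLE normal fit: sample mean and (biased) variance; log-likelihood at the MLE."""
    var = x.var()
    return -0.5 * x.size * (np.log(2 * np.pi * var) + 1.0)

rng = np.random.default_rng(0)
times = rng.exponential(scale=120.0, size=200)   # synthetic task-completion times (s)

print("exponential log-likelihood:", round(loglik_exponential(times), 1))
print("normal      log-likelihood:", round(loglik_normal(times), 1))
# The higher log-likelihood indicates the better-fitting family for this sample.
```

On real trial logs the same comparison indicates which distribution family better describes the robot's measured performance.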
Theseus: tethered distributed robotics (TDR)
NASA Astrophysics Data System (ADS)
Digney, Bruce L.; Penzes, Steven G.
2003-09-01
The Defence Research and Development Canada (DRDC) Autonomous Intelligent Systems program conducts research to increase the independence and effectiveness of military vehicles and systems. DRDC-Suffield's Autonomous Land Systems (ALS) is creating new concept vehicles and autonomous control systems for use in outdoor areas, urban streets, urban interiors and urban subspaces. This paper will first give an overview of the ALS program and then give a specific description of the work being done for mobility in urban subspaces. Discussed will be the Theseus: Tethered Distributed Robotics (TDR) system, which will not only manage an unavoidable tether but exploit it for mobility and navigation. Also discussed will be the prototype robot called the Hedgehog, which uses conformal 3D mobility in ducts, sewer pipes, collapsed rubble voids and chimneys.
A New Simulation Framework for Autonomy in Robotic Missions
NASA Technical Reports Server (NTRS)
Flueckiger, Lorenzo; Neukom, Christian
2003-01-01
Autonomy is a key factor in remote robotic exploration and there is significant activity addressing the application of autonomy to remote robots. It has become increasingly important to have simulation tools available to test the autonomy algorithms. While industrial robotics benefits from a variety of high quality simulation tools, researchers developing autonomous software are still dependent primarily on block-world simulations. The Mission Simulation Facility (MSF) project addresses this shortcoming with a simulation toolkit that will enable developers of autonomous control systems to test their system's performance against a set of integrated, standardized simulations of NASA mission scenarios. MSF provides a distributed architecture that connects the autonomous system to a set of simulated components replacing the robot hardware and its environment.
Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2
2015-03-01
In the past, robot operation has been a high-cognitive ... increase performance and reduce perceived workload. The aids were overlays displaying what an autonomous robot perceived in the environment and the ... subsequent course of action planned by the robot. Eight active-duty US Army Soldiers completed 16 scenario missions using an operator interface ...
Analysis of Unmanned Systems in Military Logistics
2016-12-01
... opportunities to employ unmanned systems to support logistic operations. Subject terms: unmanned systems, robotics, UAVs, UGVs, USVs, UUVs, military ...
Framework and Method for Controlling a Robotic System Using a Distributed Computer Network
NASA Technical Reports Server (NTRS)
Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)
2015-01-01
A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
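A minimal object sketch of the three controller levels described in this record is given below; the class names, the single dictionary standing in for the virtually centralized data, and the fixed joint commands are assumptions for illustration, not the patented framework's interfaces.

```python
class JointController:
    """First level: embedded in one integrated component (e.g., a robotic joint)."""
    def __init__(self, name):
        self.name, self.position = name, 0.0

    def apply(self, command):
        self.position = command                          # drive actuator toward commanded value
        return {"joint": self.name, "position": self.position}   # feedback data at this control point

class Coordinator:
    """Second level: coordinates the components via their embedded controllers."""
    def __init__(self, joints):
        self.joints = joints
        self.blackboard = {}                             # stands in for virtually centralized control/feedback data

    def execute(self, joint_commands):
        for name, cmd in joint_commands.items():
            self.blackboard[name] = self.joints[name].apply(cmd)
        return self.blackboard

class TaskCommander:
    """Third level: sends the high-level autonomous-task command to the coordinator."""
    def __init__(self, coordinator):
        self.coordinator = coordinator

    def perform_task(self, task):
        # A real planner would translate the task into per-joint trajectories;
        # a fixed command set stands in for that step here.
        commands = {name: 0.5 for name in self.coordinator.joints}
        print("task:", task)
        return self.coordinator.execute(commands)

joints = {n: JointController(n) for n in ("shoulder", "elbow", "wrist")}
print(TaskCommander(Coordinator(joints)).perform_task("reach forward"))
```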
From Autonomous Robots to Artificial Ecosystems
NASA Astrophysics Data System (ADS)
Mastrogiovanni, Fulvio; Sgorbissa, Antonio; Zaccaria, Renato
During the past few years, starting from the two mainstream fields of Ambient Intelligence [2] and Robotics [17], several authors recognized the benefits of the so-called Ubiquitous Robotics paradigm. According to this perspective, mobile robots are no longer autonomous, physically situated and embodied entities adapting themselves to a world tailored for humans: on the contrary, they are able to interact with devices distributed throughout the environment and exchange heterogeneous information by means of communication technologies. Information exchange, coupled with simple actuation capabilities, is meant to replace physical interaction between robots and their environment. Two benefits are evident: (i) smart environments overcome inherent limitations of mobile platforms, whereas (ii) mobile robots offer a mobility dimension unknown to smart environments.
NASA Technical Reports Server (NTRS)
Agah, Arvin; Bekey, George A.
1994-01-01
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
Multirobot autonomous landmine detection using distributed multisensor information aggregation
NASA Astrophysics Data System (ADS)
Jumadinova, Janyl; Dasgupta, Prithviraj
2012-06-01
We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and robots are equipped with different types of sensors, while each robot has only one type of landmine detection sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses sensory input of the robot and performs calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine and makes decisions about which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, using our prediction market-based information aggregation technique increases the accuracy of object classification favorably as compared to two other commonly used techniques.
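The aggregation and deployment decision can be pictured with the sketch below, which uses a confidence-weighted mean as a much-simplified stand-in for the paper's prediction-market mechanism; the sensor types, belief values, and decision thresholds are invented for illustration.

```python
import numpy as np

def aggregate_beliefs(beliefs, confidences):
    """Combine per-robot beliefs; a confidence-weighted mean stands in for the market price."""
    b, w = np.asarray(beliefs), np.asarray(confidences)
    return float((b * w).sum() / w.sum())

def decide(aggregate, low=0.3, high=0.8):
    """Decision-maker agent: confirm, dismiss, or deploy more robots to the location."""
    if aggregate >= high:
        return "mark object as landmine"
    if aggregate <= low:
        return "dismiss object"
    return "deploy additional robots with complementary sensors"

# Three robots with different sensor types report beliefs about one buried object.
beliefs     = [0.9, 0.55, 0.4]    # e.g., metal detector, ground-penetrating radar, IR camera
confidences = [0.8, 0.6, 0.3]     # how much each agent trusts its own sensor in this terrain

agg = aggregate_beliefs(beliefs, confidences)
print(f"aggregate belief = {agg:.2f} -> {decide(agg)}")
```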
Autonomous intelligent military robots: Army ants, killer bees, and cybernetic soldiers
NASA Astrophysics Data System (ADS)
Finkelstein, Robert
The rationale for developing autonomous intelligent robots in the military is to render conventional warfare systems ineffective and indefensible. The Desert Storm operation demonstrated the effectiveness of such systems as unmanned air and ground vehicles and indicated the future possibilities of robotic technology. Robotic military vehicles would have the advantages of expendability, low cost, lower complexity compared to manned systems, survivability, maneuverability, and a capability to share in instantaneous communication and distributed processing of combat information. Basic characteristics of intelligent systems and hierarchical control systems with sensor inputs are described. Genetic algorithms are seen as a means of achieving appropriate levels of intelligence in a robotic system. Potential impacts of robotic technology in the military are outlined.
A Behavior-Based Strategy for Single and Multi-Robot Autonomous Exploration
Cepeda, Jesus S.; Chaimowicz, Luiz; Soto, Rogelio; Gordillo, José L.; Alanís-Reyes, Edén A.; Carrillo-Arce, Luis C.
2012-01-01
In this paper, we consider the problem of autonomous exploration of unknown environments with single and multiple robots. This is a challenging task, with several potential applications. We propose a simple yet effective approach that combines a behavior-based navigation with an efficient data structure to store previously visited regions. This allows robots to safely navigate, disperse and efficiently explore the environment. A series of experiments performed using a realistic robotic simulator and a real testbed scenario demonstrate that our technique effectively distributes the robots over the environment and allows them to quickly accomplish their mission in large open spaces, narrow cluttered environments, dead-end corridors, as well as rooms with minimum exits.
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
RoMPS concept review automatic control of space robot, volume 2
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1991-01-01
Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.
Peer-to-peer model for the area coverage and cooperative control of mobile sensor networks
NASA Astrophysics Data System (ADS)
Tan, Jindong; Xi, Ning
2004-09-01
This paper presents a novel model and distributed algorithms for the cooperation and redeployment of mobile sensor networks. A mobile sensor network is composed of a collection of wirelessly connected mobile robots equipped with a variety of sensors. In such a sensor network, each mobile node has sensing, computation, communication, and locomotion capabilities. The locomotion ability enhances the autonomous deployment of the system. The system can be rapidly deployed to hostile environments, inaccessible terrain or disaster relief operations. The mobile sensor network is essentially a cooperative multiple robot system. This paper first presents a peer-to-peer model to define the relationship between neighboring communicating robots. Delaunay Triangulation and Voronoi diagrams are used to define the geometrical relationship between sensor nodes. This distributed model allows formal analysis for the fusion of spatio-temporal sensory information of the network. Based on the distributed model, this paper discusses a fault tolerant algorithm for autonomous self-deployment of the mobile robots. The algorithm considers the environment constraints, the presence of obstacles and the nonholonomic constraints of the robots. The distributed algorithm enables the system to reconfigure itself such that the area covered by the system can be enlarged. Simulation results have shown the effectiveness of the distributed model and deployment algorithms.
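The Delaunay/Voronoi neighbour relation underlying the peer-to-peer model can be computed directly from node positions; a minimal sketch with SciPy is shown below, using made-up planar coordinates for the sensor nodes.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 10.0, size=(12, 2))     # planar positions of 12 mobile sensor nodes

tri = Delaunay(nodes)
neighbors = {i: set() for i in range(len(nodes))}
for simplex in tri.simplices:                    # every triangle edge links two peer nodes
    for a in simplex:
        for b in simplex:
            if a != b:
                neighbors[a].add(int(b))

vor = Voronoi(nodes)                             # each node's Voronoi cell = region it covers
print("Voronoi cells computed for", len(vor.point_region), "nodes")

for i in range(len(nodes)):
    print(f"node {i}: Delaunay neighbours {sorted(neighbors[i])}")
```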
AN OFFSET FOR AFSOF: COMBINING ADDITIVE MANUFACTURING AND AUTONOMOUS SYSTEMS WITH SWARM EMPLOYMENT
2016-10-01
... teams composed of autonomous robot players compete in games of soccer. Strongly coordinated centralized systems are similar to the distributed ... goal in a dynamically changing environment. This is a very active area of research, exemplified by the robot soccer league, a competition where ...
Information Foraging and Change Detection for Automated Science Exploration
NASA Technical Reports Server (NTRS)
Furlong, P. Michael; Dille, Michael
2016-01-01
This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective is to free remote scientists from possibly-infeasible extensive preliminary site investigation prior to sending robotic agents. We simulate a common exploration task for an autonomous robot sampling the environment at various locations and compare performance against simpler control strategies. An extension is proposed and evaluated that further permits operation in the presence of environmental variability in which the robot encounters a change in the distribution underlying sampling targets. Experimental results indicate a strong improvement in performance across varied parameter choices for the scenario.
Autonomous intelligent assembly systems LDRD 105746 final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2013-04-01
This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires an integration of a large number of different software technologies including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although ultimately unsuccessful in achieving a target brick stacking task autonomously, numerous important component technologies were nevertheless developed. Such technologies include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes this project and these developed technologies.
Distributing Planning and Control for Teams of Cooperating Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
2004-07-19
This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of our control approaches for distributed planning and cooperation in multi-robot teams. The primary objectives of this research project were to: (1) Develop autonomous control technologies to enable multiple vehicles to work together cooperatively, (2) Provide the foundational capabilities for a human operator to exercise oversight and guidance during the multi-vehicle task execution, and (3) Integrate these capabilities to the ALLIANCE-based autonomous control approach for multi-robot teams. These objectives have been successfully met with the results implemented and demonstrated in a near real-time multi-vehicle simulation of up to four vehicles performing mission-relevant tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HENSINGER, DAVID M.; JOHNSTON, GABRIEL A.; HINMAN-SWEENEY, ELAINE M.
2002-10-01
A distributed reconfigurable micro-robotic system is a collection of unlimited numbers of distributed small, homogeneous robots designed to autonomously organize and reorganize in order to achieve mission-specified geometric shapes and functions. This project investigated the design, control, and planning issues for self-configuring and self-organizing robots. In the 2D space a system consisting of two robots was prototyped and successfully displayed automatic docking/undocking to operate dependently or independently. Additional modules were constructed to display the usefulness of a self-configuring system in various situations. In 3D a self-reconfiguring robot system of 4 identical modules was built. Each module connects to its neighbors using rotating actuators. An individual component can move in three dimensions on its neighbors. We have also built a self-reconfiguring robot system consisting of a 9-module Crystalline Robot. Each module in this robot is actuated by expansion/contraction. The system is fully distributed, has local communication (to neighbors) capabilities and it has global sensing capabilities.
Integration of Hierarchical Goal Network Planning and Autonomous Path Planning
2016-03-01
Automated planning has ... world robotic systems. This report documents work to integrate a hierarchical goal network planning algorithm with low-level path planning. The system ...
Autonomy in Materials Research: A Case Study in Carbon Nanotube Growth (Postprint)
2016-10-21
... built an Autonomous Research System (ARES), an autonomous research robot capable of first-of-its-kind closed-loop iterative materials experimentation ... ARES exploits advances in autonomous robotics, artificial intelligence, data sciences, and high-throughput and in situ techniques, and is able to ... roles of humans and autonomous research robots, and for human-machine partnering. We believe autonomous research robots like ARES constitute a ...
NASA Astrophysics Data System (ADS)
Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques
2005-06-01
The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.
Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Zornetzer, Steve; Gage, Douglas
2005-01-01
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
Development of autonomous eating mechanism for biomimetic robots
NASA Astrophysics Data System (ADS)
Jeong, Kil-Woong; Cho, Ik-Jin; Lee, Yun-Jung
2005-12-01
Most recently developed robots are human-friendly robots which imitate animals or humans, such as entertainment robots, biomimetic robots and humanoid robots. Interest in these robots is increasing because the social trend is focused on health, welfare, and an aging society. Autonomous eating is one of the most distinctive and inherent behaviors of pets and animals. Most entertainment robots and pet robots use an internal battery, and robots with an internal battery cannot operate while the battery is charging. Therefore, if a robot can autonomously eat batteries as its feed, it is not only able to operate while recharging but also becomes more human-friendly, like a pet. Here, a new autonomous eating mechanism is introduced for a biomimetic robot called ELIRO-II (Eating LIzard RObot version 2). The ELIRO-II is able to find food (a small battery), eat and evacuate by itself. This work describes the sub-parts of the developed mechanism, such as the head, mouth, and stomach parts. In addition, the control system of the autonomous eating mechanism is described.
Mapping planetary caves with an autonomous, heterogeneous robot team
NASA Astrophysics Data System (ADS)
Husain, Ammar; Jones, Heather; Kannan, Balajee; Wong, Uland; Pimentel, Tiago; Tang, Sarah; Daftry, Shreyansh; Huber, Steven; Whittaker, William L.
Caves on other planetary bodies offer sheltered habitat for future human explorers and numerous clues to a planet's past for scientists. While recent orbital imagery provides exciting new details about cave entrances on the Moon and Mars, the interiors of these caves are still unknown and not observable from orbit. Multi-robot teams offer unique solutions for exploration and modeling of subsurface voids during precursor missions. Robot teams that are diverse in terms of size, mobility, sensing, and capability can provide great advantages, but this diversity, coupled with inherently distinct low-level behavior architectures, makes coordination a challenge. This paper presents a framework that consists of an autonomous frontier- and capability-based task generator, a distributed market-based strategy for coordinating and allocating tasks to the different team members, and a communication paradigm for seamless interaction between the different robots in the system. Robots have different sensors (in the representative robot team used for testing: 2D mapping sensors, 3D modeling sensors, or no exteroceptive sensors) and varying levels of mobility. Tasks are generated to explore, model, and take science samples. Based on an individual robot's capability and associated cost for executing a generated task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high resolution offline modeling. The coordination approach has been field tested at a mock cave site with highly unstructured natural terrain, as well as an outdoor patio area. Initial results are promising for the applicability of the proposed multi-robot framework to exploration and modeling of planetary caves.
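The capability- and cost-based selection step can be illustrated with a tiny greedy auction in which each generated task is awarded to the cheapest robot that possesses the required capability; the robot names, capabilities, and travel-distance cost model below are assumptions for the sketch, not details of the fielded system.

```python
from math import dist

robots = {
    "mapper2d":  {"capabilities": {"explore", "map2d"}, "pos": (0.0, 0.0)},
    "modeler3d": {"capabilities": {"map3d"},            "pos": (5.0, 2.0)},
    "sampler":   {"capabilities": {"sample"},           "pos": (1.0, 4.0)},
}

tasks = [
    {"name": "explore frontier A", "needs": "explore", "pos": (3.0, 1.0)},
    {"name": "model alcove B",     "needs": "map3d",   "pos": (6.0, 3.0)},
    {"name": "sample regolith C",  "needs": "sample",  "pos": (2.0, 5.0)},
]

def cost(robot, task):
    """Travel distance stands in for the robot's cost of executing the task."""
    return dist(robots[robot]["pos"], task["pos"])

for task in tasks:
    capable = [r for r, spec in robots.items() if task["needs"] in spec["capabilities"]]
    winner = min(capable, key=lambda r: cost(r, task))   # cheapest capable robot wins the task
    print(f'{task["name"]!r} -> {winner} (cost {cost(winner, task):.2f})')
```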
A Biologically Inspired Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
A Stigmergic Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.
2004-01-01
In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
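A stigmergic rule of this kind, acting only on the locally sensed state of the structure with no inter-agent messages, can be sketched on a grid: here an agent deposits a block whenever the visited cell touches exactly one existing block. The grid size, agent count, and trigger pattern are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.zeros((15, 15), dtype=int)
grid[7, 7] = 1                                   # seed block of the structure

def occupied_neighbors(g, r, c):
    """Count built cells in the 3x3 neighbourhood, excluding the centre cell."""
    window = g[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return int(window.sum()) - g[r, c]

agents = rng.integers(0, 15, size=(5, 2))        # five builders start at random cells
for step in range(2000):
    for k, (r, c) in enumerate(agents):
        # Stigmergic rule: deposit only if the locally sensed structure matches the trigger.
        if grid[r, c] == 0 and occupied_neighbors(grid, r, c) == 1:
            grid[r, c] = 1
        # Random walk on the grid; no agent ever communicates with another.
        agents[k] = (r + rng.integers(-1, 2)) % 15, (c + rng.integers(-1, 2)) % 15

print(f"blocks placed: {int(grid.sum())}")
```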
Coordinating teams of autonomous vehicles: an architectural perspective
NASA Astrophysics Data System (ADS)
Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo
2005-05-01
In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).
Distance-Based Behaviors for Low-Complexity Control in Multiagent Robotics
NASA Astrophysics Data System (ADS)
Pierpaoli, Pietro
Several biological examples show that living organisms cooperate to collectively accomplish tasks impossible for single individuals. More importantly, this coordination is often achieved with a very limited set of information. Inspired by these observations, research on autonomous systems has focused on the development of distributed techniques for the control and guidance of groups of autonomous mobile agents, or robots. From an engineering perspective, when coordination and cooperation are sought in large ensembles of robotic vehicles, a reduction in hardware and algorithm complexity becomes mandatory from the very early stages of the project design. Research into solutions that lower power consumption and cost while increasing reliability is thus worth investigating. In this work, we studied low-complexity techniques to achieve cohesion and control in swarms of autonomous robots. Starting from an inspiring two-agent example, we introduced the effects of neighbors' relative positions on the control of an autonomous agent. The extension of this intuition addressed the control of large ensembles of autonomous vehicles, and was applied in the form of a herding-like technique. To this end, a low-complexity distance-based aggregation protocol was defined. We first showed that our protocol produced cohesive aggregation among the agents while avoiding inter-agent collisions. Then, a feedback leader-follower architecture was introduced for the control of the swarm. We also described how proximity measures and the probability of collisions with neighbors can be used as a source of information in highly populated environments.
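A minimal version of a distance-based aggregation protocol with a leader-follower bias is sketched below: each agent is attracted to neighbours that are farther than a preferred spacing and repelled by closer ones, while one designated leader additionally follows an external velocity command. The gains, spacing, and agent count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(-5.0, 5.0, size=(8, 2))        # eight agents scattered in the plane
leader, d0, gain, dt = 0, 1.5, 0.4, 0.1          # leader index, preferred spacing, control gains

def step(pos, leader_velocity):
    new = pos.copy()
    for i in range(len(pos)):
        v = np.zeros(2)
        for j in range(len(pos)):
            if i == j:
                continue
            offset = pos[j] - pos[i]
            d = max(np.linalg.norm(offset), 1e-6)
            # Attract toward neighbours that are too far, repel from those that are too close.
            v += gain * (d - d0) * offset / d
        if i == leader:
            v += leader_velocity                 # only the leader receives the external command
        new[i] = pos[i] + dt * v
    return new

for _ in range(300):
    pos = step(pos, leader_velocity=np.array([1.0, 0.0]))

spread = max(np.linalg.norm(a - b) for a in pos for b in pos)
print(f"final swarm spread: {spread:.2f} (preferred spacing {d0})")
```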
Controlling multiple security robots in a warehouse environment
NASA Technical Reports Server (NTRS)
Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.
1994-01-01
The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.
Task-level control for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid
1994-01-01
Task-level control refers to the integration and coordination of planning, perception, and real-time control to achieve given high-level goals. Autonomous mobile robots need task-level control to effectively achieve complex tasks in uncertain, dynamic environments. This paper describes the Task Control Architecture (TCA), an implemented system that provides commonly needed constructs for task-level control. Facilities provided by TCA include distributed communication, task decomposition and sequencing, resource management, monitoring and exception handling. TCA supports a design methodology in which robot systems are developed incrementally, starting first with deliberative plans that work in nominal situations, and then layering them with reactive behaviors that monitor plan execution and handle exceptions. To further support this approach, design and analysis tools are under development to provide ways of graphically viewing the system and validating its behavior.
Autonomous robot software development using simple software components
NASA Astrophysics Data System (ADS)
Burke, Thomas M.; Chung, Chan-Jin
2004-10-01
Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.
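In the same spirit of basic image processing plus a little algebra, the sketch below thresholds a single grayscale image row, takes the centroid of the bright lane-marking pixels, and steers proportionally to the centroid's offset from the image centre; the synthetic row, threshold, and gain are placeholders rather than the LTU-AISSIG design.

```python
import numpy as np

def steering_from_row(row, threshold=200, gain=0.01):
    """Threshold one grayscale row, find the lane-marking centroid, steer toward it."""
    columns = np.where(row > threshold)[0]       # bright pixels assumed to be the painted lane line
    if columns.size == 0:
        return 0.0                               # no line visible: hold the current course
    centroid = columns.mean()
    offset = centroid - (row.size - 1) / 2.0     # pixels right (+) or left (-) of image centre
    return gain * offset                         # proportional steering command (rad)

row = np.full(320, 40, dtype=np.uint8)           # synthetic dark road row
row[210:225] = 255                               # bright lane marking to the right of centre
print(f"steer command: {steering_from_row(row):+.3f} rad")
```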
Cooperative crossing of traffic intersections in a distributed robot system
NASA Astrophysics Data System (ADS)
Rausch, Alexander; Oswald, Norbert; Levi, Paul
1995-09-01
In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing of an intersection as a shared resource forces the robot to coordinate its actions with those of other robots e.g. by means of negotiations. We discuss the issue of cooperation on the design of a robot control architecture. Task and sensor specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level control cycles are running in parallel and provide fast reaction on events. Internal cooperation may occur between cycles of the same level. Altogether the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
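The intersection-as-shared-resource idea can be sketched as a simple negotiation in which approaching robots announce their estimated arrival times and the earliest announcement wins the next crossing slot; the message format and tie-break rule below are assumptions, not the protocol used in the paper.

```python
import heapq

def negotiate_crossing(requests):
    """requests: list of (robot_id, arrival_time). Returns the agreed crossing order."""
    # Earliest announced arrival crosses first; the robot id breaks ties deterministically.
    queue = [(t, rid) for rid, t in requests]
    heapq.heapify(queue)
    order = []
    while queue:
        _, rid = heapq.heappop(queue)
        order.append(rid)                        # only one robot holds the intersection at a time
    return order

requests = [("vehicle_B", 12.4), ("vehicle_A", 11.9), ("vehicle_C", 12.4)]
print("crossing order:", negotiate_crossing(requests))
```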
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large scale distributed robot teams derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to timely address the system's perceived need(s). This method improves on initial developments in this area by including often overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
[Mobile autonomous robots-Possibilities and limits].
Maehle, E; Brockmann, W; Walthelm, A
2002-02-01
Besides industrial robots, which today are firmly established in production processes, service robots are becoming more and more important. They shall provide services for humans in different areas of their professional and everyday environment, including medicine. Most of these service robots are mobile, which requires intelligent autonomous behaviour. After characterising the different kinds of robots, the relevant paradigms of intelligent autonomous behaviour for mobile robots are critically discussed in this paper and illustrated by three concrete examples of robots realized in Lübeck. In addition, a short survey of current kinds of surgical robots as well as an outlook on future developments is given.
TARDEC Overview: Ground Vehicle Power and Mobility
2011-02-04
Fuel & Water Distribution • Force Sustainment • Construction Equipment • Bridging • Assured Mobility Systems Robotics • TALON • PackBot • MARCbot...Equipment • Mechanical Countermine Equipment • Tactical Bridging Intelligent Ground Systems • Autonomous Robotics Systems • Safe Operations...Test Cell • Hybrid Electric Reconfigurable Moveable Integration Testbed (HERMIT) • Electro-chemical Analysis and Research Lab (EARL) • Battery Lab • Air
Control of autonomous robot using neural networks
NASA Astrophysics Data System (ADS)
Barton, Adam; Volna, Eva
2017-07-01
The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and the generation and filtering of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify the models of autonomous robot behavior, a set of experiments and evaluation criteria were created. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot found itself.
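How a small controlling network can map sensed obstacle distances to per-motor speeds is sketched below with a single-layer network trained by the delta rule on a handful of hand-labelled situations; the training pairs stand in for the ART1-filtered training set described in the article, and all values are illustrative.

```python
import numpy as np

# Inputs: [left distance, front distance, right distance] normalised to [0, 1].
# Targets: [left motor speed, right motor speed] in [0, 1].
X = np.array([[1.0, 1.0, 1.0],     # clear ahead           -> drive straight
              [1.0, 0.2, 1.0],     # obstacle in front     -> slow down and turn
              [0.2, 1.0, 1.0],     # obstacle on the left  -> veer right
              [1.0, 1.0, 0.2]])    # obstacle on the right -> veer left
Y = np.array([[0.9, 0.9],
              [0.2, 0.6],
              [0.9, 0.4],
              [0.4, 0.9]])

rng = np.random.default_rng(4)
W = rng.normal(scale=0.1, size=(3, 2))
b = np.zeros(2)

for _ in range(5000):                        # delta-rule training of the linear controller
    err = X @ W + b - Y
    W -= 0.05 * X.T @ err / len(X)
    b -= 0.05 * err.mean(axis=0)

left, front, right = 0.9, 0.3, 1.0           # a new situation sensed by the robot
speeds = np.clip(np.array([left, front, right]) @ W + b, 0.0, 1.0)
print("motor speeds (left, right):", np.round(speeds, 2))
```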
DEMONSTRATION OF AUTONOMOUS AIR MONITORING THROUGH ROBOTICS
This project included modifying an existing teleoperated robot to include autonomous navigation, large object avoidance, and air monitoring and demonstrating that prototype robot system in indoor and outdoor environments. An existing teleoperated "Surveyor" robot developed by ARD...
JOMAR: Joint Operations with Mobile Autonomous Robots
2015-12-21
JOMAR: Joint Operations with Mobile Autonomous Robots. Edwin Olson, University of Michigan, final report (AFRL-AFOSR-JP-TR-2015-0009), 12/21/2015. Under this grant, we formulated and implemented a variety of novel algorithms that address core problems in multi-robot systems. These ...
Material handling robot system for flow-through storage applications
NASA Astrophysics Data System (ADS)
Dill, James F.; Candiloro, Brian; Downer, James; Wiesman, Richard; Fallin, Larry; Smith, Ron
1999-01-01
This paper describes the design, development and planned implementation of a system of mobile robots for use in flow through storage applications. The robots are being designed with on-board embedded controls so that they can perform their tasks as semi-autonomous workers distributed within a centrally controlled network. On the storage input side, boxes will be identified by bar-codes and placed into preassigned flow through bins. On the shipping side, orders will be forwarded to the robots from a central order processing station and boxes will be picked from designated storage bins following proper sequencing to permit direct loading into trucks for shipping. Because of the need to maintain high system availability, a distributed control strategy has been selected. When completed, the system will permit robots to be dynamically reassigned responsibilities if an individual unit fails. On-board health diagnostics and condition monitoring will be used to maintain high reliability of the units.
Sample Return Robot Centennial Challenge
2012-06-16
A judge for the NASA-WPI Sample Return Robot Centennial Challenge follows a robot on the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie
2014-12-01
Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous robotic control of rescue robots in disaster environments by allowing a human operator to cooperate and share such tasks with a rescue robot as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A direction-based exploration technique is integrated in the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes with different sizes and varying types of configurations.
Mobile Robot Designed with Autonomous Navigation System
NASA Astrophysics Data System (ADS)
An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin
2017-10-01
With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place ever more requirements on them; one requirement is that a robot be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the floor and automatically find its charging station; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a new type of robot navigation scheme: SLAM, which can build a map in a totally unfamiliar environment and, at the same time, locate the robot's own position, so as to achieve autonomous navigation.
1992-10-29
These people try to make their robotic vehicle as intelligent and autonomous as possible with the current state of technology. The robot only interacts ... The ability of an operator to drive a remotely piloted vehicle depends ... RESUPPLY: a system which can rapidly and autonomously load and unload palletized ammunition. AUTONOMOUS COMBAT EVACUATION VEHICLE: robotic arms ...
2017-06-01
A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging, by Jake A. Jones (Master's thesis, June 2017). Developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles. A new technique is developed and ...
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Kirchner, Frank; Spenneberg, Dirk; Starman, Jared; Hanratty, James; Kovsmeyer, David (Technical Monitor)
2003-01-01
NASA needs autonomous robotic exploration of difficult (rough and/or steep) scientifically interesting Martian terrains. Concepts involving distributed autonomy for cooperative robotic exploration are key to enabling new scientific objectives in robotic missions. We propose to utilize a legged robot as an adjunct scout to a rover for access to difficult - scientifically interesting - terrains (rocky areas, slopes, cliffs). Our final mission scenario involves the Ames rover platform "K9" and Scorpion acting together to explore a steep cliff, with the Scorpion robot rappelling down using the K9 as an anchor as well as mission planner and executive. Cooperation concepts, including wheeled rappelling robots have been proposed before. Now we propose to test the combined advantages of a wheeled vehicle with a legged scout as well as the advantages of merging of high level planning and execution with biologically inspired, behavior based robotics. We propose to use the 8-legged, multifunctional autonomous robot platform Scorpion that is currently capable of: Walking on different terrains (rocks, sand, grass, ...). Perceiving its environment and modifying its behavioral pattern accordingly. These capabilities would be extended to enable the Scorpion to: communicate and cooperate with a partner robot; climb over rocks, rubble piles, and objects with structural features. This will be done in the context of exploration of rough terrains in the neighborhood of the rover, but inaccessible to it, culminating in the added capability of rappelling down a steep cliff for both vertical and horizontal terrain observation.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.
Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning
2018-03-16
A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in the application environment without assistant orientation, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external assistant nodes, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and makes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, tests verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications and can be extended to other familiar multi-link robots.
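The core idea of feeding a gait-derived motion constraint back into the filter can be shown with a small Kalman filter sketch: the state is the along-track and cross-track velocity, noisy IMU accelerations drive the prediction, and a zero cross-track-velocity pseudo-measurement is applied at every step. The noise values and the reduction to a two-state linear filter are simplifying assumptions, not the paper's full EKF.

```python
import numpy as np

dt = 0.02
F = np.eye(2)                                  # velocity state: [along-track, cross-track]
Q = np.diag([1e-3, 1e-3])                      # process noise from integrating noisy IMU accel
H = np.array([[0.0, 1.0]])                     # constraint "measurement": cross-track velocity
R = np.array([[1e-4]])                         # how strictly the motion constraint is trusted

v = np.zeros(2)                                # state estimate
P = np.eye(2) * 0.1                            # estimate covariance

rng = np.random.default_rng(5)
for k in range(500):
    accel = np.array([0.3, 0.0]) + rng.normal(scale=0.2, size=2)   # noisy body-frame acceleration
    # Prediction: integrate the IMU acceleration.
    v = F @ v + accel * dt
    P = F @ P @ F.T + Q
    # Constraint update: the gait implies (ideally) zero cross-track slip.
    z = np.array([0.0])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    v = v + K @ (z - H @ v)
    P = (np.eye(2) - K @ H) @ P

print("estimated velocity [along, cross]:", np.round(v, 3))
```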
Tele-assistance for semi-autonomous robots
NASA Technical Reports Server (NTRS)
Rogers, Erika; Murphy, Robin R.
1994-01-01
This paper describes a new approach in semi-autonomous mobile robots. In this approach the robot has sufficient computerized intelligence to function autonomously under a certain set of conditions, while the local system is a cooperative decision making unit that combines human and machine intelligence. Communication is then allowed to take place in a common mode and in a common language. A number of exception-handling scenarios that were constructed as a result of experiments with actual sensor data collected from two mobile robots were presented.
Mamdani Fuzzy System for Indoor Autonomous Mobile Robot
NASA Astrophysics Data System (ADS)
Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.
2011-06-01
Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
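A compact Mamdani controller for one input (front obstacle distance) and one output (forward speed) is sketched below with triangular membership functions, min implication, max aggregation, and centroid defuzzification; the membership breakpoints and rules are illustrative, not those of the cited controller.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

speed_axis = np.linspace(0.0, 1.0, 201)          # output universe: normalised forward speed

def mamdani_speed(distance_m):
    # Fuzzify the input (front obstacle distance in metres).
    near   = tri(distance_m, -1.0, 0.0, 1.0)
    medium = tri(distance_m,  0.5, 1.5, 2.5)
    far    = tri(distance_m,  2.0, 4.0, 6.0)
    # Mamdani rules with min implication: near -> slow, medium -> cruise, far -> fast.
    slow   = np.minimum(near,   tri(speed_axis, -0.4, 0.0, 0.4))
    cruise = np.minimum(medium, tri(speed_axis,  0.2, 0.5, 0.8))
    fast   = np.minimum(far,    tri(speed_axis,  0.6, 1.0, 1.4))
    aggregated = np.maximum.reduce([slow, cruise, fast])     # max aggregation of all rule outputs
    if aggregated.sum() == 0.0:
        return 0.0
    return float((speed_axis * aggregated).sum() / aggregated.sum())  # centroid defuzzification

for d in (0.3, 1.5, 3.5):
    print(f"front distance {d:.1f} m -> commanded speed {mamdani_speed(d):.2f}")
```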
Sample Return Robot Centennial Challenge
2012-06-15
University of Waterloo (Canada) Robotics Team members test their robot on the practice field one day prior to the NASA-WPI Sample Return Robot Centennial Challenge, Friday, June 15, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-14
A University of Waterloo Robotics Team member tests their robot on the practice field two days prior to the NASA-WPI Sample Return Robot Centennial Challenge, Thursday, June 14, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Autonomous surgical robotics using 3-D ultrasound guidance: feasibility study.
Whitman, John; Fronheiser, Matthew P; Ivancevich, Nikolas M; Smith, Stephen W
2007-10-01
The goal of this study was to test the feasibility of using a real-time 3D (RT3D) ultrasound scanner with a transthoracic matrix array transducer probe to guide an autonomous surgical robot. Employing a fiducial alignment mark on the transducer to orient the robot's frame of reference and using simple thresholding algorithms to segment the 3D images, we tested the accuracy of using the scanner to automatically direct a robot arm that touched two needle tips together within a water tank. RMS measurement error was 3.8% or 1.58 mm for an average path length of 41 mm. Using these same techniques, the autonomous robot also performed simulated needle biopsies of a cyst-like lesion in a tissue phantom. This feasibility study shows the potential for 3D ultrasound guidance of an autonomous surgical robot for simple interventional tasks, including lesion biopsy and foreign body removal.
Sample Return Robot Centennial Challenge
2012-06-15
Intrepid Systems robot, foreground, and the University of Waterloo (Canada) robot, take to the practice field on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Robot teams will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Path planning in GPS-denied environments via collective intelligence of distributed sensor networks
NASA Astrophysics Data System (ADS)
Jha, Devesh K.; Chattopadhyay, Pritthi; Sarkar, Soumik; Ray, Asok
2016-05-01
This paper proposes a framework for reactive goal-directed navigation without global positioning facilities in unknown dynamic environments. A mobile sensor network is used for localising regions of interest for path planning of an autonomous mobile robot. The underlying theory is an extension of a generalised gossip algorithm that has been recently developed in a language-measure-theoretic setting. The algorithm has been used to propagate local decisions of target detection over a mobile sensor network and thus, it generates a belief map for the detected target over the network. In this setting, an autonomous mobile robot may communicate only with a few mobile sensing nodes in its own neighbourhood and localise itself relative to the communicating nodes with bounded uncertainties. The robot makes use of the knowledge based on the belief of the mobile sensors to generate a sequence of way-points, leading to a possible goal. The estimated way-points are used by a sampling-based motion planning algorithm to generate feasible trajectories for the robot. The proposed concept has been validated by numerical simulation on a mobile sensor network test-bed and a Dubin's car-like robot.
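The language-measure-theoretic gossip algorithm itself is not reproduced here, but the underlying idea of propagating local detection decisions into a network-wide belief map can be sketched with a plain neighbour-averaging gossip update; the network, mixing weights and detection scores below are illustrative assumptions.

```python
# Sketch: spread local target-detection scores over a sensor network by repeated
# neighbour averaging (a plain gossip/consensus update, not the paper's algorithm).
import numpy as np

# Illustrative 6-node network; adjacency[i, j] = 1 if nodes i and j can communicate.
adjacency = np.array([
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
], dtype=float)

belief = np.array([0.0, 0.0, 0.9, 0.8, 0.0, 0.0])   # nodes 2 and 3 detect the target

for _ in range(50):
    neighbour_avg = adjacency @ belief / np.maximum(adjacency.sum(axis=1), 1.0)
    belief = 0.5 * belief + 0.5 * neighbour_avg      # mix own belief with neighbours'

print(np.round(belief, 3))   # belief map a robot could climb toward high values
```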
Immobile Robots: AI in the New Millennium
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Nayak, P. Pandurang
1996-01-01
A new generation of sensor rich, massively distributed, autonomous systems are being developed that have the potential for profound social, environmental, and economic change. These include networked building energy systems, autonomous space probes, chemical plant control systems, satellite constellations for remote ecosystem monitoring, power grids, biosphere-like life support systems, and reconfigurable traffic systems, to highlight but a few. To achieve high performance, these immobile robots (or immobots) will need to develop sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions. To accomplish this, immobots will exploit a vast nervous system of sensors to model themselves and their environment on a grand scale. They will use these models to dramatically reconfigure themselves in order to survive decades of autonomous operations. Achieving these large scale modeling and configuration tasks will require a tight coupling between the higher level coordination function provided by symbolic reasoning, and the lower level autonomic processes of adaptive estimation and control. To be economically viable they will need to be programmable purely through high level compositional models. Self modeling and self configuration, coordinating autonomic functions through symbolic reasoning, and compositional, model-based programming are the three key elements of a model-based autonomous systems architecture that is taking us into the New Millennium.
Distributed Planning and Control for Teams of Cooperating Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
2004-06-15
This CRADA project involved the cooperative research of investigators in ORNL's Center for Engineering Science Advanced Research (CESAR) with researchers at Caterpillar, Inc. The subject of the research was the development of cooperative control strategies for autonomous vehicles performing applications of interest to Caterpillar customers. The project involved three Phases of research, conducted over the time period of November 1998 through December 2001. This project led to the successful development of several technologies and demonstrations in realistic simulation that illustrated the effectiveness of the control approaches for distributed planning and cooperation in multi-robot teams.
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver, left, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver, right, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Neuromodulation as a Robot Controller: A Brain Inspired Strategy for Controlling Autonomous Robots
2009-09-01
Preprint, to appear in IEEE Robotics and Automation Magazine. We present a strategy for controlling autonomous robots that is based on principles of neuromodulation in the mammalian brain... object, ignore irrelevant distractions, and respond quickly and appropriately to the event [1]. There are separate neuromodulators that alter responses to...
Sample Return Robot Centennial Challenge
2012-06-15
Intrepid Systems robot "MXR - Mark's Exploration Robot" takes to the practice field and tries to capture the white object in the foreground on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Intrepid Systems' robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
Children visiting the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event try to catch basketballs being thrown by a robot from FIRST Robotics at Burncoat High School (Mass.) on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Spatial abstraction for autonomous robot navigation.
Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon
2015-09-01
Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.
Micro-Power Sources Enabling Robotic Outpost Based Deep Space Exploration
NASA Technical Reports Server (NTRS)
West, W. C.; Whitacre, J. F.; Ratnakumar, B. V.; Brandon, E. J.; Studor, G. F.
2001-01-01
Robotic outpost based exploration represents a fundamental shift in mission design from conventional, single spacecraft missions towards a distributed risk approach with many miniaturized semi-autonomous robots and sensors. This approach can facilitate wide-area sampling and exploration, and may consist of a web of orbiters, landers, or penetrators. To meet the mass and volume constraints of deep space missions such as the Europa Ocean Science Station, the distributed units must be fully miniaturized to fully leverage the wide-area exploration approach. However, presently there is a dearth of available options for powering these miniaturized sensors and robots. This group is currently examining miniaturized, solid state batteries as candidates to meet the demand of applications requiring low power, mass, and volume micro-power sources. These applications may include powering microsensors, battery-backing rad-hard CMOS memory and providing momentary chip back-up power. Additional information is contained in the original extended abstract.
Sample Return Robot Centennial Challenge
2012-06-16
"Harry" a Goldendoodle is seen wearing a NASA backpack during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
Team members of "Survey" drive their robot around the campus on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Survey team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-15
Wunderkammer Laboratory Team leader Jim Rothrock, left, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "Cerberus" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Rothrock's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
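One of the coverage strategies named above, the outward spiral, reduces to a simple waypoint generator. The sketch below is an illustrative Archimedean-spiral version with assumed spacing and step parameters, not iRobot's implementation.

```python
# Sketch: generate waypoints for an outward (Archimedean) spiral coverage pattern.
# Pass spacing, origin, and step size are illustrative, not LABRADOR's parameters.
import math

def spiral_waypoints(origin_xy, pass_spacing_m, n_points, step_rad=0.3):
    """Return a list of (x, y) waypoints spiralling outward from origin_xy."""
    ox, oy = origin_xy
    waypoints = []
    for k in range(n_points):
        theta = k * step_rad
        r = pass_spacing_m * theta / (2.0 * math.pi)   # radius grows one spacing per turn
        waypoints.append((ox + r * math.cos(theta), oy + r * math.sin(theta)))
    return waypoints

for wp in spiral_waypoints((0.0, 0.0), pass_spacing_m=2.0, n_points=8):
    print(f"({wp[0]:6.2f}, {wp[1]:6.2f})")
```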
Sample Return Robot Centennial Challenge
2012-06-16
Intrepid Systems Team member Mark Curry, left, talks with NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck, right, about his robot named "MXR - Mark's Exploration Robot" on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-15
Intrepid Systems Team member Mark Curry, right, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "MXR - Mark's Exploration Robot" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles
NASA Technical Reports Server (NTRS)
Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.
2001-01-01
The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.
NASA Astrophysics Data System (ADS)
Shatravin, V.; Shashev, D. V.
2018-05-01
Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Results of research worldwide demonstrate the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robot. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for implementing complex image-processing algorithms and real-time image analysis in autonomous robotic devices.
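The grayscale morphological operations mentioned above are easy to sketch on a conventional CPU; the paper's mapping onto reconfigurable computing environments is not reproduced here. The kernel size and test image are illustrative.

```python
# Sketch: grayscale erosion and dilation with a square structuring element,
# implemented naively with NumPy (no reconfigurable-hardware mapping shown).
import numpy as np

def gray_dilate(img, k=3):
    """Grayscale dilation: each pixel becomes the max over a k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].max()
    return out

def gray_erode(img, k=3):
    """Grayscale erosion: each pixel becomes the min over a k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out

img = np.random.default_rng(1).integers(0, 256, size=(8, 8)).astype(np.uint8)
print((gray_dilate(img) >= img).all())   # dilation never darkens a pixel
print((gray_erode(img) <= img).all())    # erosion never brightens a pixel
```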
Women Warriors: Why the Robotics Revolution Changes the Combat Equation
2016-03-01
By Linell A. Letendre (PRISM 6, no. 1). ...underappreciated factor is poised to alter the women in combat debate: the revolution in robotics and autonomous systems. The technology leap afforded by... developing robotic and autonomous systems and their potential impact on the future of combat. Revolution in Robotics: A Changing Battlefield...
Sample Return Robot Centennial Challenge
2012-06-16
Posters for the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event are seen posted around the campus on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
Panoramic of some of the exhibits available on the campus of the Worcester Polytechnic Institute (WPI) during their "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Anthony Shrout)
micROS: a morphable, intelligent and collective robot operating system.
Yang, Xuejun; Dai, Huadong; Yi, Xiaodong; Wang, Yanzhen; Yang, Shaowu; Zhang, Bo; Wang, Zhiyuan; Zhou, Yun; Peng, Xuefeng
2016-01-01
Robots are developing in much the same way that personal computers did 40 years ago, and the robot operating system is the critical foundation. Current robot software is mainly designed for individual robots. We present in this paper the design of micROS, a morphable, intelligent and collective robot operating system for future collective and collaborative robots. We first present the architecture of micROS, including the distributed architecture for the collective robot system as a whole and the layered architecture for every single node. We then present the design of autonomous behavior management based on the observe-orient-decide-act cognitive behavior model, and the design of collective intelligence, including collective perception, collective cognition, collective game and collective dynamics. We also present the design of morphable resource management, which first categorizes robot resources into physical, information, cognitive and social domains, and then achieves morphability based on self-adaptive software technology. We finally deploy micROS on NuBot football robots and achieve significant improvement in real-time performance.
Research on Self-Reconfigurable Modular Robot System
NASA Astrophysics Data System (ADS)
Kamimura, Akiya; Murata, Satoshi; Yoshida, Eiichi; Kurokawa, Haruhisa; Tomita, Kohji; Kokaji, Shigeru
The growing complexity of artificial systems raises reliability and flexibility issues in large-system design. Robots are no exception, and many attempts have been made to realize reliable and flexible robot systems. Distributed modular composition of a robot is one of the most effective approaches to attain such abilities, and it has the potential to adapt to its surroundings by changing its configuration autonomously according to information about the surroundings. In this paper, we propose a novel three-dimensional self-reconfigurable robotic module. Each module has a very simple structure consisting of two semi-cylindrical parts connected by a link. The modular system is capable not only of building static structures but also of generating dynamic robotic motion. We present details of the mechanical/electrical design of the developed module and its control-system architecture. Experiments using ten modules with centralized control demonstrate robotic configuration change, crawling locomotion and three types of quadruped locomotion.
Ascending Stairway Modeling: A First Step Toward Autonomous Multi-Floor Exploration
2012-10-01
Many robotics platforms are capable of ascending stairways, but all existing approaches for autonomous stair climbing use stairway detection as a... the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor building. Our proposed solution to this problem is... over several steps. However, many ground robots are not capable of traversing tight spiral stairs, and so we do not focus on these types. The stairway is...
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU
Dou, Lihua; Su, Zhong; Liu, Ning
2018-01-01
A snake robot is a highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the problem of a snake robot localizing itself in its application environment without external orientation aids, an autonomous navigation method is proposed based on the snake robot's motion-characteristic constraints. The method achieves autonomous navigation of the snake robot without beacon nodes or external assistance, using only its own Micro-Electromechanical Systems (MEMS) Inertial Measurement Unit (IMU). First, the snake robot's motion characteristics are studied, a kinematics model is built, and the motion constraint characteristics and motion error propagation properties are analysed. Second, the snake robot's navigation layout is explored, a constraint criterion and a fixed relationship are proposed, and zero-state constraints are applied based on the motion features and control modes of the snake robot. Finally, autonomous navigation and positioning are realized with an Extended Kalman Filter (EKF) position estimation method operating under these motion-characteristic constraints. Tests with the self-developed snake robot verify the proposed method: the position error is less than 5% of the total traveled distance. Over short distances, the method meets the requirements for a snake robot to perform autonomous navigation and positioning in typical applications, and it can be extended to other similar multi-link robots. PMID:29547515
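The published filter is tied to the snake robot's specific kinematics, but its central idea, injecting a zero-velocity pseudo-measurement into the EKF whenever a link is known from the gait's control mode to be stationary, can be sketched in one dimension. The state layout, noise levels and gait timing below are illustrative assumptions.

```python
# Sketch: IMU dead reckoning with a Kalman filter plus a zero-velocity
# pseudo-measurement (ZUPT-style constraint). One axis only; values illustrative.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])        # state = [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])          # acceleration input matrix
Q = np.diag([1e-6, 1e-4])                    # process noise
H = np.array([[0.0, 1.0]])                   # we "measure" velocity = 0 when static
R = np.array([[1e-4]])

x = np.zeros((2, 1))
P = np.eye(2) * 1e-3

def step(x, P, accel, link_is_static):
    # Predict with the (noisy, biased) accelerometer reading.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    if link_is_static:
        # Zero-velocity update: the motion-mode constraint limits integration drift.
        y = 0.0 - (H @ x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for k in range(1000):
    accel = 0.02 + rng.normal(0, 0.1)        # bias + noise; the link is actually at rest
    x, P = step(x, P, accel, link_is_static=(k % 20 < 10))   # gait phase from control mode
print(float(x[0, 0]), float(x[1, 0]))        # the ZUPT keeps velocity drift from growing
```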
Planning Flight Paths of Autonomous Aerobots
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Sharma, Shivanjli
2009-01-01
Algorithms for planning flight paths of autonomous aerobots (robotic blimps) to be deployed in scientific exploration of remote planets are undergoing development. These algorithms are also adaptable to terrestrial applications involving robotic submarines as well as aerobots and other autonomous aircraft used to acquire scientific data or to perform surveying or monitoring functions.
A task control architecture for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid; Mitchell, Tom
1990-01-01
An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.
NASA Astrophysics Data System (ADS)
Schubert, Oliver J.; Tolle, Charles R.
2004-09-01
Over the last decade the world has seen numerous autonomous vehicle programs. Wheel and track designs are the basis for many of these vehicles. This is primarily due to four main reasons: a vast preexisting knowledge base for these designs, the energy efficiency of power sources, the scalability of actuators, and the lack of control systems technologies for handling alternate, highly complex distributed systems. Though large efforts seek to improve the mobility of these vehicles, many limitations still exist for these systems within unstructured environments, e.g. limited mobility within industrial and nuclear accident sites where existing plant configurations have been extensively changed. These unstructured operational environments include missions for exploration, reconnaissance, and emergency recovery of objects within reconfigured or collapsed structures, e.g. bombed buildings. More importantly, these environments present a clear and present danger for direct human interaction during the initial phases of recovery operations. Clearly, the current classes of autonomous vehicles are incapable of performing in these environments. Thus the next generation of designs must include highly reconfigurable and flexible autonomous robotic platforms. This new breed of autonomous vehicles will be both highly flexible and environmentally adaptable. Presented in this paper is one of the most successful designs from nature, the snake-eel-worm (SEW). This design implements shape memory alloy (SMA) actuators, which allow the robotic SEW design to scale from the sub-micron level to heavy industrial implementations without the major conceptual redesigns required in traditional hydraulic, pneumatic, or motor-driven systems. Autonomous vehicles based on the SEW design possess the ability to move easily between air-based and fluid-based environments with limited or no reconfiguration. A SEW-designed vehicle not only achieves vastly improved maneuverability within a highly unstructured environment, but also gains robotic manipulation abilities, normally relegated to secondary add-ons on existing vehicles, all within one small, condensed package. The prototype design presented includes a Beowulf-style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at DOE's INEEL.
Sample Return Robot Centennial Challenge
2012-06-16
Visitors, some with their dogs, line up to have their photos taken inside a space suit exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
2016-01-01
...satisfying journeys in my life. I would like to thank Ryan for his guidance through the truly exciting world of mobile robotics and robotic perception. ... Multi-session and Multi-robot SLAM ... 1.3.3 Robust Techniques for SLAM Backends ... Chapter 1 Introduction, 1.1 The Importance of SLAM in Autonomous Robotics: Autonomous mobile robots are becoming a promising aid in a wide...
Sample Return Robot Centennial Challenge
2012-06-16
The bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" is seen wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
How to make an autonomous robot as a partner with humans: design approach versus emergent approach.
Fujita, M
2007-01-15
In this paper, we discuss what factors are important to realize an autonomous robot as a partner with humans. We believe that it is important to interact with people without boring them, using verbal and non-verbal communication channels. We have already developed autonomous robots such as AIBO and QRIO, whose behaviours are manually programmed and designed. We realized, however, that this design approach has limitations; therefore we propose a new approach, intelligence dynamics, where interacting in a real-world environment using embodiment is considered very important. There are pioneering works related to this approach from brain science, cognitive science, robotics and artificial intelligence. We assert that it is important to study the emergence of entire sets of autonomous behaviours and present our approach towards this goal.
Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
Sample Return Robot Centennial Challenge
2012-06-15
SpacePRIDE Team members Chris Williamson, right, and Rob Moore, second from right, answer questions from 8th grade Sullivan Middle School (Mass.) students about their robot on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. SpacePRIDE's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
Fall 2014 SEI Research Review Edge-Enabled Tactical Systems (EETS)
2014-10-29
Effective communication and reasoning despite connectivity issues • More generally, how to make programming distributed algorithms with extensible... distributed collaboration in VREP simulations for 5-12 quadcopters and ground robots • Open-source middleware and algorithms released to the community... Integration into CMU Drone-RK quadcopter and Platypus autonomous boat platforms • Presentations at DARPA (CODE), AFRL C4I Workshop, and AFRL Eglin
Equipment Proposal for the Autonomous Vehicle Systems Laboratory at UIW
2015-04-29
testing, 5) 38 Lego Mindstorm EV3 and Hitechnic Sensors for use in feedback control and autonomous systems for STEM undergraduate and High School...autonomous robots using the Lego Mindstorm EV3. This robotics workshop will be used as a pilot study for next summer when more High School students
Autonomous Robotic Weapons: US Army Innovation for Ground Combat in the Twenty-First Century
2015-05-21
2013, accessed March 29, 2015, http://www.bbc.com/news/magazine-21576376?print=true. 113 Steven Kotler, “Say Hello to Comrade Terminator: Russia’s... hello -to-comrade-terminator-russias-army-of- killer-robots/. 114 David Hambling, “Russia Wants Autonomous Fighting Robots, and Lots of Them: Putin’s...how-humans-respond-to- robots-knight/HumanRobot-PartnershipsR2.pdf?la=en. Kotler, Steven. “Say Hello to Comrade Terminator: Russia’s Army of
Sample Return Robot Centennial Challenge
2012-06-16
A visitor to the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event helps demonstrate how a NASA rover design enables the rover to climb over obstacles higher than its own body on Saturday, June 16, 2012 at WPI in Worcester, Mass. The event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
NASA Technical Reports Server (NTRS)
Sandy, Michael
2015-01-01
The Regolith Advanced Surface Systems Operations Robot (RASSOR) Phase 2 is an excavation robot for mining regolith on a planet like Mars. The robot is programmed using the Robot Operating System (ROS), and it also uses a physical simulation program called Gazebo. This internship focused on various functions of the program in order to make it a more professional and efficient robot. The internship also included work on another project, the Smart Autonomous Sand-Swimming Excavator, a robot designed to dig through sand and extract sample material. The intern worked on programming the sand-swimming robot and designing the electrical system to power and control it.
Wei, Kun; Ren, Bingyin
2018-02-13
In a future intelligent factory, a robotic manipulator must work efficiently and safely in a human-robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue that must be resolved first in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm based on random sampling has been widely applied in dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, good expansion, and faster exploration compared with other planning methods. However, the existing RRT algorithm has limitations in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle-avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. The method extends nodes toward a target direction, which dramatically increases the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum curvature constraint is presented to generate a smooth, curvature-continuous, executable path for the robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation, a Robot Operating System (ROS) dynamic simulation environment, and a real autonomous obstacle-avoidance experiment with a robotic manipulator in a dynamic unstructured environment. The proposed method not only has practical engineering significance for a robotic manipulator's obstacle avoidance in an intelligent factory, but also offers theoretical reference value for path planning of other types of robots.
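The S-RRT improvements (directional node extension and curvature-bounded smoothing) are not reproduced here, but the baseline RRT they build on can be sketched in 2-D; the workspace, obstacle and parameters below are illustrative assumptions.

```python
# Sketch: a plain 2-D RRT with goal biasing (the baseline that S-RRT improves on).
# The directional extension and curvature smoothing of S-RRT are not reproduced.
import math, random

OBSTACLE = ((4.0, 4.0), 1.5)            # illustrative circular obstacle: centre, radius
GOAL, STEP, GOAL_BIAS = (9.0, 9.0), 0.5, 0.1

def collision_free(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) > r

def rrt(start, max_iters=5000):
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        sample = GOAL if random.random() < GOAL_BIAS else (random.uniform(0, 10), random.uniform(0, 10))
        # Find the nearest existing node and step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d < 1e-9:
            continue
        new = (nx + STEP * (sample[0] - nx) / d, ny + STEP * (sample[1] - ny) / d)
        if not collision_free(new):      # point check only; a full planner also checks the edge
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, GOAL) < STEP:  # reached the goal region: backtrack the path
            path, j = [new], len(nodes) - 1
            while parent[j] is not None:
                j = parent[j]
                path.append(nodes[j])
            return list(reversed(path))
    return None

print(len(rrt((0.0, 0.0)) or []))        # number of waypoints on the found path
```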
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Development of a semi-autonomous service robot with telerobotic capabilities
NASA Technical Reports Server (NTRS)
Jones, J. E.; White, D. R.
1987-01-01
The importance to the United States of semi-autonomous systems for application to a large number of manufacturing and service processes is very clear. Two principal reasons emerge as the primary driving forces for development of such systems: enhanced national productivity and operation in environments which are hazardous to humans. Completely autonomous systems may not currently be economically feasible. However, autonomous systems that operate in a limited operational domain or that are supervised by humans are within the technology capability of this decade and will likely provide a reasonable return on investment. The two research and development efforts of autonomy and telerobotics are distinctly different, yet interconnected. The first addresses the communication of an intelligent electronic system with a robot, while the second requires human communication and ergonomic consideration. Discussed here is work in robotic control, human/robot team implementation, expert-system robot operation, and sensor development by the American Welding Institute, MTS Systems Corporation, and the Colorado School of Mines Center for Welding Research.
Sandia National Laboratories proof-of-concept robotic security vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, J.J.; Jones, D.P.; Klarer, P.R.
1989-01-01
Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.
SLAM algorithm applied to robotics assistance for navigation in unknown environments.
Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo
2010-02-17
The combination of robotic tools with assistive technology defines a little-explored area of applications and benefits for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between both, has shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for wheelchair autonomous navigation.
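On the control side, the mapping from the five MCI commands to set-points for a unicycle-type kinematic controller can be sketched as below; the speeds and the simple start/stop state handling are illustrative assumptions, not the authors' controller.

```python
# Sketch: mapping the five MCI commands named in the abstract to velocity set-points
# for a unicycle/differential-drive model. Speeds and state handling are illustrative.
def mci_to_velocity(command, driving):
    """Return (linear m/s, angular rad/s, still_driving) for an MCI command."""
    v_fwd, w_turn = 0.3, 0.6
    if command == "start":
        return v_fwd, 0.0, True
    if command in ("stop", "exit"):
        return 0.0, 0.0, False
    if not driving:
        return 0.0, 0.0, False            # turns are ignored until "start" is issued
    if command == "left":
        return v_fwd, +w_turn, True
    if command == "right":
        return v_fwd, -w_turn, True
    return 0.0, 0.0, driving

driving = False
for cmd in ["start", "left", "right", "stop"]:
    v, w, driving = mci_to_velocity(cmd, driving)
    print(cmd, v, w)
```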
Vision-based semi-autonomous outdoor robot system to reduce soldier workload
NASA Astrophysics Data System (ADS)
Richardson, Al; Rodgers, Michael H.
2001-09-01
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at a fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
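The teach-and-repeat idea described above reduces, in its simplest form, to recording poses while the robot follows the soldier and replaying them as goals on the return trip. The pose source, waypoint spacing and data layout below are illustrative assumptions.

```python
# Sketch: teach-and-repeat in its simplest form -- record poses while following the
# leader, then replay them in reverse as navigation goals. Values are illustrative.
import math

class RouteRecorder:
    def __init__(self, spacing_m=1.0):
        self.spacing_m = spacing_m
        self.route = []

    def record(self, pose_xy):
        """Store a new waypoint whenever the robot has moved far enough from the last one."""
        if not self.route or math.dist(self.route[-1], pose_xy) >= self.spacing_m:
            self.route.append(pose_xy)

    def retrace_goals(self):
        """Waypoints in reverse order, for autonomously driving back along the taught path."""
        return list(reversed(self.route))

rec = RouteRecorder()
for pose in [(0.0, 0.0), (0.4, 0.1), (1.2, 0.3), (2.5, 0.8), (3.9, 1.6)]:
    rec.record(pose)
print(rec.retrace_goals())
```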
NASA Astrophysics Data System (ADS)
Wang, Junhua; Hu, Meilin; Cai, Changsong; Lin, Zhongzheng; Li, Liang; Fang, Zhijian
2018-05-01
Wireless charging is a key technology for realizing true autonomy of mobile robots. As the core part of a wireless power transfer system, the coupling mechanism, including the coupling coils and compensation topology, is analyzed and optimized through simulations to achieve stable and practical wireless charging suitable for ordinary robots. A multi-layer coil structure, in particular a double-layer coil, is explored and selected to greatly enhance coupling performance, while the shape of the ferrite shielding undergoes distributed optimization to guarantee coil fault tolerance and cost effectiveness. On the basis of the optimized coils, the primary compensation topology is analyzed and a composite LCL compensation is adopted to stabilize operation of the primary side under variations of mutual inductance. Experimental results show that the optimized system is practical for wireless charging of robots based on magnetic resonance coupling, enabling long-term robot autonomy.
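Two of the quantities behind the compensation design, the series capacitance that resonates the primary coil at the operating frequency and the coupling coefficient obtained from the simulated inductances, can be computed directly; all component values below are illustrative, not the paper's design.

```python
# Sketch: size a series compensation capacitor for resonance at the operating
# frequency and compute the coupling coefficient. Component values are illustrative.
import math

f_op = 85e3                      # operating frequency, Hz
L_primary = 120e-6               # primary coil self-inductance, H
L_secondary = 120e-6             # secondary coil self-inductance, H
M = 30e-6                        # mutual inductance from the field simulation, H

C_series = 1.0 / ((2 * math.pi * f_op) ** 2 * L_primary)   # resonance: w^2 * L * C = 1
k = M / math.sqrt(L_primary * L_secondary)                  # coupling coefficient

print(f"series compensation capacitor: {C_series * 1e9:.1f} nF")
print(f"coupling coefficient k: {k:.3f}")
```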
Spectrally Queued Feature Selection for Robotic Visual Odometery
2010-11-23
in these systems has yet to be defined. 1. INTRODUCTION 1.1 Uses of Autonomous Vehicles Autonomous vehicles have a wide range of possible...applications. In military situations, autonomous vehicles are valued for their ability to keep Soldiers far away from danger. A robot can inspect and disarm...just a glimpse of what engineers are hoping for in the future. 1.2 Biological Influence Autonomous vehicles are becoming more of a possibility in
Advances in Robotic, Human, and Autonomous Systems for Missions of Space Exploration
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Briggs, Geoffrey A.; Glass, Brian J.; Pedersen, Liam; Kortenkamp, David M.; Wettergreen, David S.; Nourbakhsh, I.; Clancy, Daniel J.; Zornetzer, Steven (Technical Monitor)
2002-01-01
Space exploration missions are evolving toward more complex architectures involving more capable robotic systems, new levels of human and robotic interaction, and increasingly autonomous systems. How this evolving mix of advanced capabilities will be utilized in the design of new missions is a subject of much current interest. Cost and risk constraints also play a key role in the development of new missions, resulting in a complex interplay of a broad range of factors in the mission development and planning of new missions. This paper will discuss how human, robotic, and autonomous systems could be used in advanced space exploration missions. In particular, a recently completed survey of the state of the art and the potential future of robotic systems, as well as new experiments utilizing human and robotic approaches will be described. Finally, there will be a discussion of how best to utilize these various approaches for meeting space exploration goals.
Development of autonomous grasping and navigating robot
NASA Astrophysics Data System (ADS)
Kudoh, Hiroyuki; Fujimoto, Keisuke; Nakayama, Yasuichi
2015-01-01
The ability to find and grasp target items in an unknown environment is important for working robots. We developed an autonomous navigating and grasping robot. The operations are locating a requested item, moving to where the item is placed, finding the item on a shelf or table, and picking the item up from the shelf or table. To achieve these operations, we designed the robot with three functions: an autonomous navigation function that generates a map and a route in an unknown environment, an item position recognition function, and a grasping function. We tested this robot in an unknown environment. It achieved a series of operations: moving to a destination, recognizing the positions of items on a shelf, picking up an item, placing it on a cart with its hand, and returning to the starting location. The results of this experiment demonstrate the applicability of such robots to reducing human workload.
Teleautonomous guidance for mobile robots
NASA Technical Reports Server (NTRS)
Borenstein, J.; Koren, Y.
1990-01-01
Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
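The virtual force field computation summarized above, a repulsive contribution from each occupied cell of the local grid plus an attractive pull toward the target, can be sketched as follows; the grid contents, gains and fall-off are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of a virtual-force-field style steering computation: occupied cells push the
# robot away, the target pulls it forward. Gains and cell positions are illustrative.
import math

def vff_direction(robot_xy, target_xy, occupied_cells, f_att=1.0, f_rep=2.0):
    fx = fy = 0.0
    # Attractive unit force toward the target.
    dx, dy = target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1]
    d = math.hypot(dx, dy) or 1e-9
    fx += f_att * dx / d
    fy += f_att * dy / d
    # Repulsive force from each occupied cell, falling off with squared distance.
    for cx, cy in occupied_cells:
        dx, dy = robot_xy[0] - cx, robot_xy[1] - cy
        d = math.hypot(dx, dy) or 1e-9
        fx += f_rep * dx / (d ** 3)
        fy += f_rep * dy / (d ** 3)
    return math.degrees(math.atan2(fy, fx))   # commanded heading, degrees

heading = vff_direction(robot_xy=(0.0, 0.0), target_xy=(5.0, 0.0),
                        occupied_cells=[(2.0, 0.5), (2.5, 0.4)])
print(f"steer toward {heading:.1f} deg")      # veers away from the obstacles on the left
```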
Autonomy in robots and other agents.
Smithers, T
1997-06-01
The word "autonomous" has become widely used in artificial intelligence, robotics, and, more recently, artificial life and is typically used to qualify types of systems, agents, or robots: we see terms like "autonomous systems," "autonomous agents," and "autonomous robots." Its use in these fields is, however, both weak, with no distinctions being made that are not better and more precisely made with other existing terms, and varied, with no single underlying concept being involved. This ill-disciplined usage contrasts strongly with the use of the same term in other fields such as biology, philosophy, ethics, law, and human rights, for example. In all these quite different areas the concept of autonomy is essentially the same, though the language used and the aspects and issues of concern, of course, differ. In all these cases the underlying notion is one of self-law making and the closely related concept of self-identity. In this paper I argue that the loose and varied use of the term autonomous in artificial intelligence, robotics, and artificial life has effectively robbed these fields of an important concept. A concept essentially the same as we find it in biology, philosophy, ethics, and law, and one that is needed to distinguish a particular kind of agent or robot from those developed and built so far. I suggest that robots and other agents will have to be autonomous, i.e., self-law making, not just self-regulating, if they are to be able effectively to deal with the kinds of environments in which we live and work: environments which have significant large scale spatial and temporal invariant structure, but which also have large amounts of local spatial and temporal dynamic variation and unpredictability, and which lead to the frequent occurrence of previously unexperienced situations for the agents that interact with them.
Distributed cooperating processes in a mobile robot control system
NASA Technical Reports Server (NTRS)
Skillman, Thomas L., Jr.
1988-01-01
A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.
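The blackboard idea described above can be illustrated with a toy example in which knowledge sources watch a shared blackboard and post new entries when their inputs appear; the entries and knowledge sources below are illustrative, not the Flying Eye design.

```python
# Sketch: a toy blackboard control cycle -- each knowledge source reads the shared
# blackboard and posts a new fact when its inputs are present. Entries are illustrative.
blackboard = {"voice_command": "inspect truss section 4"}

def speech_understanding(bb):
    if "voice_command" in bb and "goal" not in bb:
        bb["goal"] = bb["voice_command"].replace("inspect ", "")

def path_planner(bb):
    if "goal" in bb and "path" not in bb:
        bb["path"] = ["berth", "node A", bb["goal"]]

def pilot(bb):
    if "path" in bb and "status" not in bb:
        bb["status"] = f"flying via {' -> '.join(bb['path'])}"

knowledge_sources = [speech_understanding, path_planner, pilot]
for _ in range(3):                  # a simple control cycle over the knowledge sources
    for ks in knowledge_sources:
        ks(blackboard)

print(blackboard["status"])
```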
NASA Astrophysics Data System (ADS)
Belyakov, Vladimir; Makarov, Vladimir; Zezyulin, Denis; Kurkin, Andrey; Pelinovsky, Efim
2015-04-01
Hazardous phenomena in the coastal zone cause topographic changes that are difficult to inspect by traditional methods, which is why autonomous robots are used to collect nearshore topographic and hydrodynamic measurements. The robot RTS-Hanna is a well-known example (Wubbold, F., Hentschel, M., Vousdoukas, M., and Wagner, B. Application of an autonomous robot for the collection of nearshore topographic and hydrodynamic measurements. Coastal Engineering Proceedings, 2012, vol. 33, Paper 53). We describe here several mobile systems developed in the Laboratory of Transported Machines and Transported Complexes, Nizhny Novgorod State Technical University. They can be used in field surveys and monitoring of nearshore wave regimes.
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an Autonomous Guided Vehicle robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. The autonomous navigation ability and road-following precision are mainly influenced by the control strategy and real-time control performance. Neural-network and fuzzy-logic control techniques can improve real-time control performance for mobile robots because of their high robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to identifying and classifying features of the current perceptual environment, based on a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits
NASA Astrophysics Data System (ADS)
Snider, Ross K.; Arathorn, David W.
2006-05-01
A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.
The effect of collision avoidance for autonomous robot team formation
NASA Astrophysics Data System (ADS)
Seidman, Mark H.; Yang, Shanchieh J.
2007-04-01
As technology and research advance into the era of cooperative robots, many autonomous robot team algorithms have emerged. Shape formation is a common and critical task in many cooperative robot applications. While theoretical studies of robot team formation have shown success, it is unclear whether such algorithms will perform well in a real-world environment. This work examines the effect of collision avoidance schemes on an ideal circle formation algorithm that assumes no robot-to-robot communication, though the algorithm behaves similarly if such communication is in place. Our findings reveal that robots with basic collision avoidance capabilities are still able to form a circle under most conditions. Moreover, robot sizes, sensing ranges, and other critical physical parameters are examined to determine their effects on the algorithm's performance.
An integrated design and fabrication strategy for entirely soft, autonomous robots.
Wehner, Michael; Truby, Ryan L; Fitzgerald, Daniel J; Mosadegh, Bobak; Whitesides, George M; Lewis, Jennifer A; Wood, Robert J
2016-08-25
Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.
Experiments in autonomous robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, W.R.
1987-01-01
The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.
Sample Return Robot Centennial Challenge
2012-06-16
NASA Program Manager for Centennial Challenges Sam Ortega helps show a young visitor how to drive a rover as part of the interactive NASA Mars rover exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Sample Return Robot Centennial Challenge
2012-06-16
NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck stop to look at the bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" that is wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Autonomous Legged Hill and Stairwell Ascent
2011-11-01
environments with little burden to a human operator. Keywords: autonomous robot, hill climbing, stair climbing, sequential composition, hexapod, self... X-RHex robot on a set of stairs with laser scanner, IMU, wireless repeater, and handle payloads. ...making them useful for both climbing hills and... reconciliation into that more powerful (but restrictive) framework. 1) The Stair Climbing Behavior: RHex robots have been climbing single-flight stairs...
Semi-autonomous mine detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglas Few; Roelof Versteeg; Herman Herman
2010-04-01
CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, countermine sensors, and a number of integrated software packages which provide for real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which, from an autonomous robotic perspective, preclude the rapid development and deployment of fieldable systems.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team KuuKulgur waits to begin the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant
NASA Astrophysics Data System (ADS)
Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter
2017-09-01
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first on the product structure, we inform the agent through a generic approach based on product models. The product model is then transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate "the problem of loosening a connection and the distribution of the work" as a search problem. The created detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs to carry out the assistance. The aim of this research is to equip robot systems with knowledge and skills that allow them to perform their assistance autonomously and ultimately improve the ergonomics of disassembly workstations.
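As a minimal illustration of the product-model-to-graph step described above, the sketch below encodes precedence constraints between connections as a directed graph and derives one coarse disassembly order by topological sorting. The example product, the edge encoding, and the use of Kahn's algorithm are assumptions of this sketch; the paper's agent and product models are not reproduced here.

```python
# Sketch: deriving a coarse disassembly order from a product model expressed as
# a directed graph ("A -> B" means connection A must be loosened before B).
# The example product structure and graph encoding are illustrative assumptions.
from collections import defaultdict, deque

def coarse_plan(edges):
    """Return one valid disassembly order via topological sort (Kahn's algorithm)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for before, after in edges:
        succ[before].append(after)
        indeg[after] += 1
        nodes.update((before, after))
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("cyclic product model: no valid disassembly order")
    return order

if __name__ == "__main__":
    # Hypothetical product: cover screws must come out before the cover,
    # the cover before the battery and the PCB.
    edges = [("screw_1", "cover"), ("screw_2", "cover"),
             ("cover", "battery"), ("cover", "pcb")]
    print(coarse_plan(edges))
```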
Controlling Herds of Cooperative Robots
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.
2006-01-01
A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.
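A potential-fields approach of the kind proposed can be sketched very simply: each robot descends an artificial field that combines attraction to a goal (or to the leader's command point) with repulsion from obstacles and nearby herd members. The gains, influence radius, and point-robot kinematics below are illustrative assumptions, not the document's formulation.

```python
# Sketch of potential-field guidance for one herd member: attraction to a goal
# plus repulsion from obstacles and from nearby robots. Gains, ranges, and the
# point-robot model are illustrative assumptions.
import math

def potential_field_step(pos, goal, obstacles, neighbors,
                         k_att=1.0, k_rep=0.5, influence=2.0, step=0.1):
    """Return the next 2-D position after one gradient-descent step on the field."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in list(obstacles) + list(neighbors):
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < influence:
            push = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += push * dx / d
            fy += push * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

if __name__ == "__main__":
    p = (0.0, 0.0)
    for _ in range(50):
        p = potential_field_step(p, goal=(5.0, 5.0),
                                 obstacles=[(2.5, 2.0)], neighbors=[(1.0, 0.5)])
    print(p)   # position after 50 steps, drifting toward the goal around the obstacle
```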
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harber, K.S.; Pin, F.G.
1990-03-01
The US DOE Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) and the Commissariat a l'Energie Atomique's (CEA) Office de Robotique et Productique within the Directorat a la Valorization are working toward a long-term cooperative agreement and relationship in the area of Intelligent Systems Research (ISR). This report presents the proceedings of the first CESAR/CEA Workshop on Autonomous Mobile Robots which took place at ORNL on May 30, 31 and June 1, 1989. The purpose of the workshop was to present and discuss methodologies and algorithms under development at the two facilities in themore » area of perception and navigation for autonomous mobile robots in unstructured environments. Experimental demonstration of the algorithms and comparison of some of their features were proposed to take place within the framework of a previously mutually agreed-upon demonstration scenario or base-case.'' The base-case scenario described in detail in Appendix A, involved autonomous navigation by the robot in an a priori unknown environment with dynamic obstacles, in order to reach a predetermined goal. From the intermediate goal location, the robot had to search for and locate a control panel, move toward it, and dock in front of the panel face. The CESAR demonstration was successfully accomplished using the HERMIES-IIB robot while subsets of the CEA demonstration performed using the ARES robot simulation and animation system were presented. The first session of the workshop focused on these experimental demonstrations and on the needs and considerations for establishing benchmarks'' for testing autonomous robot control algorithms.« less
Navigation strategies for multiple autonomous mobile robots moving in formation
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1991-01-01
The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
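A minimal sketch of the nearest-neighbor tracking idea follows: each robot steers toward a fixed offset from its nearest neighbor, so the fleet settles into and maintains a formation pattern as it moves. The offsets, gain, and first-order kinematics are assumptions of the sketch rather than the paper's equations of motion.

```python
# Sketch of nearest-neighbor tracking for formation keeping: each robot moves
# toward (nearest neighbor + desired offset). Offsets, gain, and the first-order
# kinematic model are illustrative assumptions.
import math

def formation_step(positions, offsets, gain=0.5, dt=0.1):
    """One Euler step: robot i tracks its nearest neighbor plus its desired offset."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Find the nearest neighbor (excluding self).
        j = min((k for k in range(len(positions)) if k != i),
                key=lambda k: math.hypot(positions[k][0] - x, positions[k][1] - y))
        tx = positions[j][0] + offsets[i][0]
        ty = positions[j][1] + offsets[i][1]
        new_positions.append((x + gain * (tx - x) * dt, y + gain * (ty - y) * dt))
    return new_positions

if __name__ == "__main__":
    robots = [(0.0, 0.0), (1.5, 0.2), (0.3, 1.4)]       # initial positions
    offsets = [(-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]     # desired offsets from neighbor
    for _ in range(100):
        robots = formation_step(robots, offsets)
    print(robots)
```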
Working and Learning with Knowledge in the Lobes of a Humanoid's Mind
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David
2003-01-01
Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a joint project of NASA and the Defense Advanced Research Projects Agency (DARPA), and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task level sequences, and sensorimotor association).
Semi-autonomous exploration of multi-floor buildings with a legged robot
NASA Astrophysics Data System (ADS)
Wenger, Garrett J.; Johnson, Aaron M.; Taylor, Camillo J.; Koditschek, Daniel E.
2015-05-01
This paper presents preliminary results of a semi-autonomous building exploration behavior using the hexapedal robot RHex. Stairwells are used in virtually all multi-floor buildings, and so in order for a mobile robot to effectively explore, map, clear, monitor, or patrol such buildings it must be able to ascend and descend stairwells. However most conventional mobile robots based on a wheeled platform are unable to traverse stairwells, motivating use of the more mobile legged machine. This semi-autonomous behavior uses a human driver to provide steering input to the robot, as would be the case in, e.g., a tele-operated building exploration mission. The gait selection and transitions between the walking and stair climbing gaits are entirely autonomous. This implementation uses an RGBD camera for stair acquisition, which offers several advantages over a previously documented detector based on a laser range finder, including significantly reduced acquisition time. The sensor package used here also allows for considerable expansion of this behavior. For example, complete automation of the building exploration task driven by a mapping algorithm and higher level planner is presently under development.
Behavior-based multi-robot collaboration for autonomous construction tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.
A robotic system for researching social integration in honeybees.
Griparić, Karlo; Haus, Tomislav; Miklić, Damjan; Polić, Marsela; Bogdan, Stjepan
2017-01-01
In this paper, we present a novel robotic system developed for researching collective social mechanisms in a biohybrid society of robots and honeybees. The potential for distributed coordination, as observed in nature in many different animal species, has caused an increased interest in collective behaviour research in recent years because of its applicability to a broad spectrum of technical systems requiring robust multi-agent control. One of the main problems is understanding the mechanisms driving the emergence of collective behaviour of social animals. With the aim of deepening the knowledge in this field, we have designed a multi-robot system capable of interacting with honeybees within an experimental arena. The final product, stationary autonomous robot units designed by specifically considering the physical, sensorimotor and behavioral characteristics of the honeybee (Apis mellifera), are equipped with sensing, actuating, computation, and communication capabilities that enable the measurement of relevant environmental states, such as honeybee presence, and an adequate response to the measurements by generating heat, vibration and airflow. The coordination among robots in the developed system is established using distributed controllers. The cooperation between the two different types of collective systems is realized by means of a consensus algorithm, enabling the honeybees and the robots to achieve a common objective. The presented results, obtained within the ASSISIbf project, show successful cooperation, indicating its potential for future applications.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
SLAM algorithm applied to robotics assistance for navigation in unknown environments
2010-01-01
Background: The combination of robotic tools with assistance technology defines a scarcely explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods: In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents. Results: The entire system was tested with seven volunteers: three elderly, two below-elbow amputees and two young normally limbed participants. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment, and the obtained map was stored inside the Muscle-Computer Interface. Conclusions: The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between them, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation. PMID:20163735
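For orientation, here is a stripped-down EKF predict/update cycle in the spirit of the feature-based SLAM described above, with the state reduced to the robot pose plus a single point feature. The unicycle motion model, range/bearing measurement model, and noise values are simplifying assumptions; the paper's line-and-corner feature parameterization is not reproduced.

```python
# Minimal EKF predict/update sketch for a state [x, y, theta, lx, ly]: robot pose
# plus one static point feature. Models and noise values are illustrative
# assumptions, not the published algorithm.
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Predict the robot pose with a unicycle model; the feature (lx, ly) is static."""
    theta = x[2]
    x = x.copy()
    x[0] += v * np.cos(theta) * dt
    x[1] += v * np.sin(theta) * dt
    x[2] += w * dt
    F = np.eye(5)
    F[0, 2] = -v * np.sin(theta) * dt
    F[1, 2] =  v * np.cos(theta) * dt
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, R):
    """Update with a range/bearing measurement z = (r, phi) of the feature."""
    dx, dy = x[3] - x[0], x[4] - x[1]
    q = dx ** 2 + dy ** 2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([
        [-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0,  dx / np.sqrt(q),  dy / np.sqrt(q)],
        [ dy / q,          -dx / q,         -1.0, -dy / q,           dx / q        ]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(5) - K @ H) @ P

if __name__ == "__main__":
    x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])
    P = np.eye(5) * 0.1
    x, P = ekf_predict(x, P, v=1.0, w=0.1, dt=0.1, Q=np.eye(5) * 0.01)
    x, P = ekf_update(x, P, z=np.array([2.1, 0.45]), R=np.eye(2) * 0.05)
    print(x)
```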
Autonomous mobile robot research using the HERMIES-III robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Beckerman, M.; Spelt, P.F.
1989-01-01
This paper reports on the status and future directions of the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III is comprised of a battery-powered, omni-directional wheeled platform with a seven degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner, and a dual computer system containing up to 128 NCUBE nodes in hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations necessarily arising from the set-up of an experimental program involving human-scale tasks and multiple autonomous mobile robots. 10 refs., 3 figs.
Mergeable nervous systems for robots.
Mathews, Nithin; Christensen, Anders Lyhne; O'Grady, Rehan; Mondada, Francesco; Dorigo, Marco
2017-09-12
Robots have the potential to display a higher degree of lifetime morphological adaptation than natural organisms. By adopting a modular approach, robots with different capabilities, shapes, and sizes could, in theory, construct and reconfigure themselves as required. However, current modular robots have only been able to display a limited range of hardwired behaviors because they rely solely on distributed control. Here, we present robots whose bodies and control systems can merge to form entirely new robots that retain full sensorimotor control. Our control paradigm enables robots to exhibit properties that go beyond those of any existing machine or of any biological organism: the robots we present can merge to form larger bodies with a single centralized controller, split into separate bodies with independent controllers, and self-heal by removing or replacing malfunctioning body parts. This work takes us closer to robots that can autonomously change their size, form and function. Robots that can self-assemble into different morphologies are desired to perform tasks that require different physical capabilities. Mathews et al. design robots whose bodies and control systems can merge and split to form new robots that retain full sensorimotor control and act as a single entity.
Material identification based on electrostatic sensing technology
NASA Astrophysics Data System (ADS)
Liu, Kai; Chen, Xi; Li, Jingnan
2018-04-01
When a robot travels on the surface of different media, the uncertainty of the medium seriously affects the robot's autonomous action. In this paper, the distribution characteristics of multiple electrostatic charges on the surface of materials are detected in order to improve the accuracy of existing electrostatic-signal material identification methods, which is of great significance in helping the robot optimize its control algorithm. Building on the electrostatic-signal material identification method proposed in previous work, a multi-channel detection circuit is used to obtain the electrostatic charge distribution at different positions of the material surface, weights are introduced into the eigenvalue matrix, and the weight distribution is optimized by an evolutionary algorithm, which makes the eigenvalue matrix reflect the surface charge distribution characteristics of the material more accurately. The matrix is used as the input of a k-Nearest Neighbor (kNN) classification algorithm to classify the dielectric materials. The experimental results show that the proposed method can significantly improve the recognition rate of existing electrostatic-signal material recognition methods.
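A minimal sketch of the weighted-feature kNN step is shown below: multi-channel surface-charge features are scaled by per-channel weights before nearest-neighbor voting. The feature values, labels, and weights are invented for illustration, and the evolutionary optimization of the weights described in the paper is not reproduced.

```python
# Sketch of weighted-feature kNN classification: multi-channel electrostatic
# readings are scaled by per-channel weights before nearest-neighbor voting.
# Features, labels, and weights are illustrative assumptions.
import numpy as np
from collections import Counter

def knn_classify(sample, train_x, train_y, weights, k=3):
    """Classify one weighted feature vector by majority vote among k nearest neighbors."""
    diffs = (np.asarray(train_x) - np.asarray(sample)) * np.asarray(weights)
    dists = np.linalg.norm(diffs, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(np.asarray(train_y)[nearest]).most_common(1)[0][0]

if __name__ == "__main__":
    # Hypothetical 3-channel surface-charge features for two material classes.
    train_x = [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2], [0.1, 0.7, 0.9], [0.2, 0.8, 0.8]]
    train_y = ["plastic", "plastic", "wood", "wood"]
    weights = [1.0, 0.5, 1.5]          # assumed per-channel weights
    print(knn_classify([0.15, 0.75, 0.85], train_x, train_y, weights))
```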
An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence
NASA Technical Reports Server (NTRS)
Lindley, Craig A.
1993-01-01
This paper describes a method for the design of autonomous spacecraft, based upon behavioral approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Agency technological center (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioral control framework, and different mappings from the logical control model to its physical implementation.
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for assisting the navigation of a robotic wheelchair is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface supports two navigation modes: non-autonomous and autonomous. Non-autonomous driving of the robotic wheelchair is performed by means of a hand joystick, which directs the motion of the vehicle within the environment. Autonomous driving is performed when the user of the wheelchair has to turn (90, 90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results of the interface are also shown in this work.
Tracked robot controllers for climbing obstacles autonomously
NASA Astrophysics Data System (ADS)
Vincent, Isabelle
2009-05-01
Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has seen very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box- and stair-climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.
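The two-layer structure described above can be sketched as a geometric reactive rule backed by a tabular Q-learning fallback that engages when the robot stops making progress. The states, actions, rewards, and the toy simulator hook in the sketch are illustrative assumptions, not the paper's controllers.

```python
# Sketch of a two-layer climbing controller: a reactive rule picks a track
# configuration from terrain geometry, and a tabular Q-learning fallback takes
# over when the robot is stuck. States, actions, and rewards are assumptions.
import random

ACTIONS = ["flippers_up", "flippers_down", "flippers_level"]

def reactive_config(obstacle_height, track_length=0.5):
    """Simple geometric rule: raise the flippers for obstacles the tracks can climb."""
    if obstacle_height <= 0.05:
        return "flippers_level"
    return "flippers_up" if obstacle_height < track_length else "flippers_down"

def q_learning_fallback(q, state, reward_fn, alpha=0.5, gamma=0.9, epsilon=0.2, steps=200):
    """Refine the action choice for a stuck state with tabular Q-learning."""
    for _ in range(steps):
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q.get((state, act), 0.0)))
        reward, next_state = reward_fn(state, a)
        best_next = max(q.get((next_state, act), 0.0) for act in ACTIONS)
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state
    return max(ACTIONS, key=lambda act: q.get((state, act), 0.0))

if __name__ == "__main__":
    def dummy_reward(state, action):          # toy stand-in for a climbing simulator
        return (1.0 if action == "flippers_up" else -0.1), state
    print(reactive_config(0.2))
    print(q_learning_fallback({}, "stuck_on_step", dummy_reward))
```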
Crew/Robot Coordinated Planetary EVA Operations at a Lunar Base Analog Site
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Bluethmann, W. J.; Delgado, F. J.; Herrera, E.; Kosmo, J. J.; Janoiko, B. A.; Wilcox, B. H.; Townsend, J. A.; Matthews, J. B.;
2007-01-01
Under the direction of NASA's Exploration Technology Development Program, robots and space-suited subjects from several NASA centers recently completed a very successful demonstration of coordinated activities indicative of base camp operations on the lunar surface. For these activities, NASA chose a site near Meteor Crater, Arizona, close to where Apollo astronauts previously trained. The main scenario demonstrated crew returning from a planetary EVA (extra-vehicular activity) to a temporary base camp and entering a pressurized rover compartment while robots performed tasks in preparation for the next EVA. Scenario tasks included: rover operations under direct human control and autonomous modes, crew ingress and egress activities, autonomous robotic payload removal and stowage operations under both local control and remote control from Houston, and autonomous robotic navigation and inspection. In addition to the main scenario, participants had an opportunity to explore additional robotic operations: hill climbing, maneuvering heavy loads, gathering geological samples, drilling, and tether operations. In this analog environment, the suited subjects and robots experienced high levels of dust, rough terrain, and harsh lighting.
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
Fully decentralized control of a soft-bodied robot inspired by true slime mold.
Umedachi, Takuya; Takeda, Koichi; Nakagaki, Toshiyuki; Kobayashi, Ryo; Ishiguro, Akio
2010-03-01
Animals exhibit astoundingly adaptive and supple locomotion under real-world constraints. In order to endow robots with similar capabilities, we must implement many degrees of freedom, equivalent to animals, into the robots' bodies. For taming many degrees of freedom, the concept of autonomous decentralized control plays a pivotal role. However, a systematic way of designing such an autonomous decentralized control system is still missing. Aiming at understanding the principles that underlie animals' locomotion, we have focused on a true slime mold, a primitive living organism, and extracted a design scheme for an autonomous decentralized control system. In order to validate this design scheme, this article presents a soft-bodied amoeboid robot inspired by the true slime mold. Significant features of this robot are twofold: (1) the robot has a truly soft and deformable body stemming from real-time tunable springs and protoplasm, the former used for the outer skin of the body and the latter to satisfy the law of conservation of mass; and (2) fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism is realized by exploiting the long-distance physical interaction between body parts stemming from the law of conservation of protoplasmic mass. Simulation results show that this robot exhibits highly supple and adaptive locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on design methodology for autonomous decentralized control systems.
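A minimal sketch of the decentralized idea follows: each body unit runs its own phase oscillator, coupled only to its immediate neighbors and modulated by a locally sensed feedback term, with no central coordinator. The coupling constants and the "tension" feedback model are assumptions of this sketch, not the published protoplasm-based mechanism.

```python
# Sketch of fully decentralized control with coupled phase oscillators and local
# feedback: each unit adjusts its phase using only its neighbors' phases and a
# locally sensed "tension" value. Constants and the tension model are assumptions.
import math

def step_oscillators(phases, local_tension, omega=2.0, coupling=0.8, sigma=0.5, dt=0.02):
    """Advance each unit's phase using neighbor coupling plus local sensory feedback."""
    n = len(phases)
    new = []
    for i in range(n):
        left, right = phases[(i - 1) % n], phases[(i + 1) % n]
        dphi = omega
        dphi += coupling * (math.sin(left - phases[i]) + math.sin(right - phases[i]))
        dphi -= sigma * local_tension[i] * math.cos(phases[i])   # local feedback term
        new.append(phases[i] + dphi * dt)
    return new

if __name__ == "__main__":
    phases = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    tension = [0.1] * 6            # locally measured body deformation (toy values)
    for _ in range(500):
        phases = step_oscillators(phases, tension)
    print([round(p % (2 * math.pi), 2) for p in phases])
```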
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot in the real-time detection of objects using their color and in the processing of the robot's range and vision sensor data for navigation.
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design ideas for an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life with dependability and interaction in mind. The developed robot has three objectives: (1) develop an autonomous robot; (2) design the robot for mobility and robustness; and (3) develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented using active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed with aesthetic elements in mind while minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and an Internet site connection are provided for remote connection services and educational purposes.
Research state-of-the-art of mobile robots in China
NASA Astrophysics Data System (ADS)
Wu, Lin; Zhao, Jinglun; Zhang, Peng; Li, Shiqing
1991-03-01
Several newly developed mobile robots in China are described in this paper, including a master-slave telerobot, a six-legged robot, a biped walking robot, a remote inspection robot, a crawler-type moving robot, and an autonomous mobile vehicle. Some relevant technologies are also described.
Full autonomous microline trace robot
NASA Astrophysics Data System (ADS)
Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan
2000-10-01
Optoelectric inspection may find applications in robotic systems. In a micro robotic system, a smaller optoelectric inspection system is preferred. However, as the size of the robot is miniaturized, the number of optoelectric detectors becomes limited, and this lack of information makes it difficult for the micro robot to determine its status. In our lab, a micro line-trace robot has been designed that acts autonomously based on its optoelectric detection. It has been programmed to follow a black line printed on white-colored ground. Besides the optoelectric inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence. The robot's intelligence is based on an AT89C2051 microcontroller which controls its movement. The technical details of the micro robot are as follows: dimensions 30 mm x 25 mm x 35 mm; velocity 60 mm/s.
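For illustration only, the kind of simple logical rule such a line-tracing robot might run can be sketched as below. The two-detector layout and wheel commands are assumptions of this sketch, and the actual robot implements its logic on an AT89C2051 microcontroller rather than in Python.

```python
# Toy sketch of a simple line-following rule: two reflectance detectors straddle
# the black line and the wheel commands are chosen from their binary readings.
# The sensor layout and commands are assumed, not taken from the paper.
def line_follow_step(left_on_line, right_on_line):
    """Return (left_wheel, right_wheel) speeds from two boolean line detectors."""
    if left_on_line and right_on_line:
        return (1.0, 1.0)       # centered on the line: go straight
    if left_on_line:
        return (0.2, 1.0)       # line drifting left: turn left
    if right_on_line:
        return (1.0, 0.2)       # line drifting right: turn right
    return (0.5, -0.5)          # line lost: rotate in place to search

if __name__ == "__main__":
    print(line_follow_step(True, False))   # -> (0.2, 1.0), steer back toward the line
```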
Fusing Laser Reflectance and Image Data for Terrain Classification for Small Autonomous Robots
2014-12-01
limit us to low power, lightweight sensors, and a maximum range of approximately 5 meters. Contrast these robot characteristics to typical terrain... classification work which uses large autonomous ground vehicles with sensors mounted high above the ground. Terrain classification for small autonomous... into predefined classes [10], [11]. However, wheeled vehicles offer the ability to use non-traditional sensors such as vibration sensors [12] and...
INL Autonomous Navigation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.
A simple, inexpensive, and effective implementation of a vision-guided autonomous robot
NASA Astrophysics Data System (ADS)
Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James
2006-10-01
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. In order to control the wheelchair while retaining its robust motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
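A minimal sketch of the purely reactive variant described above: the image is segmented by color thresholds into boundary-line and obstacle pixels, and the robot steers away from the side of the image containing more hazard pixels. The thresholds and the column-voting rule are illustrative assumptions, not the team's implementation.

```python
# Sketch of color-segmentation-based reactive steering: threshold the image into
# white-line and orange-obstacle masks, then steer away from the side with more
# hazard pixels. Thresholds and the voting rule are illustrative assumptions.
import numpy as np

def segment(image_rgb):
    """Boolean mask of white boundary lines and orange obstacles (assumed thresholds)."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    white = (r > 200) & (g > 200) & (b > 200)
    orange = (r > 180) & (g > 80) & (g < 160) & (b < 80)
    return white | orange

def reactive_steer(image_rgb):
    """Return a steering command in [-1, 1]: positive steers right, negative left."""
    hazard = segment(image_rgb)
    h, w = hazard.shape
    left = hazard[h // 2:, : w // 2].sum()      # only the lower (nearby) half matters
    right = hazard[h // 2:, w // 2:].sum()
    total = left + right
    return 0.0 if total == 0 else (left - right) / total

if __name__ == "__main__":
    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    frame[80:, :40] = (255, 120, 40)            # orange obstacle in the lower-left
    print(reactive_steer(frame))                # positive -> steer right, away from it
```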
Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
MANCINI,THOMAS R.
2001-04-01
Several years ago Sandia National Laboratories developed a prototype interior robot [1] that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities.
On-rail solution for autonomous inspections in electrical substations
NASA Astrophysics Data System (ADS)
Silva, Bruno P. A.; Ferreira, Rafael A. M.; Gomes, Selson C.; Calado, Flavio A. R.; Andrade, Roberto M.; Porto, Matheus P.
2018-05-01
This work presents an alternative solution for autonomous inspections in electrical substations. The autonomous system is a robot that moves on rails, collects infrared and visible images of selected targets, processes the data, and predicts component lifetimes. The robot moves on rails to overcome the difficulties posed by unpaved substations, which are common in Brazil. We take advantage of the rails to convey data along them, minimizing electromagnetic interference, while at the same time transmitting electrical energy to feed the autonomous system. As part of the quality control process, we compared thermographic inspections made by the robot with inspections made by a trained thermographer using a scientific camera Flir® SC660. The results show that the robot achieved satisfactory performance, identifying components and measuring temperature accurately. The embedded routine accounts for weather changes over the day, providing a standardized result for the components' thermal response, and also gives the uncertainty of the temperature measurement, contributing to quality in the decision-making process.
Adaptive Behavior for Mobile Robots
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2009-01-01
The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.
Autonomous learning in humanoid robotics through mental imagery.
Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo
2013-05-01
In this paper we focus on modeling autonomous learning to improve performance of a humanoid robot through a modular artificial neural networks architecture. A model of a neural controller is presented, which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.
AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program
NASA Astrophysics Data System (ADS)
Gothard, Benny M.
2002-02-01
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (Reference: "A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation"). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec-G4 PowerPC (PPC) (1998 Motorola, Inc.), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code will be benchmarked and compared against the previous Demo II Semi-autonomous Surrogate Vehicle processing architectures along with desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
On-Line Point Positioning with Single Frame Camera Data
1992-03-15
tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data: The project has been defined as "On-line point... development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation... Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile...
Inexpensive robots used to teach dc circuits and electronics
NASA Astrophysics Data System (ADS)
Sidebottom, David L.
2017-05-01
This article describes inexpensive, autonomous robots, built without microprocessors, used in a college-level introductory physics laboratory course to motivate student learning of dc circuits. Detailed circuit descriptions are provided as well as a week-by-week course plan that can guide students from elementary dc circuits, through Kirchhoff's laws, and into simple analog integrated circuits with the motivational incentive of building an autonomous robot that can compete with others in a public arena.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Sample Return Robot Challenge staff members confer before the Team Survey robot makes its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
A robot from the University of Waterloo Robotics Team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Robot Lies in Health Care: When Is Deception Morally Permissible?
Matthias, Andreas
2015-06-01
Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). This poses the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled in order for robot deception to be morally permissible, and in some cases even morally indicated.
Supervised autonomous robotic soft tissue surgery.
Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W
2016-05-04
The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery-removing the surgeon's hands-promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis-including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses-between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques. Copyright © 2016, American Association for the Advancement of Science.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team KuuKulgur watches as their robots attempt the level one competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Retrievers team robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Plugin-docking system for autonomous charging using particle filter
NASA Astrophysics Data System (ADS)
Koyasu, Hiroshi; Wada, Masayoshi
2017-03-01
Autonomous charging of the robot battery is one of the key functions for expanding the working areas of robots. To realize it, most existing systems use custom docking stations or artificial markers. In other words, they can only charge at a few specific outlets. If this limitation is removed, the working areas of the robots expand significantly. In this paper, we describe a plugin-docking system for autonomous charging that does not require any custom docking stations or artificial markers. A single camera is used for recognizing the 3D position of an outlet socket. A particle filter-based image tracking algorithm that is robust to illumination changes is applied. The algorithm is implemented on a robot with an omnidirectional moving system. The experimental results show the effectiveness of our system.
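The abstract gives no implementation detail, but the particle-filter tracking idea it describes can be sketched generically: candidate image positions of the socket are perturbed by a motion model, weighted by an appearance likelihood, and resampled. The Python sketch below is a minimal illustration under stated assumptions; n_particles, motion_std, and the template_score callback are placeholders, not parameters from the paper.

    import numpy as np

    def track_outlet(frames, template_score, n_particles=500, motion_std=5.0):
        """Toy particle filter over 2-D image positions of an outlet socket.
        frames         : iterable of grayscale images (H x W arrays)
        template_score : callable(image, (x, y)) -> likelihood of the socket at (x, y)
        """
        particles = None
        for frame in frames:
            if particles is None:
                h, w = frame.shape[:2]
                # Initialise particles uniformly over the first image.
                particles = np.column_stack([np.random.uniform(0, w, n_particles),
                                             np.random.uniform(0, h, n_particles)])
            # Prediction: random-walk motion model, clipped to the image.
            particles += np.random.normal(0.0, motion_std, particles.shape)
            particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
            particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
            # Update: weight each particle by the appearance likelihood.
            weights = np.array([template_score(frame, p) for p in particles])
            total = weights.sum()
            weights = weights / total if total > 0 else np.full(n_particles, 1.0 / n_particles)
            # Resample proportionally to the weights, then report the mean estimate.
            particles = particles[np.random.choice(n_particles, n_particles, p=weights)]
            yield particles.mean(axis=0)

Systematic resampling and an illumination-normalised appearance score would be the natural refinements for the robustness the paper targets.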
A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb
Sánchez-Herrera, P.; Balaguer, C.; Jardón, A.
2018-01-01
Robot-mediated neurorehabilitation is a growing field that seeks to incorporate advances in robotics combined with neuroscience and rehabilitation to define new methods for treating problems related to neurological diseases. In this paper, a systematic literature review is conducted to identify the contribution of robotics to upper limb neurorehabilitation, highlighting its relation with the rehabilitation cycle, and to clarify the prospective research directions in the development of more autonomous rehabilitation processes. With this aim, first, a study and definition of a general rehabilitation process are made, and then, it is particularized for the case of neurorehabilitation, identifying the components involved in the cycle and the degree of interaction among them. Next, this generic process is compared with the current literature in robotics focused on upper limb treatment, analyzing which components of this rehabilitation cycle are being investigated. Finally, the challenges and opportunities to obtain more autonomous rehabilitation processes are discussed. In addition, based on this study, a series of technical requirements that should be taken into account when designing and implementing autonomous robotic systems for rehabilitation is presented and discussed. PMID:29707189
Dynamic multisensor fusion for mobile robot navigation in an indoor environment
NASA Astrophysics Data System (ADS)
Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.
2001-10-01
This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining sonar, a CCD camera, and IR sensors, for map building and navigation, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent thorough books and review papers already cover them; instead we focus on the main results relevant to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We first deal with the general principles of the navigation and guidance architecture, then with the detailed functions for environment recognition and map updating, obstacle detection, and motion assessment, together with the first results from the simulation runs. We conclude by discussing possible future extensions of the project.
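As a deliberately simplified illustration of the kind of multi-sensor fusion described (and not the scheme used in the study), independent range readings taken along the same bearing can be combined by inverse-variance weighting; the sensor variances below are made-up example values.

    import numpy as np

    def fuse_ranges(readings):
        """Inverse-variance (maximum-likelihood) fusion of independent range readings.
        readings : list of (range_m, variance) tuples from different sensors
                   (e.g. sonar, CCD-derived stereo, IR) looking along the same bearing.
        Returns the fused range and its variance.
        """
        z = np.array([r for r, _ in readings])
        var = np.array([v for _, v in readings])
        w = 1.0 / var                       # more certain sensors get more weight
        return float(np.sum(w * z) / np.sum(w)), float(1.0 / np.sum(w))

    # Example: sonar reads 2.1 m (var 0.04), IR reads 1.9 m (var 0.01).
    print(fuse_ranges([(2.1, 0.04), (1.9, 0.01)]))  # fused estimate sits closer to the IR reading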
NASA Astrophysics Data System (ADS)
Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi
This paper describes an interactive experimental environment for autonomous soccer robots, which is a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize the positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that virtual objects are touchable in this system owing to projectors. We also show the portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. Our robots autonomously learn and acquire more beneficial strategies without human intervention in our augmented environment than those in a fully simulated environment.
Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel
2013-01-01
To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when there are no maps available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal. PMID:23271604
Soft Dielectric Elastomer Oscillators Driving Bioinspired Robots.
Henke, E-F Markus; Schlatter, Samuel; Anderson, Iain A
2017-12-01
Entirely soft robots with animal-like behavior and integrated artificial nervous systems will open up totally new perspectives and applications. To produce them, we must integrate control and actuation in the same soft structure. Soft actuators (e.g., pneumatic and hydraulic) exist but electronics are hard and stiff and remotely located. We present novel soft, electronics-free dielectric elastomer oscillators, which are able to drive bioinspired robots. As a demonstrator, we present a robot that mimics the crawling motion of the caterpillar, with an integrated artificial nervous system, soft actuators and without any conventional stiff electronic parts. Supplied with an external DC voltage, the robot autonomously generates all signals that are necessary to drive its dielectric elastomer actuators, and it translates an in-plane electromechanical oscillation into a crawling locomotion movement. Therefore, all functional and supporting parts are made of polymer materials and carbon. Besides the basic design of this first electronic-free, biomimetic robot, we present prospects to control the general behavior of such robots. The absence of conventional stiff electronics and the exclusive use of polymeric materials will provide a large step toward real animal-like robots, compliant human machine interfaces, and a new class of distributed, neuron-like internal control for robotic systems.
Autonomous Realtime Threat-Hunting Robot (ARTHR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
INL
2008-05-29
Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats.
Supervisory autonomous local-remote control system design: Near-term and far-term applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul
1993-01-01
The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.
An architectural approach to create self organizing control systems for practical autonomous robots
NASA Technical Reports Server (NTRS)
Greiner, Helen
1991-01-01
For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles towards the construction of more general robot control systems are as follows: (1) Growth problem; (2) Software generation; (3) Interaction with environment; (4) Reliability; and (5) Resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.
Autonomous bone reposition around anatomical landmark for robot-assisted orthognathic surgery.
Woo, Sang-Yoon; Lee, Sang-Jeong; Yoo, Ji-Yong; Han, Jung-Joon; Hwang, Soon-Jung; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Yi, Won-Jin
2017-12-01
The purpose of this study was to develop a new method for enabling a robot to assist a surgeon in repositioning a bone segment to accurately transfer a preoperative virtual plan into the intraoperative phase in orthognathic surgery. We developed a robot system consisting of an arm with six degrees of freedom, a robot motion-controller, and a PC. An end-effector at the end of the robot arm transferred the movements of the robot arm to the patient's jawbone. The registration between the robot and CT image spaces was performed completely preoperatively, and the intraoperative registration could be finished using only position changes of the tracking tools at the robot end-effector and the patient's splint. The phantom's maxillomandibular complex (MMC) connected to the robot's end-effector was repositioned autonomously by the robot movements around an anatomical landmark of interest based on the tool center point (TCP) principle. The robot repositioned the MMC around the TCP of the incisor of the maxilla and the pogonion of the mandible following plans for real orthognathic patients. The accuracy of the robot's repositioning increased when an anatomical landmark for the TCP was close to the registration fiducials. In spite of this influence, we could increase the repositioning accuracy at the landmark by using the landmark itself as the TCP. With its ability to incorporate virtual planning using a CT image and autonomously execute the plan around an anatomical landmark of interest, the robot could help surgeons reposition bones more accurately and dexterously. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
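At its core, repositioning about a chosen landmark with the tool center point (TCP) principle is rotation of a rigid body about an arbitrary fixed point: conjugate the rotation with translations so the landmark maps to itself. The sketch below shows only that geometry; the incisor coordinates are hypothetical and the real system additionally performs registration between the robot, tracking-tool, and CT spaces.

    import numpy as np

    def rotation_about_point(R, p):
        """Homogeneous 4x4 transform that applies rotation R (3x3) about the point p (3,).
        Choosing t = p - R @ p guarantees the landmark p is a fixed point of the motion.
        """
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = p - R @ p
        return T

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    # Rotate a bone segment 5 degrees about an upper-incisor landmark (illustrative coordinates, mm).
    incisor = np.array([12.0, 3.5, -40.0])
    T = rotation_about_point(rot_z(np.deg2rad(5.0)), incisor)
    print(T @ np.append(incisor, 1.0))   # the landmark itself stays where it is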
NASA Astrophysics Data System (ADS)
Krotkov, Eric; Simmons, Reid; Whittaker, William
1992-02-01
This report describes progress in research on an autonomous robot for planetary exploration performed during 1991 at the Robotics Institute, Carnegie Mellon University. The report summarizes the achievements during calendar year 1991, and lists personnel and publications. In addition, it includes several papers resulting from the research. Research in 1991 focused on understanding the unique capabilities of the Ambler mechanism and on autonomous walking in rough, natural terrain. We also designed a sample acquisition system, and began to configure a successor to the Ambler.
Toward Autonomous Multi-floor Exploration: Ascending Stairway Localization and Modeling
2013-03-01
robots have traditionally been restricted to single floors of a building or outdoor areas free of abrupt elevation changes such as curbs and stairs ...solution to this problem and is motivated by the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor...parameters of the stairways, the robot could plan a path that traverses the stairs in order to explore the frontier at other elevations that were previously
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
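The abstract does not state the estimator, but a common way to localize a transmitter from a handful of RSS samples without prior calibration is to fit the access-point position and a log-distance path-loss model jointly by nonlinear least squares. The sketch below illustrates that generic approach with scipy; the starting values and the joint-fit formulation are assumptions, not the authors' method.

    import numpy as np
    from scipy.optimize import least_squares

    def locate_access_point(positions, rss_dbm):
        """Fit AP position (x, y), reference power P0 and path-loss exponent n to RSS samples
        using the log-distance model RSS(d) = P0 - 10 * n * log10(d).
        positions : (N, 2) robot sample positions [m];  rss_dbm : (N,) RSS samples [dBm].
        """
        positions = np.asarray(positions, float)
        rss_dbm = np.asarray(rss_dbm, float)

        def residuals(theta):
            x, y, p0, n = theta
            d = np.linalg.norm(positions - np.array([x, y]), axis=1) + 1e-6
            return p0 - 10.0 * n * np.log10(d) - rss_dbm

        start = np.array([positions[:, 0].mean(), positions[:, 1].mean(), -40.0, 2.0])
        return least_squares(residuals, start).x[:2]   # estimated AP coordinates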
Automation and robotics technology for intelligent mining systems
NASA Technical Reports Server (NTRS)
Welsh, Jeffrey H.
1989-01-01
The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.
Science, technology and the future of small autonomous drones.
Floreano, Dario; Wood, Robert J
2015-05-28
We are witnessing the advent of a new era of robots - drones - that can autonomously fly in natural and man-made environments. These robots, often associated with defence applications, could have a major impact on civilian tasks, including transportation, communication, agriculture, disaster mitigation and environment preservation. Autonomous flight in confined spaces presents great scientific and technical challenges owing to the energetic cost of staying airborne and to the perceptual intelligence required to negotiate complex environments. We identify scientific and technological advances that are expected to translate, within appropriate regulatory frameworks, into pervasive use of autonomous drones for civilian applications.
A design strategy for autonomous systems
NASA Technical Reports Server (NTRS)
Forster, Pete
1989-01-01
Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified; namely, handling multiple and variable data sources containing overlapping information and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is extracted from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.
NASA Technical Reports Server (NTRS)
Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.
1994-01-01
A Modular Autonomous Robotic System (MARS) is being developed, consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control, together with support for a modular payload for multiple applications. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control, including teleoperated, continuous guidepath following, periodic guidepath following, absolute position autonomous navigation, and relative position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.
Rice-obot 1: An intelligent autonomous mobile robot
NASA Technical Reports Server (NTRS)
Defigueiredo, R.; Ciscon, L.; Berberian, D.
1989-01-01
The Rice-obot I is the first in a series of Intelligent Autonomous Mobile Robots (IAMRs) being developed at Rice University's Cooperative Intelligent Mobile Robots (CIMR) lab. The Rice-obot I is mainly designed to be a testbed for various robotic and AI techniques, and a platform for developing intelligent control systems for exploratory robots. Researchers present the need for a generalized environment capable of combining all of the control, sensory and knowledge systems of an IAMR. They introduce Lisp-Nodes as such a system, and develop the basic concepts of nodes, messages and classes. Furthermore, they show how the control system of the Rice-obot I is implemented as sub-systems in Lisp-Nodes.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Autonomous Evolution of Dynamic Gaits with Two Quadruped Robots
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.; Takamura, Seichi; Yamamoto, Takashi; Fujita, Masahiro
2004-01-01
A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First we show the evolution of a pace and trot gait on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths per minute. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.
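A generic on-board evolutionary loop of the kind described can be sketched as follows: the robot walks each candidate gait-parameter vector, scores it from its own sensors, and refills the population with mutated copies of the better half. The population size, mutation scale, and the evaluate_on_robot callback are placeholders, not values or code from the paper.

    import random

    def evolve_gait(evaluate_on_robot, n_params=12, pop_size=10, generations=50, mutation_std=0.05):
        """Minimal truncation-selection evolutionary loop for gait parameters in [0, 1].
        evaluate_on_robot : callable(params) -> measured speed, computed by the robot
                            itself (e.g. from odometry) while walking with these parameters.
        """
        population = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=evaluate_on_robot, reverse=True)
            parents = scored[:pop_size // 2]          # keep the faster half
            population = list(parents)
            while len(population) < pop_size:         # refill with mutated copies
                parent = random.choice(parents)
                population.append([min(1.0, max(0.0, g + random.gauss(0.0, mutation_std)))
                                   for g in parent])
        return max(population, key=evaluate_on_robot)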
An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry
NASA Astrophysics Data System (ADS)
Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro
This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning only by dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive by utilizing a particle filter on a 2D grid map generated by projecting 3D points on to the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
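The core of gyro-assisted odometry is to take translation from the wheel encoders but heading change from the gyro, whose drift over a step is usually far smaller than the error of encoder-derived rotation. A minimal dead-reckoning step under that reading (not the authors' exact formulation) looks like this:

    import math

    def gyro_assisted_odometry(pose, d_left, d_right, gyro_rate, dt):
        """One dead-reckoning step for a differential-drive robot.
        pose            : (x, y, theta) in metres / radians
        d_left, d_right : wheel travel over the step from the encoders [m]
        gyro_rate       : yaw rate from the gyro [rad/s], used instead of
                          the noisier encoder-derived rotation
        """
        x, y, theta = pose
        ds = 0.5 * (d_left + d_right)        # translation from the encoders
        dtheta = gyro_rate * dt              # rotation from the gyro
        x += ds * math.cos(theta + 0.5 * dtheta)
        y += ds * math.sin(theta + 0.5 * dtheta)
        return (x, y, theta + dtheta)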
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The University of Waterloo Robotics Team, from Canada, prepares to place their robot on the start platform during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
The University of Waterloo Robotics Team, from Ontario, Canada, prepares their robot for the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team from the University of Waterloo is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Yan, Xiaodan
2010-01-01
The current study investigated the functional connectivity of the primary sensory system with resting state fMRI and applied this knowledge to the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. Dissociation was found within the primary sensory system, in that the olfactory cortex and the somatosensory cortex were strongly connected to the amygdala whereas the visual cortex and the auditory cortex were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. Such neural architecture inspired the design of a dissociated emergent-response system and a fine-processing system in autonomous humanoid robots, with separate processing units and another consolidation center to coordinate the two systems. Such a design can help autonomous robots to detect and respond quickly to danger, so as to maintain their sustainability and independence.
Learning tactile skills through curious exploration
Pape, Leo; Oddo, Calogero M.; Controzzi, Marco; Cipriani, Christian; Förster, Alexander; Carrozza, Maria C.; Schmidhuber, Jürgen
2012-01-01
We present curiosity-driven, autonomous acquisition of tactile exploratory skills on a biomimetic robot finger equipped with an array of microelectromechanical touch sensors. Instead of building tailored algorithms for solving a specific tactile task, we employ a more general curiosity-driven reinforcement learning approach that autonomously learns a set of motor skills in absence of an explicit teacher signal. In this approach, the acquisition of skills is driven by the information content of the sensory input signals relative to a learner that aims at representing sensory inputs using fewer and fewer computational resources. We show that, from initially random exploration of its environment, the robotic system autonomously develops a small set of basic motor skills that lead to different kinds of tactile input. Next, the system learns how to exploit the learned motor skills to solve supervised texture classification tasks. Our approach demonstrates the feasibility of autonomous acquisition of tactile skills on physical robotic platforms through curiosity-driven reinforcement learning, overcomes typical difficulties of engineered solutions for active tactile exploration and underactuated control, and provides a basis for studying developmental learning through intrinsic motivation in robots. PMID:22837748
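One simple way to express the curiosity signal described above is as learning progress: the reduction in prediction error of an internal forward model after it trains on the latest observation. The sketch below uses an online linear predictor purely as an illustrative stand-in for the paper's formulation; the dimensions and learning rate are arbitrary.

    import numpy as np

    class LearningProgressReward:
        """Intrinsic reward = drop in squared prediction error of a simple forward model
        that predicts the next sensory vector from the current one (online linear model)."""
        def __init__(self, dim, ridge=1e-2, lr=0.05):
            self.W = np.zeros((dim, dim))
            self.ridge, self.lr = ridge, lr

        def step(self, s, s_next):
            err_before = float(np.sum((s_next - self.W @ s) ** 2))
            # One gradient step on the ridge-regularised squared prediction error.
            grad = -2.0 * np.outer(s_next - self.W @ s, s) + 2.0 * self.ridge * self.W
            self.W -= self.lr * grad
            err_after = float(np.sum((s_next - self.W @ s) ** 2))
            return max(0.0, err_before - err_after)   # positive when the model improved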
Concurrent planning and execution for a walking robot
NASA Astrophysics Data System (ADS)
Simmons, Reid
1990-07-01
The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance was increased in excess of 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.
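Overlapping the planning of the next step with execution of the current one is, at bottom, a producer/consumer pipeline. The sketch below shows that generic pattern with a thread and a bounded queue; the real system obtained its concurrency through TCA resources, monitors, temporal constraints, and hierarchical task trees rather than this mechanism, and plan_step and execute_step are hypothetical callbacks.

    import queue, threading

    def walk_pipelined(plan_step, execute_step, n_steps):
        """Plan footstep k+1 while footstep k is executing (simple pipelining sketch)."""
        footsteps = queue.Queue(maxsize=1)   # holds one planned-but-unexecuted step

        def planner():
            for k in range(n_steps):
                footsteps.put(plan_step(k))  # blocks only if the previous step is still queued
            footsteps.put(None)              # sentinel: no more steps

        threading.Thread(target=planner, daemon=True).start()
        while True:
            step = footsteps.get()
            if step is None:
                break
            execute_step(step)               # real-time control runs while the next step is planned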
Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J
2008-05-01
Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. We use here a multi-disciplinary approach at the convergence of neuroscience, dynamical system theory and autonomous robotics, in order to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering avoidance and energy-saving properties, when compared with a simple if-then-else decision rule.
Design of an autonomous exterior security robot
NASA Technical Reports Server (NTRS)
Myers, Scott D.
1994-01-01
This paper discusses the requirements and preliminary design of a robotic vehicle designed to perform autonomous exterior perimeter security patrols around warehouse areas, ammunition supply depots, and industrial parks for the U.S. Department of Defense. The preliminary design allows for the operation of up to eight vehicles in a six kilometer by six kilometer zone with autonomous navigation and obstacle avoidance. In addition to detecting crawling intruders at 100 meters, the system must perform real-time inventory checking and database comparisons using a microwave tag system.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
A team KuuKulgur Robot from Estonia is seen on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team KuuKulgur is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Sam Ortega, NASA program manager of Centennial Challenges, watches as robots attempt the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot retrieves a sample during a demonstration of the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The team AERO robot drives off the starting platform during the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Team Cephal's robot is seen on the starting platform during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Oregon State University Mars Rover Team's robot is seen during level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
Jerry Waechter of team Middleman from Dunedin, Florida, works on their robot named Ro-Bear during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Middleman is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
A robot from the Intrepid Systems team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
A team KuuKulgur robot is seen as it begins the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The team Mountaineers robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Members of the Oregon State University Mars Rover Team prepare their robot to attempt the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Stellar Automation Systems team poses for a picture with their robot after attempting the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot is seen as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
All four of team KuuKulgur's robots are seen as they attempt the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Spectators watch as the team Survey robot conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team Middleman's robot, Ro-Bear, is seen as it starts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The team Mountaineers robot is seen after picking up the sample during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Two of team KuuKulgur's robots are seen as they attempt a rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Members of team Survey follow their robot as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
A team KuuKulgur robot approaches the sample as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot is seen on the starting platform before beginning its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Mountaineers team from West Virginia University watches as their robot attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
The team Survey robot is seen as it conducts a demonstration of the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Team Survey's robot is seen as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
We present a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. Placement of a component within an existing structure in a realistic environment is demonstrated on a two-robot team. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. For adaptability, the system is designed as a behavior-based architecture. For applicability to space-related construction efforts, computation, power, communication, and sensing are minimized, though the techniques developed are also applicable to terrestrial construction tasks.
Autonomous Navigation, Dynamic Path and Work Flow Planning in Multi-Agent Robotic Swarms Project
NASA Technical Reports Server (NTRS)
Falker, John; Zeitlin, Nancy; Leucht, Kurt; Stolleis, Karl
2015-01-01
Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots, called Swarmies, to be used as a ground-based research platform for in-situ resource utilization missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in an unknown environment and return those resources to a central site.
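The abstract above describes the Swarmies' ant-inspired central-place foraging behavior only at a high level. Below is a minimal sketch, in Python, of one way such a forager could be structured: a random-walk search that switches to a return-to-nest behavior once a resource is picked up. The `Forager` class, nest location, step size and the toy world are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch (assumption, not the Swarmies codebase) of an ant-like
# central-place foraging loop: random-walk search, pick up a resource,
# carry it back to the central nest, then search again.
import math
import random

NEST = (0.0, 0.0)          # central collection site (assumed)
STEP = 0.5                 # distance moved per control tick (assumed)

class Forager:
    def __init__(self):
        self.x, self.y = NEST
        self.heading = random.uniform(0, 2 * math.pi)
        self.carrying = False

    def step(self, resources):
        if self.carrying:
            # Head straight back to the nest and drop off the sample.
            dx, dy = NEST[0] - self.x, NEST[1] - self.y
            dist = math.hypot(dx, dy)
            if dist < STEP:
                self.x, self.y = NEST
                self.carrying = False          # delivered
            else:
                self.x += STEP * dx / dist
                self.y += STEP * dy / dist
        else:
            # Correlated random walk until a resource is within reach.
            self.heading += random.gauss(0, 0.3)
            self.x += STEP * math.cos(self.heading)
            self.y += STEP * math.sin(self.heading)
            for r in list(resources):
                if math.hypot(r[0] - self.x, r[1] - self.y) < STEP:
                    resources.remove(r)
                    self.carrying = True
                    break

if __name__ == "__main__":
    resources = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(5)]
    bot = Forager()
    for _ in range(5000):
        bot.step(resources)
    print("resources left:", len(resources))
```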
Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.
Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam
2016-01-01
An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of Saturation and Value components of HSV color space was incorporated that was enhanced further using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high quality images and about 80% for those suffering from poor lighting and/or blood, water and smoke noises. A reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200ms. It was concluded that with further developments, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
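The published procedure combines the Saturation and Value channels of HSV space and uses the constraint that a laparoscopic instrument crosses the image boundary. The sketch below illustrates that general idea with OpenCV; the thresholds, morphology and tip heuristic are assumptions for illustration, not the authors' RoboLens algorithm.

```python
# Rough sketch (assumed thresholds, not the published RoboLens algorithm) of
# marker-free instrument segmentation from the S and V channels of HSV space,
# keeping only blobs that cross the image boundary, as laparoscopic tools do.
import cv2
import numpy as np

def instrument_mask(bgr, s_max=60, v_min=120):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Metallic instruments tend to be bright (high V) and gray (low S).
    candidate = ((s < s_max) & (v > v_min)).astype(np.uint8) * 255
    candidate = cv2.morphologyEx(candidate, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Keep only connected components that touch the image border.
    n, labels = cv2.connectedComponents(candidate)
    rows, cols = candidate.shape
    keep = np.zeros_like(candidate)
    for lbl in range(1, n):
        ys, xs = np.where(labels == lbl)
        if ys.min() == 0 or xs.min() == 0 or ys.max() == rows - 1 or xs.max() == cols - 1:
            keep[labels == lbl] = 255
    return keep

def tip_estimate(mask):
    # Crude tip estimate: the masked pixel farthest from the image border.
    ys, xs = np.where(mask > 0)
    if len(xs) == 0:
        return None
    rows, cols = mask.shape
    border_dist = np.minimum.reduce([ys, xs, rows - 1 - ys, cols - 1 - xs])
    i = int(np.argmax(border_dist))
    return int(xs[i]), int(ys[i])
```

In a camera-holding robot such as the one described, the returned tip coordinate would then drive the pan-tilt controller so that the tip stays centered in the image.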
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
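The abstract highlights a viscous friction-like term that aligns neighbouring agents' velocities. The sketch below shows that term in simplified form, without the communication delays, sensor noise and inertial limits the paper models; the gains, interaction radius and integration scheme are assumptions.

```python
# Simplified sketch of the viscous friction-like alignment term described
# above: each agent feels an "acceleration" that relaxes its velocity toward
# the velocities of neighbours within radius R. Gains and radii are assumed.
import numpy as np

def alignment_accel(pos, vel, R=10.0, c_frict=0.5):
    """pos, vel: (N, 2) arrays of agent positions and velocities."""
    n = len(pos)
    acc = np.zeros_like(vel)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neigh = (d < R) & (d > 0)
        if neigh.any():
            # Viscous-friction-like term: relax toward the mean neighbour velocity.
            acc[i] = c_frict * (vel[neigh].mean(axis=0) - vel[i])
    return acc

# One explicit Euler integration of a flock of 20 agents.
rng = np.random.default_rng(0)
pos = rng.uniform(-20, 20, size=(20, 2))
vel = rng.normal(0, 1, size=(20, 2))
dt = 0.05
for _ in range(1000):
    vel += dt * alignment_accel(pos, vel)
    pos += dt * vel
print("velocity spread after alignment:", vel.std(axis=0))
```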
Open Issues in Evolutionary Robotics.
Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.
Autonomous Realtime Threat-Hunting Robot (ARTHR)
Idaho National Laboratory - David Bruemmer, Curtis Nielsen
2017-12-09
Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats. To learn more, visit
New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots
Gonzalez-de-Soto, Mariano; Pajares, Gonzalo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976
New trends in robotics for agriculture: integration and assessment of a real fleet of robots.
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.
Developing operation algorithms for vision subsystems in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Shikhman, M. V.; Shidlovskiy, S. V.
2018-05-01
The paper analyzes algorithms for selecting keypoints on the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
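The combination described, a histogram-of-oriented-gradients descriptor classified by a support vector machine, is available off the shelf in OpenCV with a pretrained pedestrian detector. The sketch below shows that generic HOG + SVM pipeline; it is illustrative only and is not the authors' implementation, and the camera index and detector parameters are assumptions.

```python
# Generic sketch of the HOG + linear-SVM pedestrian detector combination the
# abstract describes, using OpenCV's pretrained default people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    # Returns bounding boxes (x, y, w, h) paired with SVM confidence weights.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return list(zip(rects, weights))

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # robot camera, index assumed
    ok, frame = cap.read()
    if ok:
        for (x, y, w, h), score in detect_people(frame):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detections.png", frame)
    cap.release()
```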
Bourbakis, N G
1997-01-01
This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic, unknown or known navigation space. In a previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they move anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability to detect other moving objects in the same free navigation space and to determine the other objects' perceived size, velocity and direction. Based on these assumptions, a traffic priority language is needed for each robot, making it able to decide during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic priority alphabet and rules that compose patterns of corridors for the application of the traffic priority rules.
The Adam and Eve Robot Scientists for the Automated Discovery of Scientific Knowledge
NASA Astrophysics Data System (ADS)
King, Ross
A Robot Scientist is a physically implemented robotic system that applies techniques from artificial intelligence to execute cycles of automated scientific experimentation. A Robot Scientist can automatically execute cycles of hypothesis formation, selection of efficient experiments to discriminate between hypotheses, execution of experiments using laboratory automation equipment, and analysis of results. The motivation for developing Robot Scientists is to better understand science, and to make scientific research more efficient. The Robot Scientist `Adam' was the first machine to autonomously discover scientific knowledge: both to form and to experimentally confirm novel hypotheses. Adam worked in the domain of yeast functional genomics. The Robot Scientist `Eve' was originally developed to automate early-stage drug development, with specific application to neglected tropical diseases such as malaria, African sleeping sickness, etc. We are now adapting Eve to work on cancer. We are also teaching Eve to autonomously extract information from the scientific literature.
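The closed loop described above, form hypotheses, choose the experiment that best discriminates between them, run it, and prune the inconsistent hypotheses, can be illustrated with a toy sketch. Everything below (the hypothesis encoding, the yes/no "assays" and the splitting heuristic) is a placeholder assumption, not the Adam or Eve systems.

```python
# Illustrative sketch of the hypothesis-experiment cycle described above:
# keep a set of candidate hypotheses, pick the experiment whose outcomes best
# discriminate between them, "run" it, and discard inconsistent hypotheses.
import random

hypotheses = set(range(8))                      # candidate hypotheses (toy encoding)
true_hypothesis = random.choice(tuple(hypotheses))
experiments = [lambda h, b=b: (h >> b) & 1 for b in range(3)]  # yes/no assays (toy)

def discrimination(exp, hyps):
    # Prefer experiments that split the surviving hypotheses most evenly.
    positives = sum(exp(h) for h in hyps)
    return min(positives, len(hyps) - positives)

while len(hypotheses) > 1:
    exp = max(experiments, key=lambda e: discrimination(e, hypotheses))
    outcome = exp(true_hypothesis)              # stand-in for a lab experiment
    hypotheses = {h for h in hypotheses if exp(h) == outcome}

print("surviving hypothesis:", hypotheses, "truth:", true_hypothesis)
```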
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Russel Howe of team Survey, center, works on a laptop to prepare the team's robot for a demonstration run after it failed to leave the starting platform during its attempt at the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Russel Howe of team Survey speaks with Sample Return Robot Challenge staff members after the team's robot failed to leave the starting platform during its attempt at the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Kenneth Stafford, Assistant Director of Robotics Engineering and Director of the Robotics Resource Center at the Worcester Polytechnic Institute (WPI), verifies the location of the target sample during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
[Service robots in elderly care. Possible application areas and current state of developments].
Graf, B; Heyer, T; Klein, B; Wallhoff, F
2013-08-01
The term "Service robotics" describes semi- or fully autonomous technical systems able to perform services useful to the well-being of humans. Service robots have the potential to support and disburden both persons in need of care as well as nursing care staff. In addition, they can be used in prevention and rehabilitation in order to reduce or avoid the need for help. Products currently available to support people in domestic environments are mainly cleaning or remote-controlled communication robots. Examples of current research activities are the (further) development of mobile robots as advanced communication assistants or the development of (semi) autonomous manipulation aids and multifunctional household assistants. Transport robots are commonly used in many hospitals. In nursing care facilities, the first evaluations have already been made. So-called emotional robots are now sold as products and can be used for therapeutic, occupational, or entertainment activities.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Sam Ortega, NASA program manager for Centennial Challenges, is seen during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Manifold traversing as a model for learning control of autonomous robots
NASA Technical Reports Server (NTRS)
Szakaly, Zoltan F.; Schenker, Paul S.
1992-01-01
This paper describes a recipe for the construction of control systems that support complex machines such as multi-limbed/multi-fingered robots. The robot has to execute a task under varying environmental conditions and it has to react reasonably when previously unknown conditions are encountered. Its behavior should be learned and/or trained as opposed to being programmed. The paper describes one possible method for organizing the data that the robot has learned by various means. This framework can accept useful operator input even if it does not fully specify what to do, and can combine knowledge from autonomous, operator assisted and programmed experiences.
Non-destructive inspection in industrial equipment using robotic mobile manipulation
NASA Astrophysics Data System (ADS)
Maurtua, Iñaki; Susperregi, Loreto; Ansuategui, Ander; Fernández, Ane; Ibarguren, Aitor; Molina, Jorge; Tubio, Carlos; Villasante, Cristobal; Felsch, Torsten; Pérez, Carmen; Rodriguez, Jorge R.; Ghrissi, Meftah
2016-05-01
The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (climbing robots). The industrial objective has been to provide a means to help measure several physical parameters at multiple points with autonomous robots that are able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extension (e.g. a 50 MW thermal solar plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes and a 140 m high tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, localization and planning algorithms to manage navigation over huge extensions, and (2) non-destructive inspection operations: thermography-based detection algorithms to provide automatic inspection abilities to the robots.
Motor-response learning at a process control panel by an autonomous robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; de Saussure, G.; Lyness, E.
1988-01-01
The Center for Engineering Systems Advanced Research (CESAR) was founded at Oak Ridge National Laboratory (ORNL) by the Department of Energy's Office of Energy Research/Division of Engineering and Geoscience (DOE-OER/DEG) to conduct basic research in the area of intelligent machines. Therefore, researchers at the CESAR Laboratory are engaged in a variety of research activities in the field of machine learning. In this paper, we describe our approach to a class of machine learning which involves motor response acquisition using feedback from trial-and-error learning. Our formulation is being experimentally validated using an autonomous robot, learning tasks of control panel monitoring and manipulation to effect process control. The CLIPS Expert System and the associated knowledge base used by the robot in the learning process, which reside in a hypercube computer aboard the robot, are described in detail. Benchmark testing of the learning process on a robot/control panel simulation system consisting of two intercommunicating computers is presented, along with results of sample problems used to train and test the expert system. These data illustrate machine learning and the resulting performance improvement in the robot for problems similar to, but not identical with, those on which the robot was trained. Conclusions are drawn concerning the learning problems, and implications for future work on machine learning for autonomous robots are discussed. 16 refs., 4 figs., 1 tab.
Multi-agent robotic systems and applications for satellite missions
NASA Astrophysics Data System (ADS)
Nunes, Miguel A.
A revolution in the space sector is happening. It is expected that in the next decade there will be more satellites launched than in the previous sixty years of space exploration. Major challenges are associated with this growth of space assets, such as the autonomy and management of large groups of satellites, in particular small satellites. There are two main objectives for this work. First, a flexible and distributed software architecture is presented to expand the possibilities of spacecraft autonomy and in particular autonomous motion in attitude and position. The approach taken is based on the concept of distributed software agents, also referred to as a multi-agent robotic system. Agents are defined as software programs that are social, reactive and proactive to autonomously maximize the chances of achieving the set goals. Part of the work is to demonstrate that a multi-agent robotic system is a feasible approach for different problems of autonomy such as satellite attitude determination and control and autonomous rendezvous and docking. The second main objective is to develop a method to optimize multi-satellite configurations in space, also known as satellite constellations. This automated method generates new optimal mega-constellation designs for Earth observations and fast revisit times on large ground areas. The optimal satellite constellation can be used by researchers as the baseline for new missions. The first contribution of this work is the development of a new multi-agent robotic system for distributing the attitude determination and control subsystem for HiakaSat. The multi-agent robotic system is implemented and tested on the satellite hardware-in-the-loop testbed that simulates a representative space environment. The results show that the newly proposed system for this particular case achieves an equivalent control performance when compared to the monolithic implementation. In terms of computational efficiency, it is found that the multi-agent robotic system has a consistently lower CPU load of 0.29 +/- 0.03 compared to 0.35 +/- 0.04 for the monolithic implementation, a 17.1 % reduction. The second contribution of this work is the development of a multi-agent robotic system for the autonomous rendezvous and docking of multiple spacecraft. To compute the maneuvers, guidance, navigation and control algorithms are implemented as part of the multi-agent robotic system. The navigation and control functions are implemented using existing algorithms, but one important contribution of this section is the introduction of a new six-degrees-of-freedom guidance method which is part of the guidance, navigation and control architecture. This new method is an explicit solution to the guidance problem, and is particularly useful for real-time guidance of attitude and position, as opposed to typical guidance methods which are based on numerical solutions and are therefore computationally intensive. A simulation scenario is run for docking four CubeSats deployed radially from a launch vehicle. Considering fully actuated CubeSats, the simulations show docking maneuvers that are successfully completed within 25 minutes, which is approximately 30% of a full orbital period in low Earth orbit. The final section investigates the problem of optimization of satellite constellations for fast revisit time, and introduces a new method to generate different constellation configurations that are evaluated with a genetic algorithm. Two case studies are presented.
The first is the optimization of a constellation for rapid coverage of the oceans of the globe in 24 hours or less. Results show that for an 80 km sensor swath width 50 satellites are required to cover the oceans with a 24 hour revisit time. The second constellation configuration study focuses on the optimization for the rapid coverage of the North Atlantic Tracks for air traffic monitoring in 3 hours or less. The results show that for a fixed swath width of 160 km and for a 3 hour revisit time 52 satellites are required.
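The constellation study evaluates candidate configurations with a genetic algorithm. The sketch below shows the bare GA machinery over a few constellation parameters; the fitness function is a crude placeholder, since the real evaluation of revisit time over the target region requires an orbit propagator and coverage model that are not reproduced here. All parameter ranges and gains are assumptions.

```python
# Toy sketch of a genetic algorithm over constellation design parameters
# (number of planes, satellites per plane, inclination). The fitness function
# is a placeholder for the real revisit-time evaluation, which is omitted.
import random

def random_design():
    return {"planes": random.randint(1, 12),
            "sats_per_plane": random.randint(1, 12),
            "inclination_deg": random.uniform(30, 98)}

def fitness(d):
    # Placeholder: reward a coverage proxy and penalize constellation size.
    total = d["planes"] * d["sats_per_plane"]
    coverage_proxy = total * (d["inclination_deg"] / 98.0)
    return coverage_proxy - 0.5 * total

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(d, rate=0.2):
    d = dict(d)
    if random.random() < rate:
        d["inclination_deg"] = min(98.0, max(30.0, d["inclination_deg"] + random.gauss(0, 5)))
    if random.random() < rate:
        d["planes"] = random.randint(1, 12)
    return d

pop = [random_design() for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # keep the best designs
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]
print("best design:", max(pop, key=fitness))
```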
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; Brooks, Alexander J.-W.; Tarbell, Mark A.; Dohm, James M.
2017-05-01
Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous (e.g., theaters of operation, disaster-stricken areas, etc.) or inaccessible operational areas (e.g., planetary surfaces, space). Such future missions will require increasing degrees of operational autonomy, especially when following up on transient events. Operational autonomy encompasses: (1) automatic characterization of operational areas from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and data gathering; (3) automatic feature extraction including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; and (5) subsequent automatic (re-)deployment and navigation of robotic agents. This paper reports on progress towards several aspects of autonomous C4ISR systems, including: a Caltech-patented and NASA award-winning multi-tiered mission paradigm, robotic platform development (air, ground, water-based), robotic behavior motifs as the building blocks for autonomous tele-commanding, and autonomous decision making based on a Caltech-patented framework comprising sensor-data fusion (feature vectors), anomaly detection (clustering and principal component analysis), and target prioritization (hypothetical probing).
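One ingredient named above is anomaly detection via principal component analysis over fused sensor feature vectors. The sketch below shows the generic idea, scoring samples by their reconstruction error after projection onto the top principal components. It only illustrates the technique mentioned in the abstract; it is not the Caltech-patented framework, and the component count, threshold and synthetic data are assumptions.

```python
# Small sketch of PCA-based anomaly scoring over fused sensor feature vectors:
# project onto the top principal components and flag samples whose
# reconstruction error is unusually large.
import numpy as np

def pca_anomaly_scores(X, n_components=3):
    """X: (n_samples, n_features) matrix of feature vectors."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Principal directions from the SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]                       # (k, n_features)
    recon = Xc @ W.T @ W                        # project and reconstruct
    return np.linalg.norm(Xc - recon, axis=1)   # anomaly score per sample

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(200, 10))
odd = rng.normal(6, 1, size=(3, 10))            # injected anomalies
scores = pca_anomaly_scores(np.vstack([normal, odd]))
threshold = scores[:200].mean() + 3 * scores[:200].std()
print("flagged samples:", np.where(scores > threshold)[0])
```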
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Team AERO, from the Worcester Polytechnic Institute (WPI), transports their robot to the competition field for level one of the competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Robots that will be competing in the level one competition are seen as they sit in impound prior to the start of competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Ahti Heinla, left, and Sulo Kallas, right, from Estonia, prepare team KuuKulgur's robot for the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
A sample can be seen on the competition field as the team Survey robot conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Jascha Little of team Survey is seen as he follows the team's robot as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The University of California Santa Cruz Rover Team poses for a picture with their robot after attempting the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The University of California Santa Cruz Rover Team's robot is seen prior to starting its second attempt at the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The Oregon State University Mars Rover Team poses for a picture with their robot following their attempt at the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Jim Rothrock, left, and Carrie Johnson, right, of the Wunderkammer Laboratory team pose for a picture with their robot after attempting the level one competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
The Oregon State University Mars Rover Team follows their robot on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Oregon State University Mars Rover Team is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
Jerry Waechter of team Middleman from Dunedin, Florida, speaks about his team's robot, Ro-Bear, as it makes its attempt at the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
The Oregon State University Mars Rover Team, from Corvallis, Oregon, follows their robot on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Oregon State University Mars Rover Team is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
NASA Astrophysics Data System (ADS)
Narayan Ray, Dip; Majumder, Somajyoti
2014-07-01
Several attempts have been made by researchers around the world to develop autonomous exploration techniques for robots, but developing such algorithms for unstructured and unknown environments has always been an important issue. Human-like gradual Multi-agent Q-learning (HuMAQ) is a technique developed for autonomous robotic exploration in unknown (and even unimaginable) environments. It has been successfully implemented in a multi-agent, single-robot system. HuMAQ uses the concept of the Subsumption architecture, a well-known behaviour-based architecture, for prioritizing the agents of the multi-agent system, and executes only the most common action out of all the different actions recommended by the agents. Instead of using a new state-action table (Q-table) each time, HuMAQ reuses the immediate past table for efficient and faster exploration. The proof of learning has been established both theoretically and practically. HuMAQ has the potential to be used in different and difficult situations as well as applications. The same architecture has been modified for multi-robot exploration in an environment. Apart from all other existing agents used in the single-robot system, agents for inter-robot communication and coordination/co-operation with other similar robots have been introduced in the present research. The current work uses a series of indigenously developed identical autonomous robotic systems, communicating with each other through the ZigBee protocol.
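The HuMAQ internals are not reproduced in the abstract; the sketch below shows the generic ingredients it combines, tabular Q-learning agents whose greedy recommendations are arbitrated into a single executed action (here by majority vote, loosely following the "most common action" description above). The state encoding, reward, environment stub and agent roles are all assumptions for illustration.

```python
# Generic sketch of tabular Q-learning agents whose recommendations are
# arbitrated by majority vote, loosely following the HuMAQ description above.
import random
from collections import Counter, defaultdict

ACTIONS = ["forward", "left", "right", "stop"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1          # learning rate, discount, exploration

class Agent:
    def __init__(self):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def recommend(self, state):
        if random.random() < EPS:
            return random.choice(ACTIONS)   # occasional exploration
        return max(self.q[state], key=self.q[state].get)

    def update(self, s, a, r, s_next):
        best_next = max(self.q[s_next].values())
        self.q[s][a] += ALPHA * (r + GAMMA * best_next - self.q[s][a])

def arbitrate(agents, state):
    votes = Counter(agent.recommend(state) for agent in agents)
    return votes.most_common(1)[0][0]       # execute the most common action

def env_step(state, action):
    # Dummy environment standing in for the real robot and its sensors (assumed).
    reward = 1.0 if action == "forward" else -0.1
    return reward, state

agents = [Agent() for _ in range(4)]        # e.g. obstacle, goal, wander, comms agents
state = "s0"
for _ in range(1000):
    action = arbitrate(agents, state)
    reward, state_next = env_step(state, action)
    for agent in agents:
        agent.update(state, action, reward, state_next)
    state = state_next
print({a: round(agents[0].q["s0"][a], 2) for a in ACTIONS})
```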
Technology transfer: Imaging tracker to robotic controller
NASA Technical Reports Server (NTRS)
Otaguro, M. S.; Kesler, L. O.; Land, Ken; Erwin, Harry; Rhoades, Don
1988-01-01
The transformation of an imaging tracker to a robotic controller is described. A multimode tracker was developed for fire and forget missile systems. The tracker locks on to target images within an acquisition window using multiple image tracking algorithms to provide guidance commands to missile control systems. This basic tracker technology is used with the addition of a ranging algorithm based on sizing a cooperative target to perform autonomous guidance and control of a platform for an Advanced Development Project on automation and robotics. A ranging tracker is required to provide the positioning necessary for robotic control. A simple functional demonstration of the feasibility of this approach was performed and described. More realistic demonstrations are under way at NASA-JSC. In particular, this modified tracker, or robotic controller, will be used to autonomously guide the Man Maneuvering Unit (MMU) to targets such as disabled astronauts or tools as part of the EVA Retriever efforts. It will also be used to control the orbiter's Remote Manipulator Systems (RMS) in autonomous approach and positioning demonstrations. These efforts will also be discussed.
Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D
2009-01-01
Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was applied towards automated movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.
Autonomous stair-climbing with miniature jumping robots.
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
2005-04-01
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.
Embodied cognition for autonomous interactive robots.
Hoffman, Guy
2012-10-01
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.
Socially assistive robotics for post-stroke rehabilitation
Matarić, Maja J; Eriksson, Jon; Feil-Seifer, David J; Winstein, Carolee J
2007-01-01
Background Although there is a great deal of success in rehabilitative robotics applied to patient recovery post stroke, most of the research to date has dealt with providing physical assistance. However, new rehabilitation studies support the theory that not all therapy need be hands-on. We describe a new area, called socially assistive robotics, that focuses on non-contact patient/user assistance. We demonstrate the approach with an implemented and tested post-stroke recovery robot and discuss its potential for effectiveness. Results We describe a pilot study involving an autonomous assistive mobile robot that aids stroke patient rehabilitation by providing monitoring, encouragement, and reminders. The robot navigates autonomously, monitors the patient's arm activity, and helps the patient remember to follow a rehabilitation program. We also show preliminary results from a follow-up study that focused on the role of robot physical embodiment in a rehabilitation context. Conclusion We outline and discuss future experimental designs and factors toward the development of effective socially assistive post-stroke rehabilitation robots. PMID:17309795
NASA Technical Reports Server (NTRS)
Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel
2015-01-01
The use of the cognitive capabilities of humans to help guide the autonomy of robotics platforms in what is typically called "supervised autonomy" is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head, two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.
Imitative Robotic Control: The Puppet Master
2014-07-09
This paper outlines the potential features of a puppet-style control device that allows a mission to be completed in a quick, accurate and efficient manner, and the lessons learned while implementing such a device.
Autonomous intelligent cars: proof that the EPSRC Principles are future-proof
NASA Astrophysics Data System (ADS)
de Cock Buning, Madeleine; de Bruin, Roeland
2017-07-01
Principle 2 of the EPSRC's principles of robotics (AISB workshop on Principles of Robotics, 2016) proves to be future proof when applied to the current state of the art of law and technology surrounding autonomous intelligent cars (AICs). Humans, not AICs, are responsible agents. AICs should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy by design. The paper shows that some legal questions arising from autonomous intelligent driving technology can be answered by the technology itself.
Autonomous mobile robotic system for supporting counterterrorist and surveillance operations
NASA Astrophysics Data System (ADS)
Adamczyk, Marek; Bulandra, Kazimierz; Moczulski, Wojciech
2017-10-01
Contemporary research on mobile robots concerns applications to counterterrorist and surveillance operations. The goal is to develop systems that are capable of supporting the police and special forces by carrying out such operations. The paper deals with a dedicated robotic system for surveillance of large objects such as airports, factories, military bases, and many others. The goal is to trace unauthorised persons who try to enter the guarded area, document the intrusion and report it to the surveillance centre, then warn the intruder with sound messages and eventually subdue him/her by stunning through an acoustic effect of great power. The system consists of several parts. An armoured four-wheeled robot assures the required mobility of the system. The robot is equipped with a set of sensors including a 3D mapping system, IR and video cameras, and microphones. It communicates with the central control station (CCS) by means of a wideband wireless encrypted system. The control system of the robot can operate autonomously or under remote control. In the autonomous mode the robot follows the path planned by the CCS. Once an intruder has been detected, the robot can adapt its plan to track him/her. Furthermore, special procedures for treatment of the intruder are applied, including a warning about the breach of the border of the protected area and incapacitation by an appropriately selected, very loud sound until a patrol of guards arrives. If it gets stuck, the robot can contact the operator, who can remotely solve the problem the robot is faced with.
Leader-follower function for autonomous military convoys
NASA Astrophysics Data System (ADS)
Vasseur, Laurent; Lecointe, Olivier; Dento, Jerome; Cherfaoui, Nourrdine; Marion, Vincent; Morillon, Joel G.
2004-09-01
The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational added value." The paper details the "robotic convoy" theme (named TEL1), whose main purpose is to develop a robotic leader-follower function so that several unmanned vehicles can autonomously follow a teleoperated, autonomous or on-board-driven leader. Two modes have been implemented. Perceptive follower: each autonomous follower anticipates the trajectory of the vehicle in front of it, thanks to dedicated perception equipment. This mode is mainly based on the use of perceptive data, without any communication link between leader and follower (to lower the cost of future mass development and extend the operational capabilities). Delayed follower: the leader records its path and transmits it to the follower; the follower is able to follow the recorded trajectory again at any delayed time. This mode uses localization data obtained from inertial measurements. The paper presents both modes with detailed algorithms and the results obtained from the military acceptance tests performed on wheeled 4x4 vehicles (DARDS French ATD).
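The "delayed follower" mode records the leader's path and replays it later. The sketch below illustrates one common way such a replay could be done, a pure-pursuit-style tracker that steers a simple unicycle model toward a look-ahead point on the recorded path. The kinematic model, gains, look-ahead distance and synthetic leader log are assumptions, not the TEL1 implementation.

```python
# Minimal sketch of replaying a recorded leader path with a look-ahead
# (pure-pursuit-style) follower on a unicycle model. All values are assumed.
import math

recorded_path = [(t * 0.5, math.sin(t * 0.1)) for t in range(200)]  # leader log (assumed)

def find_target(pose, path, start_idx, lookahead=2.0):
    # Advance along the path and pick the first point at least `lookahead` away.
    x, y, _ = pose
    idx = start_idx
    while idx < len(path) - 1 and math.hypot(path[idx][0] - x, path[idx][1] - y) < lookahead:
        idx += 1
    return idx, path[idx]

def follow_step(pose, target, speed=1.0, dt=0.1, k_heading=2.0):
    x, y, theta = pose
    desired = math.atan2(target[1] - y, target[0] - x)
    err = math.atan2(math.sin(desired - theta), math.cos(desired - theta))
    theta += k_heading * err * dt           # proportional heading control (assumed gain)
    x += speed * math.cos(theta) * dt
    y += speed * math.sin(theta) * dt
    return (x, y, theta)

pose, idx = (0.0, -1.0, 0.0), 0
for _ in range(600):
    idx, target = find_target(pose, recorded_path, idx)
    pose = follow_step(pose, target)
print("final follower pose:", tuple(round(v, 2) for v in pose))
```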
Welding torch trajectory generation for hull joining using autonomous welding mobile robot
NASA Astrophysics Data System (ADS)
Hascoet, J. Y.; Hamilton, K.; Carabin, G.; Rauch, M.; Alonso, M.; Ares, E.
2012-04-01
Shipbuilding processes involve highly dangerous manual welding operations. Welding of ship hulls presents a hazardous environment for workers. This paper describes a new robotic system, developed by the SHIPWELD consortium, that moves autonomously on the hull and automatically executes the required welding processes. Specific focus is placed on the trajectory control of such a system and forms the basis for the discussion in this paper. It includes a description of the robotic hardware design as well as some methodology used to establish the torch trajectory control.
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
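The distributed recovery block mentioned above generalizes the classic recovery block pattern: a primary routine and one or more alternates guarded by an acceptance test. The single-node sketch below shows only that basic pattern; the routine names, the acceptance test and the controller example are illustrative assumptions, not the paper's distributed architecture.

```python
# Simple single-node sketch of the recovery block pattern: try the primary
# routine, check the result with an acceptance test, and fall back to an
# alternate routine if the test fails or the primary raises an exception.
def recovery_block(primary, alternates, acceptance_test, *args):
    for routine in (primary, *alternates):
        try:
            result = routine(*args)
        except Exception:
            continue                        # treat an exception as a failed try
        if acceptance_test(result):
            return result
    raise RuntimeError("all alternates failed the acceptance test")

# Example: compute a joint command, falling back to a conservative controller.
def primary_controller(error):
    return 5.0 * error                      # aggressive gain; may exceed limits

def safe_controller(error):
    return max(-1.0, min(1.0, 0.5 * error))

def within_limits(cmd, limit=2.0):          # acceptance test: command must be sane
    return abs(cmd) <= limit

print(recovery_block(primary_controller, [safe_controller], within_limits, 0.8))
```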
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
2014-03-14
CAPE CANAVERAL, Fla. – A child gets an up-close look at Charli, an autonomous walking robot developed by Virginia Tech Robotics, during the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
NASA Astrophysics Data System (ADS)
Lane, Gerald R.
1999-07-01
This briefing provides an overview of Tank-Automotive Robotics. It covers program overviews, inter-relationships, and the technology challenges of TARDEC-managed unmanned and robotic ground vehicle programs. Specific emphasis is placed on technology developments and approaches to achieve semi-autonomous operation and inherent chassis mobility features. Programs discussed include: Demo III Experimental Unmanned Vehicle (XUV), Tactical Mobile Robotics (TMR), Intelligent Mobility, Commanders Driver Testbed, Collision Avoidance, and the International Ground Robotics Competition (IGRC). Specifically, the paper discusses the unique exterior/outdoor challenges facing the IGRC competing teams and the synergy created between the IGRC and ongoing DoD semi-autonomous Unmanned Ground Vehicle and DoT Intelligent Transportation System programs. Sensor and chassis approaches to meet the IGRC challenges and obstacles are shown and discussed, and shortfalls in performance against the IGRC challenges are identified.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
The University of California Santa Cruz Rover Team prepares their rover for the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Worcester Polytechnic Institute (WPI) President Laurie Leshin, speaks at a breakfast opening the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
David Miller, NASA Chief Technologist, speaks at a breakfast opening the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-11
The entrance to Institute Park is seen during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Sam Ortega, NASA Centennial Challenges Program Manager, speaks at a breakfast opening the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
James Leopore, of team Fetch, from Alexandria, Virginia, speaks with judges as he prepares for the NASA 2014 Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Fetch is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Social Robotics in Therapy of Apraxia of Speech
Alonso-Martín, Fernando
2018-01-01
Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies, in which a robot is able to perform a therapy session partially or fully autonomously, by endowing a social robot with the ability to assist therapists in apraxia of speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot performs the different steps of the therapy autonomously using multimodal interaction. PMID:29713440
A robotic vision system to measure tree traits
USDA-ARS?s Scientific Manuscript database
The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...
Speed control for a mobile robot
NASA Astrophysics Data System (ADS)
Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. The speed control of the traction motor is essential for safe operation of a mobile robot. The challenges of autonomous operation of a vehicle require safe, runaway-free and collision-free operation. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller. The traction motor is controlled via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outside course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.
Planning and Execution: The Spirit of Opportunity for Robust Autonomous Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola
2004-01-01
One of the most exciting endeavors pursued by humankind is the search for life in the Solar System and the Universe at large. NASA is leading this effort by designing, deploying and operating robotic systems that will reach planets, planet moons, asteroids and comets searching for water, organic building blocks and signs of past or present microbial life. None of these missions will be achievable without substantial advances in the design, implementation and validation of autonomous control agents. These agents must be capable of robustly controlling a robotic explorer in a hostile environment with very limited or no communication with Earth. The talk focuses on work pursued at the NASA Ames Research Center, ranging from basic research on algorithms to deployed mission support systems. We will start by discussing how planning and scheduling technology derived from the Remote Agent experiment is being used daily in the operations of the Spirit and Opportunity rovers. Planning and scheduling is also used as the fundamental paradigm at the core of our research in real-time autonomous agents. In particular, we will describe our efforts in the Intelligent Distributed Execution Architecture (IDEA), a multi-agent real-time architecture that exploits artificial intelligence planning as the core reasoning engine of an autonomous agent. We will also describe how the issue of plan robustness at execution can be addressed by novel constraint propagation algorithms capable of giving the tightest exact bounds on resource consumption over all possible executions of a flexible plan.
2013-10-29
...based on contextual information, 3) develop vision-based techniques for learning of contextual information, and detection and identification of... that takes into account many possible contexts. The probability distributions of these contexts will be learned from existing databases on common sense...
A Face Attention Technique for a Robot Able to Interpret Facial Expressions
NASA Astrophysics Data System (ADS)
Simplício, Carlos; Prado, José; Dias, Jorge
Automatic facial expression recognition using vision is an important subject for human-robot interaction. Here we propose a human face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware is composed of a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. By using the output of this module, the autonomous agent always keeps the human face targeted frontally. To accomplish this, the robot platform performs an arc centered at the human, and the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant to the current one at a lower level of the network. Moreover, both positive and negative evidence is used to recognize facial expressions.
Control of a free-flying robot manipulator system
NASA Technical Reports Server (NTRS)
Alexander, H.
1986-01-01
The development and testing of control strategies for self-contained, autonomous free-flying space robots are discussed. Such a robot would perform operations in space similar to those currently handled by astronauts during extravehicular activity (EVA). Use of robots should reduce the expense and danger attending EVA, both by providing assistance to astronauts and in many cases by eliminating altogether the need for human EVA, thus greatly enhancing the scope and flexibility of space assembly and repair activities. The focus of the work is to develop and carry out a program of research with a series of physical Satellite Robot Simulator Vehicles (SRSVs), two-dimensionally freely mobile laboratory models of autonomous free-flying space robots such as might perform extravehicular functions associated with operation of a space station or repair of orbiting satellites. It is planned, in a later phase, to extend the research to three dimensions by carrying out experiments in the Space Shuttle cargo bay.
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor)
1990-01-01
Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Members of the Mountaineers team from West Virginia University celebrate after their robot returned to the starting platform after picking up the sample during a rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-10
A pair of Worcester Polytechnic Institute (WPI) students walk past a pair of team KuuKulgur's robots on the campus quad, during a final tuneup before the start of competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team KuuKulgur is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.
Deng, Fucheng; Zhu, Xiaorui; He, Chao
2017-09-13
Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is their high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting traversable regions outdoors. In the proposed method, an appearance model based on a multivariate Gaussian is quickly constructed from a sample region in the left image that is adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in real-time navigation applications.
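The appearance-model step above can be approximated in a few lines: fit a multivariate Gaussian to pixels drawn from the adaptively chosen sample region, then threshold the squared Mahalanobis distance of every pixel. The threshold and the use of raw RGB below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def fit_appearance_model(sample_pixels):
    """sample_pixels: (N, 3) array of color values taken from the sample region
    (e.g., the area below the vanishing point bounded by the dominant borders)."""
    mean = sample_pixels.mean(axis=0)
    cov = np.cov(sample_pixels, rowvar=False) + 1e-6 * np.eye(3)   # regularized covariance
    return mean, np.linalg.inv(cov)

def traversable_mask(image, mean, cov_inv, threshold=9.0):
    """Label pixels whose squared Mahalanobis distance to the model is below the threshold."""
    diff = image.reshape(-1, 3).astype(float) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return (d2 < threshold).reshape(image.shape[:2])
```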
NASA Astrophysics Data System (ADS)
Shah, Hitesh K.; Bahl, Vikas; Martin, Jason; Flann, Nicholas S.; Moore, Kevin L.
2002-07-01
In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) has been funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One of the several outgrowths of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands that are to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (the A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include sequencing the pending tasks or initiating a re-planning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
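The planning function is described only as a graph-based heuristic search using the A* algorithm; a generic A* skeleton of that kind is sketched below, with the successor, cost, and heuristic functions left as placeholders rather than CSOIS's grammar-specific definitions.

```python
import heapq
import itertools

def a_star(start, goal, neighbors, cost, heuristic):
    """Generic A* over a graph of states.
    neighbors(s) -> iterable of successor states (e.g., grammar actions applied to s);
    cost(a, b)   -> edge cost;  heuristic(s) -> admissible estimate of cost-to-goal."""
    tie = itertools.count()                      # tie-breaker so states are never compared
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                          # flexible script of high-level commands
        for nxt in neighbors(state):
            g2 = g + cost(state, nxt)
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]))
    return None                                  # no plan; the STC would trigger re-planning
```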
An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.
1997-09-01
The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles. The approach is to... autonomous vehicles. Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at
ODYSSEUS autonomous walking robot: The leg/arm design
NASA Technical Reports Server (NTRS)
Bourbakis, N. G.; Maas, M.; Tascillo, A.; Vandewinckel, C.
1994-01-01
ODYSSEUS is an autonomous walking robot that makes use of three wheels and three legs for its movement in the free navigation space. More specifically, it uses its autonomous wheels to move around in environments where the surface is smooth. However, in the case of small obstacles, stairs, or minor unevenness in the navigation environment, the robot uses both wheels and legs to travel efficiently. In this paper we present the detailed hardware design and the simulated behavior of the extended leg/arm part of the robot, since it plays a very significant role in the robot's actions (movement, object selection, etc.). In particular, the leg/arm consists of three major parts. The first part is a pipe attached to the robot base with a flexible 3-D joint; this pipe has a rotated bar as an extended part, which terminates in a 3-D flexible joint. The second part of the leg/arm is also a pipe similar to the first, and its extended bar ends at a 2-D joint. The last part of the leg/arm is a clip-hand, which is used for picking up small, lightweight objects and, when in 'closed' mode, serves as a supporting part of the robot leg. The entire leg/arm is controlled and synchronized by a microcontroller (68HC11) attached to the robot base.
An architecture for an autonomous learning robot
NASA Technical Reports Server (NTRS)
Tillotson, Brian
1988-01-01
An autonomous learning device must solve the example bounding problem, i.e., it must divide the continuous universe into discrete examples from which to learn. We describe an architecture which incorporates an example bounder for learning. The architecture is implemented in the GPAL program. An example run with a real mobile robot shows that the program learns and uses new causal, qualitative, and quantitative relationships.
Embodied Computation: An Active-Learning Approach to Mobile Robotics Education
ERIC Educational Resources Information Center
Riek, L. D.
2013-01-01
This paper describes a newly designed upper-level undergraduate and graduate course, Autonomous Mobile Robots. The course employs active, cooperative, problem-based learning and is grounded in the fundamental computational problems in mobile robotics defined by Dudek and Jenkin. Students receive a broad survey of robotics through lectures, weekly…
NASA's Intelligent Robotics Group
2017-01-06
Shareable video highlighting the Intelligent Robotics Group's 25 years of experience developing tools that allow humans and robots to work as teammates. It highlights the VERVE software, which allows researchers to see a 3D representation of the robot's world, and mentions how Nissan is using a version of VERVE in its autonomous vehicle research.
Robots as Language Learning Tools
ERIC Educational Resources Information Center
Collado, Ericka
2017-01-01
Robots are machines that resemble different forms, usually those of humans or animals, that can perform preprogrammed or autonomous tasks (Robot, n.d.). With the emergence of STEM programs, there has been a rise in the use of robots in educational settings. STEM programs are those where students study science, technology, engineering and…
NASA Astrophysics Data System (ADS)
Laird, John E.
2009-05-01
Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should significantly improve Soar abilities for control of robots. These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling of prior events and situations as well as facts about the world. Reinforcement learning provides the ability of the system to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic and visual representations that are critical to support spatial reasoning. We speculate on the future of unmanned systems and the need for cognitive robotics to support dynamic instruction and taskability.
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
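In a Rao-Blackwellized particle filter of this kind, particles sample candidate robot paths while each particle carries its own landmark estimates conditioned on that path. The heavily reduced single-cycle sketch below illustrates the structure only; the noise model, weighting, and landmark initialization are assumptions and omit the per-landmark EKF update used in full implementations.

```python
import copy
import numpy as np

class Particle:
    def __init__(self, pose):
        self.pose = np.array(pose, dtype=float)   # path hypothesis, current pose (x, y, theta)
        self.landmarks = {}                        # landmark id -> (mean, covariance)
        self.weight = 1.0

def rbpf_update(particles, odometry, observations, motion_noise=0.05):
    """One cycle: sample motion, weight against landmark observations, resample.
    odometry: (3,) pose increment; observations: {landmark_id: (3,) estimated position}."""
    for p in particles:
        p.pose += odometry + np.random.normal(0.0, motion_noise, size=3)
        for lid, z in observations.items():
            if lid in p.landmarks:
                mean, cov = p.landmarks[lid]
                innov = z - mean
                p.weight *= float(np.exp(-0.5 * innov @ np.linalg.solve(cov, innov)))
            else:
                p.landmarks[lid] = (np.array(z, dtype=float), np.eye(3))  # new landmark
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    resampled = [copy.deepcopy(particles[i]) for i in idx]
    for p in resampled:
        p.weight = 1.0
    return resampled
```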
Autonomous undulatory serpentine locomotion utilizing body dynamics of a fluidic soft robot.
Onal, Cagdas D; Rus, Daniela
2013-06-01
Soft robotics offers the unique promise of creating inherently safe and adaptive systems. These systems bring man-made machines closer to the natural capabilities of biological systems. An important requirement to enable self-contained soft mobile robots is an on-board power source. In this paper, we present an approach to create a bio-inspired soft robotic snake that can undulate in a similar way to its biological counterpart using pressure for actuation power, without human intervention. With this approach, we develop an autonomous soft snake robot with on-board actuation, power, computation and control capabilities. The robot consists of four bidirectional fluidic elastomer actuators in series to create a traveling curvature wave from head to tail along its body. Passive wheels between segments generate the necessary frictional anisotropy for forward locomotion. It takes 14 h to build the soft robotic snake, which can attain an average locomotion speed of 19 mm/s.
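The head-to-tail traveling curvature wave mentioned above can be written directly as a phase-shifted sinusoid per segment; the amplitude, period, and wavelength below are illustrative values rather than the paper's.

```python
import math

def segment_curvature(segment_index, t, amplitude=1.0, period=2.0, wavelength=4.0):
    """Desired curvature of one bidirectional segment at time t, forming a wave
    that travels from the head (index 0) toward the tail."""
    phase = 2.0 * math.pi * (t / period - segment_index / wavelength)
    return amplitude * math.sin(phase)

# Example: curvature set-points for the four segments at t = 0.5 s
setpoints = [segment_curvature(i, 0.5) for i in range(4)]
```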
Robotics development for the enhancement of space endeavors
NASA Astrophysics Data System (ADS)
Mauceri, A. J.; Clarke, Margaret M.
Telerobotics and robotics development activities to support NASA's goal of increasing opportunities in space commercialization and exploration are described. Rockwell International's activities center on using robotics to improve efficiency and safety in three related areas: remote control of autonomous systems, automated nondestructive evaluation of aspects of vehicle integrity, and the use of robotics in space vehicle ground reprocessing operations. In the first area, autonomous robotic control, Rockwell is using the control architecture NASREM as the foundation for high-level command of robotic tasks. In the second area, we have demonstrated the use of nondestructive evaluation (using acoustic excitation and laser sensors) to evaluate the integrity of space vehicle surface material bonds, using Orbiter 102 as the test case. In the third area, Rockwell is building an automated version of the present manual tool used for Space Shuttle surface tile re-waterproofing. The tool will be integrated into an orbiter processing robot being developed by a KSC-led team.
Very fast motion planning for highly dexterous-articulated robots
NASA Technical Reports Server (NTRS)
Challou, Daniel J.; Gini, Maria; Kumar, Vipin
1994-01-01
Due to the inherent danger of space exploration, the need for greater use of teleoperated and autonomous robotic systems in space-based applications has long been apparent. Autonomous and semi-autonomous robotic devices have been proposed for carrying out routine functions associated with scientific experiments aboard the shuttle and space station. Finally, research into the use of such devices for planetary exploration continues. To accomplish their assigned tasks, all such autonomous and semi-autonomous devices will require the ability to move themselves through space without hitting themselves or the objects which surround them. In space it is important to execute the necessary motions correctly when they are first attempted because repositioning is expensive in terms of both time and resources (e.g., fuel). Finally, such devices will have to function in a variety of different environments. Given these constraints, a means for fast motion planning to insure the correct movement of robotic devices would be ideal. Unfortunately, motion planning algorithms are rarely used in practice because of their computational complexity. Fast methods have been developed for detecting imminent collisions, but the more general problem of motion planning remains computationally intractable. However, in this paper we show how the use of multicomputers and appropriate parallel algorithms can substantially reduce the time required to synthesize paths for dexterous articulated robots with a large number of joints. We have developed a parallel formulation of the Randomized Path Planner proposed by Barraquand and Latombe. We have shown that our parallel formulation is capable of formulating plans in a few seconds or less on various parallel architectures including: the nCUBE2 multicomputer with up to 1024 processors (nCUBE2 is a registered trademark of the nCUBE corporation), and a network of workstations.
NASA Astrophysics Data System (ADS)
Vestrand, W. T.; Theiler, J.; Wozniak, P. R.
2004-10-01
The existence of rapidly slewing robotic telescopes and fast alert distribution via the Internet is revolutionizing our capability to study the physics of fast astrophysical transients. But the salient challenge that optical time domain surveys must conquer is mining the torrent of data to recognize important transients in a scene full of normal variations. Humans simply do not have the attention span, memory, or reaction time required to recognize fast transients and rapidly respond. Autonomous robotic instrumentation with the ability to extract pertinent information from the data stream in real time will therefore be essential for recognizing transients and commanding rapid follow-up observations while the ephemeral behavior is still present. Here we discuss how the development and integration of three technologies: (1) robotic telescope networks; (2) machine learning; and (3) advanced database technology, can enable the construction of smart robotic telescopes, which we loosely call ``thinking'' telescopes, capable of mining the sky in real time.
Autonomous caregiver following robotic wheelchair
NASA Astrophysics Data System (ADS)
Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary
2011-12-01
In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet needs in an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves side by side with a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver by using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci processor). The captured images are processed using image processing techniques, the results are converted into voltage levels through a MAX 232 level converter and passed serially to the microcontroller unit, and an ultrasonic sensor detects obstacles in front of the robot. The robot has a mode selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to detect obstacles, and in manual mode the keypad is used to operate the wheelchair. The microcontroller unit is programmed in C, and the robot connected to it is controlled according to this code. The robot's several motors are activated by motor drivers, which simply switch the motors on and off according to the control signals from the microcontroller unit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; de Saussure, G.; Spelt, P.F.
1988-01-01
This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a-priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at a process control panel, read and understand the status of the panel's meters and dials, learn the functioning of the panel, and successfully manipulate the control devices of the panel to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.
Lewis, Matthew; Cañamero, Lola
2016-10-01
We present a robot architecture and experiments to investigate some of the roles that pleasure plays in the decision making (action selection) process of an autonomous robot that must survive in its environment. We have conducted three sets of experiments to assess the effect of different types of pleasure (related versus unrelated to the satisfaction of physiological needs) under different environmental circumstances. Our results indicate that pleasure, including pleasure unrelated to need satisfaction, has value for homeostatic management in terms of improved viability and increased flexibility in adaptive behavior.
NASA Astrophysics Data System (ADS)
Pini, Giovanni; Tuci, Elio
2008-06-01
In biology/psychology, the capability of natural organisms to learn from observation of and interaction with conspecifics is referred to as social learning. Roboticists have recently developed an interest in social learning, since it might represent an effective strategy to enhance the adaptivity of a team of autonomous robots. In this study, we show that a methodological approach based on artificial neural networks shaped by evolutionary computation techniques can be successfully employed to synthesise the individual and social learning mechanisms for robots required to learn a desired action (i.e. phototaxis or antiphototaxis).
Dual stage potential field method for robotic path planning
NASA Astrophysics Data System (ADS)
Singh, Pradyumna Kumar; Parida, Pramod Kumar
2018-04-01
Path planning for autonomous mobile robots is at the root of all autonomous mobile systems. Various methods are used to optimize the path to be followed by an autonomous mobile robot, and artificial potential field based path planning is one of the methods most used by researchers. However, several common problems are encountered while heading towards the goal or target: the local minima problem, zero-potential regions, complex-shaped obstacles, and targets near obstacles. In this paper we provide a new algorithm in which two types of potential functions are used one after another: the former is used to obtain candidate points and the latter to obtain the optimum path. In this algorithm we consider only static obstacles and a static goal.
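A standard attractive-plus-repulsive potential field step, of the kind such dual-stage methods build on, looks roughly like the sketch below; the gains, the influence radius, and the way the two stages would be chained are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0,
                         influence=2.0, step=0.05):
    """One gradient-descent step on U = U_att + U_rep (static obstacles and goal).
    pos, goal: (2,) arrays; obstacles: list of (2,) arrays."""
    force = k_att * (goal - pos)                        # attractive term
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < influence:                        # repulsion only inside influence radius
            force += k_rep * (1.0 / d - 1.0 / influence) / d**3 * (pos - obs)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)
```

Local minima arise wherever the attractive and repulsive terms cancel, which is exactly the failure mode the dual-stage scheme is meant to mitigate.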
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Dorothy Rasco, NASA Deputy Associate Administrator for the Space Technology Mission Directorate, speaks at the TouchTomorrow Festival, held in conjunction with the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-12
Sam Ortega, NASA program manager for Centennial Challenges, is interviewed by a member of the media before the start of level two competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
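Each robot's occupancy grid can be maintained with the usual log-odds update, and merging registered maps then reduces to adding their log-odds; the sketch below illustrates that bookkeeping, with sensor-model increments chosen as assumptions rather than the authors' values.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4    # assumed log-odds increments for hit / pass-through cells

def update_grid(log_odds, hit_cells, free_cells):
    """log_odds: 2-D array; hit_cells/free_cells: (row, col) indices from one stereo scan."""
    for r, c in free_cells:
        log_odds[r, c] += L_FREE
    for r, c in hit_cells:
        log_odds[r, c] += L_OCC
    return log_odds

def merge_maps(map_a, map_b):
    """Fuse two maps already registered in a common frame; independent evidence adds."""
    return map_a + map_b

def occupancy_probability(log_odds):
    """Convert log-odds back to occupancy probability for path planning."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))
```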
Autonomous navigation system and method
Bruemmer, David J. [Idaho Falls, ID]; Few, Douglas A. [Idaho Falls, ID]
2009-09-08
A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
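Read literally, the control loop described above could be prototyped as follows; the proportional constants and the exact form of the velocity adjustments are not given in the record and are placeholders here.

```python
def navigate_step(trans_vel, rot_vel, ranges, max_speed,
                  horizon_gain=1.0, k_rot=0.5, k_trans=0.5, speed_factor=0.8):
    """One pass through an event-timing loop of the kind described above.
    ranges: distances to obstacles detected around the robot."""
    event_horizon = horizon_gain * trans_vel           # horizon grows with current speed
    nearest = min(ranges) if ranges else float('inf')
    if nearest < event_horizon:                        # event-horizon intrusion
        rot_vel -= k_rot * rot_vel * (nearest / event_horizon)
        trans_vel = k_trans * nearest                  # slow down in proportion to clearance
    else:                                              # no intrusion: cruise
        trans_vel = speed_factor * max_speed
    return trans_vel, rot_vel
```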
The use of multisensor data for robotic applications
NASA Technical Reports Server (NTRS)
Abidi, M. A.; Gonzalez, R. C.
1990-01-01
The feasibility of realistic autonomous space manipulation tasks using multisensory information is shown through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of the guiding light target, mating, and demating of the system were performed. Specifically, vision-driven techniques were implemented to determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system was also equipped with a force/torque sensor that continuously monitored the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, the two experiments were completed successfully and autonomously.
Evaluation of a Home Biomonitoring Autonomous Mobile Robot.
Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei
2016-01-01
An increasingly aged population demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system had not been tested in any home living scenario. In this study we conducted a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference for the activity recognition results. It was shown that the accuracy is not consistent across activities; that is, the mobile robot could achieve a high success rate for some activities but a poor success rate for others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of activity recognition, due to the variability of home living daily activities and their transitional processes. The possibility of improving recognition accuracy is also shown.
The Unified Behavior Framework for the Simulation of Autonomous Agents
2015-03-01
...1980s, researchers have designed a variety of robot control architectures intending to imbue robots with some degree of autonomy. A recently developed... The development of autonomy has... room for research by utilizing methods like simulation and modeling that consume less time and fewer monetary resources. A recently developed reactive...
Robotic reactions: delay-induced patterns in autonomous vehicle systems.
Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco
2010-02-01
Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.
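As a concrete instance of the model class analyzed above, the optimal velocity car-following model with a robotic reaction (sensing-to-actuation) delay can be written as a delay differential equation; the form below is the standard delayed optimal velocity model, shown for illustration rather than as the paper's exact equations.

\[
\dot{v}_i(t) = \alpha\left[ V\big(h_i(t-\tau)\big) - v_i(t-\tau) \right], \qquad
\dot{h}_i(t) = v_{i+1}(t) - v_i(t),
\]

where $v_i$ is the velocity of vehicle $i$, $h_i$ is its headway to the vehicle ahead, $V(\cdot)$ is the optimal (desired) velocity function, $\tau$ is the reaction delay, and $\alpha$ is the control gain. Linearizing about uniform flow yields stability boundaries in the $(\alpha,\tau)$ plane, which is where the gain-delay tradeoffs referred to above appear.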
Estimating time available for sensor fusion exception handling
NASA Astrophysics Data System (ADS)
Murphy, Robin R.; Rogers, Erika
1995-09-01
In previous work, we have developed a generate, test, and debug methodology for detecting, classifying, and responding to sensing failures in autonomous and semi-autonomous mobile robots. An important issue has arisen from these efforts: how much time is available to classify the cause of the failure and determine an alternative sensing strategy before the robot mission must be terminated? In this paper, we consider the impact of time for teleoperation applications where a remote robot attempts to autonomously maintain sensing in the presence of failures yet has the option to contact the local site for further assistance. Time limits are determined by using evidential reasoning with a novel generalization of Dempster-Shafer theory. Generalized Dempster-Shafer theory is used to estimate the time remaining until the robot behavior must be suspended because of uncertainty; this becomes the time limit on autonomous exception handling at the remote. If the remote cannot complete exception handling in this time or needs assistance, responsibility is passed to the local site, while the remote assumes a 'safe' state. An intelligent assistant then facilitates human intervention, either directing the remote without human assistance or coordinating data collection and presentation to the operator within time limits imposed by the mission. The impact of time on exception handling activities is demonstrated using video camera sensor data.
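The evidential-reasoning step combines belief masses over candidate failure causes; ordinary Dempster's rule, the base case that the paper generalizes, can be sketched as below. The frame of discernment and the example masses are illustrative and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset_of_hypotheses: mass}."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative masses over two hypothetical sensing-failure causes
m_sensor = {frozenset({'occlusion'}): 0.6,
            frozenset({'occlusion', 'miscalibration'}): 0.4}
m_context = {frozenset({'miscalibration'}): 0.5,
             frozenset({'occlusion', 'miscalibration'}): 0.5}
print(dempster_combine(m_sensor, m_context))
```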
NASA Astrophysics Data System (ADS)
Taniguchi, Tadahiro; Sawaragi, Tetsuo
In this paper, a new machine-learning method, called the Dual-Schemata model, is presented. The Dual-Schemata model is a self-organizing machine-learning method for an autonomous robot interacting with an unknown dynamical environment. It is based on Piaget's schema model, a classical psychological model that explains memory and cognitive development in human beings. Our Dual-Schemata model is developed as a computational model of Piaget's schema model, focusing especially on the sensorimotor development period. This developmental process is characterized by two mutually interacting dynamics: one formed by assimilation and accommodation, and the other formed by equilibration and differentiation. Through these dynamics, the schema system enables an agent to act well in the real world. The schema differentiation process corresponds to a symbol formation process occurring within an autonomous agent when it interacts with an unknown, dynamically changing environment. Experimental results obtained from an autonomous facial robot in which our model is embedded are presented; the robot becomes able to chase a ball moving in various ways without any rewards or teaching signals from outside. Moreover, the emergence of concepts about the target's movements within the robot is shown and discussed in terms of fuzzy logic on set-subset inclusion relationships.
Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco
2018-05-01
Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, in particular with a motivation-oriented cognitive architecture that generates more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.
Reconfigurable assembly work station
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yhu-Tin; Abell, Jeffrey A.; Spicer, John Patrick
A reconfigurable autonomous workstation includes a multi-faced superstructure including a horizontally-arranged frame section supported on a plurality of posts. The posts form a plurality of vertical faces arranged between adjacent pairs of the posts, the faces including first and second faces and a power distribution and position reference face. A controllable robotic arm suspends from the rectangular frame section, and a work table fixedly couples to the power distribution and position reference face. A plurality of conveyor tables are fixedly coupled to the work table, including a first conveyor table through the first face and a second conveyor table through the second face. A vision system monitors the work table and each of the conveyor tables. A programmable controller monitors signal inputs from the vision system to identify and determine orientation of the component on the first conveyor table and control the robotic arm to execute an assembly task.
Three-dimensional motor schema based navigation
NASA Technical Reports Server (NTRS)
Arkin, Ronald C.
1989-01-01
Reactive schema-based navigation is possible in space domains by extending the methods developed for ground-based navigation found within the Autonomous Robot Architecture (AuRA). Reformulation of two dimensional motor schemas for three dimensional applications is a straightforward process. The manifold advantages of schema-based control persist, including modular development, amenability to distributed processing, and responsiveness to environmental sensing. Simulation results show the feasibility of this methodology for space docking operations in a cluttered work area.
Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed
NASA Technical Reports Server (NTRS)
Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles
2016-01-01
The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths, autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human robotic interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons, namely the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.
3-D Vision Techniques for Autonomous Vehicles
1988-08-01
3-D Vision Techniques for Autonomous Vehicles. Martial Hebert, Takeo Kanade, Inso Kweon. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.
JPL Robotics Technology Applicable to Agriculture
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol Gabriel; Kyte, L.
2008-01-01
This slide presentation describes several technologies developed for robotics that are applicable to agriculture. The technologies discussed are the detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation, and transfer to growth media.
2011-03-01
past few years, including performance evaluation of emergency response robots, sensor systems on unmanned ground vehicles, speech-to-speech translation... emergency response robots; intelligent systems; mixed palletizing, testing, simulation; robotic vehicle perception systems; search and rescue robots... ranging from autonomous vehicles to urban search and rescue robots to speech translation and manufacturing systems. The evaluations have occurred in
Aerial Explorers and Robotic Ecosystems
NASA Technical Reports Server (NTRS)
Young, Larry A.; Pisanich, Greg
2004-01-01
A unique bio-inspired approach to autonomous aerial vehicles, a.k.a. aerial explorer technology, is discussed. The work is focused on defining and studying aerial explorer mission concepts, both as an individual robotic system and as a member of a small robotic "ecosystem." Members of this robotic ecosystem include the aerial explorer, air-deployed sensors and robotic symbiotes, and other assets such as rovers, landers, and orbiters.
Soft Ultrathin Electronics Innervated Adaptive Fully Soft Robots.
Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Kim, Hojin; Rao, Zhoulyu; Li, Yuhang; Chen, Weiqiu; Song, Jizhou; Verduzco, Rafael; Yu, Cunjiang
2018-03-01
Soft robots outperform conventional hard robots in safety, adaptability, and the ability to execute complex motions. The development of fully soft robots, especially ones built entirely from smart soft materials to mimic soft animals, is still nascent. In addition, to date, existing soft robots cannot adapt themselves to the surrounding environment, i.e., sense and move or respond adaptively, as animals do. Here, compliant ultrathin sensing and actuating electronics innervated into fully soft robots that can sense the environment and perform soft-bodied crawling adaptively, mimicking an inchworm, are reported. The soft robots are constructed with actuators of open-mesh shaped ultrathin deformable heaters, sensors of single-crystal Si optoelectronic photodetectors, and a thermally responsive artificial muscle of carbon-black-doped liquid-crystal elastomer (LCE-CB) nanocomposite. The results demonstrate that adaptive crawling locomotion can be realized through the conjugation of sensing and actuation, where the sensors sense the environment and the actuators respond correspondingly to control the locomotion autonomously through regulating the deformation of the LCE-CB bimorphs and thereby the locomotion of the robots. The strategy of innervating soft sensing and actuating electronics with artificial muscles paves the way for the development of smart autonomous soft robots. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Opfermann, Justin D.; Leonard, Simon; Decker, Ryan S.; Uebele, Nicholas A.; Bayne, Christopher E.; Joshi, Arjun S.; Krieger, Axel
2017-01-01
This paper specifies a surgical robot performing semi-autonomous electrosurgery for tumor resection and evaluates its accuracy using a visual servoing paradigm. We describe the design and integration of a novel, multi-degree-of-freedom electrosurgical tool for the smart tissue autonomous robot (STAR). Standardized line tests are executed to determine ideal cut parameters in three different types of porcine tissue. STAR is then programmed with the ideal cut setting for porcine tissue and compared against expert surgeons using open and laparoscopic techniques in a line cutting task. We conclude with a proof-of-concept demonstration using STAR to semi-autonomously resect pseudo-tumors in porcine tissue using visual servoing. When tasked to excise tumors with a consistent 4 mm margin, STAR can semi-autonomously dissect tissue with an average margin of 3.67 mm and a standard deviation of 0.89 mm. PMID:29503760
NASA Technical Reports Server (NTRS)
Whittaker, William; Dowling, Kevin
1994-01-01
Carnegie Mellon University's Autonomous Planetary Exploration Program (APEX) is currently building the Daedalus robot, a system capable of performing extended autonomous planetary exploration missions. Extended autonomy is an important capability because the continued exploration of the Moon, Mars and other solid bodies within the solar system will probably be carried out by autonomous robotic systems. There are a number of reasons for this - the most important of which are the high cost of placing a man in space, the high risk associated with human exploration, and communication delays that make teleoperation infeasible. The Daedalus robot represents an evolutionary approach to robot mechanism design and software system architecture. Daedalus incorporates key features from a number of predecessor systems. Using previously proven technologies, the APEX project endeavors to encompass all of the capabilities necessary for robust planetary exploration. The Ambler, a six-legged walking machine, was developed by CMU for demonstration of technologies required for planetary exploration. In its five years of life, the Ambler project brought major breakthroughs in various areas of robotic technology. Significant progress was made in: mechanism and control, by introducing a novel gait pattern (circulating gait) and use of orthogonal legs; perception, by developing sophisticated algorithms for map building; and planning, by developing and implementing the Task Control Architecture to coordinate tasks and control complex system functions. The APEX project is the successor of the Ambler project.
NASA Astrophysics Data System (ADS)
Whittaker, William; Dowling, Kevin
1994-03-01
Carnegie Mellon University's Autonomous Planetary Exploration Program (APEX) is currently building the Daedalus robot, a system capable of performing extended autonomous planetary exploration missions. Extended autonomy is an important capability because the continued exploration of the Moon, Mars and other solid bodies within the solar system will probably be carried out by autonomous robotic systems. There are a number of reasons for this - the most important of which are the high cost of placing a man in space, the high risk associated with human exploration, and communication delays that make teleoperation infeasible. The Daedalus robot represents an evolutionary approach to robot mechanism design and software system architecture. Daedalus incorporates key features from a number of predecessor systems. Using previously proven technologies, the APEX project endeavors to encompass all of the capabilities necessary for robust planetary exploration. The Ambler, a six-legged walking machine, was developed by CMU for demonstration of technologies required for planetary exploration. In its five years of life, the Ambler project brought major breakthroughs in various areas of robotic technology. Significant progress was made in: mechanism and control, by introducing a novel gait pattern (circulating gait) and use of orthogonal legs; perception, by developing sophisticated algorithms for map building; and planning, by developing and implementing the Task Control Architecture to coordinate tasks and control complex system functions. The APEX project is the successor of the Ambler project.
Explanation Capabilities for Behavior-Based Robot Control
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L.
2012-01-01
A recent study that evaluated issues associated with remote interaction with an autonomous vehicle within the framework of grounding found that missing contextual information led to uncertainty in the interpretation of collected data, and so introduced errors into the command logic of the vehicle. As the vehicles became more autonomous through the activation of additional capabilities, more errors were made. This is an inefficient use of the platform, since the behavior of remotely located autonomous vehicles did not coincide with the "mental models" of human operators. One of the conclusions of the study was that there should be a way for the autonomous vehicles to describe what action they choose and why. Robotic agents with enough self-awareness to dynamically adjust the information conveyed back to the Operations Center, based on a detail-level component analysis of requests, could provide this description capability. One way to accomplish this is to map the behavior base of the robot into a formal mathematical framework called a cost-calculus. A cost-calculus uses composition operators to build up sequences of behaviors that can then be compared to what is observed using well-known inference mechanisms.
Small Autonomous Air/Sea System Concepts for Coast Guard Missions
NASA Technical Reports Server (NTRS)
Young, Larry A.
2005-01-01
A number of small autonomous air/sea system concepts are outlined in this paper that support and enhance U.S. Coast Guard missions. These concepts draw significantly upon technology investments made by NASA in the area of uninhabited aerial vehicles and robotic/intelligent systems. Such concepts should be considered notional elements of a greater as-yet-not-defined robotic system-of-systems designed to enable unparalleled maritime safety and security.
2010-03-01
and characterize the actions taken by the soldier (e.g., running, walking, climbing stairs). Real-time image capture and exchange... The ability of... multimedia information sharing among soldiers in the field, two-way speech translation systems, and autonomous robotic platforms. Key words: Emerging... soldiers in the field, two-way speech translation systems, and autonomous robotic platforms. It has been the foundation for 10 technology evaluations
Autonomous Robotic Inspection in Tunnels
NASA Astrophysics Data System (ADS)
Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.
2016-06-01
In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructure, grab stereo images, and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created utilizing photogrammetric methods. Finally, laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing for the deduction of potential deformations. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer vision-based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Real-time 3D information is then accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e. on the Egnatia Highway and in London Underground infrastructure.
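The visual-inspection stage described above is, at its core, a patch-level crack/no-crack classifier built on convolutional features. A minimal PyTorch sketch of such a classifier is shown below; the layer sizes, patch dimensions, and class labels are illustrative assumptions, not the network used by the authors.

import torch
import torch.nn as nn

class CrackPatchCNN(nn.Module):
    """Tiny CNN that labels a 64x64 grayscale image patch as crack / no-crack."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes: crack, no-crack

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example: classify a batch of four random patches (stand-ins for stereo image tiles)
model = CrackPatchCNN()
patches = torch.rand(4, 1, 64, 64)
logits = model(patches)
print(logits.argmax(dim=1))  # predicted class index per patch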
Gaussian Processes for Data-Efficient Learning in Robotics and Control.
Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward
2015-02-01
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning makes it possible to reduce the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems such as robots, where many interactions can be impractical and time-consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
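The core idea, learning a probabilistic transition model from logged interactions and querying it together with its predictive uncertainty, can be sketched in a few lines. The scikit-learn regressor, kernel choice, and toy dynamics below are illustrative assumptions; they are not the implementation described in the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Collect a small batch of transitions (state, action) -> next state from a toy system
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(50, 1))
actions = rng.uniform(-1, 1, size=(50, 1))
next_states = 0.9 * states + 0.1 * actions + 0.01 * rng.standard_normal((50, 1))

X = np.hstack([states, actions])   # model input: (state, action)
y = next_states.ravel()            # model target: next state

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4))
gp.fit(X, y)

# Predictive mean and standard deviation for a candidate (state, action) pair;
# the uncertainty is what a model-based planner would propagate over a horizon.
mean, std = gp.predict(np.array([[0.5, -0.2]]), return_std=True)
print(mean, std)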
Autonomous Navigation by a Mobile Robot
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand
2005-01-01
ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
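Both planning layers described above reason over an occupancy grid: cells marked as obstacles are excluded, and a path is searched between waypoints. A minimal sketch of that pattern, using breadth-first search on a small grid, is given below; the grid contents and 4-connected neighborhood are illustrative assumptions rather than the ROAMAN algorithms.

from collections import deque

def plan_on_grid(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# Example: plan a short obstacle-avoiding leg between two waypoints
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(plan_on_grid(grid, (0, 0), (2, 0)))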
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and then the location of each point in three-dimensional space can be calculated using the offset (disparity) of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means of applying heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
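For reference, the ranging step itself reduces to triangulation: for rectified cameras with focal length f and baseline B, a matched feature seen with disparity d lies at depth Z = f B / d. A minimal sketch under those standard assumptions (the numbers are illustrative, not taken from the paper):

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature for rectified stereo cameras: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700-pixel focal length, 0.3 m baseline, 14-pixel disparity -> 15 m range
print(depth_from_disparity(700.0, 0.3, 14.0))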
NASA Astrophysics Data System (ADS)
Shane, David J.; Rufo, Michael A.; Berkemeier, Matthew D.; Alberts, Joel A.
2012-06-01
The Autonomous Urban Reconnaissance Ingress System (AURIS™) addresses a significant limitation of current military and first responder robotics technology: the inability of reconnaissance robots to open doors. Leveraging user testing as a baseline, the program has derived specifications necessary for military personnel to open doors with fielded UGVs (Unmanned Ground Vehicles), and evaluates the technology's impact on operational mission areas: duration, timing, and user patience in developing a tactically relevant, safe, and effective system. Funding is provided through the US ARMY Tank Automotive Research, Development and Engineering Center (TARDEC) and the project represents a leap forward in perception, autonomy, robotic implements, and coordinated payload operation in UGVs. This paper describes high level details of specification generation, status of the last phase of development, an advanced view of the system autonomy capability, and a short look ahead towards the ongoing work on this compelling and important technology.
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
HERMIES-3: A step toward autonomous mobility, manipulation, and perception
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.
1989-01-01
HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional and autonomic aspects in social robot systems. This vision has evolved as a result of efforts to consolidate the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach is intended to yield universal theories of integrated autonomic, emotional and cognitive behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.
NASA Astrophysics Data System (ADS)
Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois
2010-01-01
After more than 30 years of utilizing robots for classic industrial automation applications, service robots now form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform, extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products located in the San Francisco Bay Area. The goal of the mobile manipulator is to support the off-shift staff in carrying out completely autonomous or guided, remote-controlled lab walkthroughs, which we implement utilizing a recent development of our computer vision group: OpenTL, an integrated framework for model-based visual tracking.
Tegotae-based decentralised control scheme for autonomous gait transition of snake-like robots.
Kano, Takeshi; Yoshizawa, Ryo; Ishiguro, Akio
2017-08-04
Snakes change their locomotion patterns in response to the environment. This ability is a motivation for developing snake-like robots with highly adaptive functionality. In this study, a decentralised control scheme of snake-like robots that exhibited autonomous gait transition (i.e. the transition between concertina locomotion in narrow aisles and scaffold-based locomotion on unstructured terrains) was developed. Additionally, the control scheme was validated via simulations. A key insight revealed is that these locomotion patterns were not preprogrammed but emerged by exploiting Tegotae, a concept that describes the extent to which a perceived reaction matches a generated action. Unlike local reflexive mechanisms proposed previously, the Tegotae-based feedback mechanism enabled the robot to 'selectively' exploit environments beneficial for propulsion, and generated reasonable locomotion patterns. It is expected that the results of this study can form the basis to design robots that can work under unpredictable and unstructured environments.
Versteeg, Roelof J; Few, Douglas A; Kinoshita, Robert A; Johnson, Doug; Linda, Ondrej
2015-02-24
Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine a likelihood of a presence of a mine. A robot can autonomously modify behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real-time messages are passed between the RIK and the ESPM. A combination of ESPM-bound messages and RIK-bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.
Versteeg, Roelof J.; Few, Douglas A.; Kinoshita, Robert A.; Johnson, Douglas; Linda, Ondrej
2015-12-15
Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine a likelihood of a presence of a mine. A robot can autonomously modify behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real-time messages are passed between the RIK and the ESPM. A combination of ESPM-bound messages and RIK-bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.
Assessing the Impact of an Autonomous Robotics Competition for STEM Education
ERIC Educational Resources Information Center
Chung, C. J. ChanJin; Cartwright, Christopher; Cole, Matthew
2014-01-01
Robotics competitions for K-12 students are popular, but are students really learning and improving their STEM scores through robotics competitions? If not, why not? If they are, how much more effective is learning through competitions than traditional classes? Is there room for improvement? What is the best robotics competition model to maximize…
Artificial consciousness, artificial emotions, and autonomous robots.
Cardon, Alain
2006-12-01
Nowadays, for robots, the notion of behavior is reduced to a simple factual concept at the level of movements. Consciousness, on the other hand, is a very cultural concept, regarded as the defining property of human beings, according to themselves. We propose to develop a computable transposition of consciousness concepts into artificial brains, able to express emotions and consciousness facts. The production of such artificial brains allows intentional and genuinely adaptive behavior for autonomous robots. Such a system managing the robot's behavior is made of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts. The other part realizes the representation of the first using morphologies in a dynamic geometrical way. The robot's body is seen, for itself, as the morphologic apprehension of its material substrata. The model relies strictly on the notion of massive multi-agent organizations with morphologic control.
Improving mobile robot localization: grid-based approach
NASA Astrophysics Data System (ADS)
Yan, Junchi
2012-02-01
Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily-life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of the classical inertia-based positioning, our approach employs three fiber-optic sensors assembled under the bottom of the robot and arranged in an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertia-based positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
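The correction idea is simple: whenever one of the downward-facing sensors reports a grid-line crossing, the corresponding coordinate of the dead-reckoned pose can be snapped to the nearest line of the known tile pitch, bounding the accumulated drift. A minimal sketch of that fusion step is shown below; the tile pitch, sensor handling, and snapping rule are illustrative assumptions rather than the authors' algorithm.

def correct_on_crossing(estimate, axis, tile_pitch=0.5):
    """Snap one coordinate of a dead-reckoned (x, y) estimate to the nearest
    grid line when a crossing of that axis is detected.
    axis: 0 if a line of constant x was crossed, 1 for constant y."""
    x, y = estimate
    if axis == 0:
        x = round(x / tile_pitch) * tile_pitch
    else:
        y = round(y / tile_pitch) * tile_pitch
    return (x, y)

# Example: odometry has drifted to x = 1.47 m when a vertical line is crossed;
# with 0.5 m tiles the x estimate is corrected to 1.5 m.
print(correct_on_crossing((1.47, 0.83), axis=0))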
NASA Technical Reports Server (NTRS)
Chen, Alexander Y.
1990-01-01
The Scientific Research Associates Advanced Robotic System (SRAARS) is an intelligent robotic system with autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It controls mainly sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras, and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. By consecutive verifying and planning procedures, hypothesis-and-test routines, and a learning-by-analogy paradigm, the system would autonomously build up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis, and object manipulation. The intelligence of SRAARS presents a valuable technical advantage for implementing robotic systems for space exploration and space station operations.
Engineering Sensorial Delay to Control Phototaxis and Emergent Collective Behaviors
NASA Astrophysics Data System (ADS)
Mijalkov, Mite; McDaniel, Austin; Wehr, Jan; Volpe, Giovanni
2016-01-01
Collective motions emerging from the interaction of autonomous mobile individuals play a key role in many phenomena, from the growth of bacterial colonies to the coordination of robotic swarms. For these collective behaviors to take hold, the individuals must be able to emit, sense, and react to signals. When dealing with simple organisms and robots, these signals are necessarily very elementary; e.g., a cell might signal its presence by releasing chemicals and a robot by shining light. An additional challenge arises because the motion of the individuals is often noisy; e.g., the orientation of cells can be altered by Brownian motion and that of robots by an uneven terrain. Therefore, the emphasis is on achieving complex and tunable behaviors from simple autonomous agents communicating with each other in robust ways. Here, we show that the delay between sensing and reacting to a signal can determine the individual and collective long-term behavior of autonomous agents whose motion is intrinsically noisy. We experimentally demonstrate that the collective behavior of a group of phototactic robots capable of emitting a radially decaying light field can be tuned from segregation to aggregation and clustering by controlling the delay with which they change their propulsion speed in response to the light intensity they measure. We track this transition to the underlying dynamics of this system, in particular, to the ratio between the robots' sensorial delay time and the characteristic time of the robots' random reorientation. Supported by numerics, we discuss how the same mechanism can be applied to control active agents, e.g., airborne drones, moving in a three-dimensional space. Given the simplicity of this mechanism, the engineering of sensorial delay provides a potentially powerful tool to engineer and dynamically tune the behavior of large ensembles of autonomous mobile agents; furthermore, this mechanism might already be at work within living organisms such as chemotactic cells.
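The mechanism can be reproduced in a few lines of simulation: each agent performs a persistent random walk whose speed depends on the light intensity it measured a fixed delay earlier, and the size of that delay relative to the reorientation time pushes the group toward aggregation or segregation. The dynamics, parameters, and single-source light field below are illustrative assumptions for a sketch, not the authors' experimental setup.

import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps, delay = 30, 500, 10      # delay measured in time steps
dt, rot_noise = 0.1, 0.5                    # step size and rotational diffusion

pos = rng.uniform(-5, 5, size=(n_agents, 2))
heading = rng.uniform(0, 2 * np.pi, n_agents)
intensity_log = []                          # remembered intensities for the delay

def light_intensity(p):
    """Radially decaying intensity around a source at the origin."""
    return 1.0 / (1.0 + np.linalg.norm(p, axis=1) ** 2)

for step in range(n_steps):
    intensity_log.append(light_intensity(pos))
    delayed = intensity_log[max(0, step - delay)]
    speed = 1.0 / (0.1 + delayed)           # brighter (delayed) reading -> slower motion
    pos += dt * speed[:, None] * np.column_stack([np.cos(heading), np.sin(heading)])
    heading += rot_noise * np.sqrt(dt) * rng.standard_normal(n_agents)

print("mean distance from source:", np.linalg.norm(pos, axis=1).mean())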
Analysis of mutual assured destruction-like scenario with swarms of non-recallable autonomous robots
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2015-05-01
This paper considers the implications of the creation of an autonomous robotic fighting force without recall-ability which could serve as a deterrent to a `total war' magnitude attack. It discusses the technical considerations for this type of robotic system and the limited enhancements required to current technologies (particularly UAVs) needed to create such a system. Particular consideration is paid to how the introduction of this type of technology by one actor could create a need for reciprocal development. Also considered is the prospective utilization of this type of technology by non-state actors and the impact of this on state actors.
Obstacle Avoidance On Roadways Using Range Data
NASA Astrophysics Data System (ADS)
Dunlay, R. Terry; Morgenthaler, David G.
1987-02-01
This report describes range data based obstacle avoidance techniques developed for use on an autonomous road-following robot vehicle. The purpose of these techniques is to detect and locate obstacles present in a road environment for navigation of a robot vehicle equipped with an active laser-based range sensor. Techniques are presented for obstacle detection, obstacle location, and coordinate transformations needed in the construction of Scene Models (symbolic structures representing the 3-D obstacle boundaries used by the vehicle's Navigator for path planning). These techniques have been successfully tested on an outdoor robotic vehicle, the Autonomous Land Vehicle (ALV), at speeds up to 3.5 km/hour.
Experiments with a small behaviour controlled planetary rover
NASA Technical Reports Server (NTRS)
Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John
1993-01-01
A series of experiments that were performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly fewer computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an eight-bit processor and uses about ten kilobytes of memory.
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.
1987-01-01
An intelligent tracker capable of supporting robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple processor/parallel processing configuration. The system currently interfaces to cameras but also has the capability to use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year. More realistic demonstrations now in planning are discussed.
Effectiveness of Social Behaviors for Autonomous Wheelchair Robot to Support Elderly People in Japan
Shiomi, Masahiro; Iio, Takamasa; Kamei, Koji; Sharma, Chandraprakash; Hagita, Norihiro
2015-01-01
We developed a wheelchair robot to support the movement of elderly people and specifically implemented two functions to enhance their intention to use it: speaking behavior to convey place/location-related information and speed adjustment based on individual preferences. Our study examines how the evaluations of our wheelchair robot differ when compared with human caregivers and a conventional autonomous wheelchair without the two proposed functions in a moving support context. Twenty-eight senior citizens participated in the experiment to evaluate three different conditions. Our measurements consisted of questionnaire items and the coding of free-style interview results. Our experimental results revealed that elderly people evaluated our wheelchair robot more highly than the wheelchair without the two functions and the human caregivers for some items. PMID:25993038
Laboratory testing of candidate robotic applications for space
NASA Technical Reports Server (NTRS)
Purves, R. B.
1987-01-01
Robots have potential for increasing the value of man's presence in space. Some categories with potential benefit are: (1) performing extravehicular tasks like satellite and station servicing, (2) supporting the science mission of the station by manipulating experiment tasks, and (3) performing intravehicular activities which would be boring, tedious, exacting, or otherwise unpleasant for astronauts. An important issue in space robotics is selection of an appropriate level of autonomy. In broad terms three levels of autonomy can be defined: (1) teleoperated - an operator explicitly controls robot movement; (2) telerobotic - an operator controls the robot directly, but by high-level commands, without, for example, detailed control of trajectories; and (3) autonomous - an operator supplies a single high-level command, the robot does all necessary task sequencing and planning to satisfy the command. Researchers chose three projects for their exploration of technology and implementation issues in space robots, one each of the three application areas, each with a different level of autonomy. The projects were: (1) satellite servicing - teleoperated; (2) laboratory assistant - telerobotic; and (3) on-orbit inventory manager - autonomous. These projects are described and some results of testing are summarized.
Nasa's Ant-Inspired Swarmie Robots
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.
2016-01-01
As humans push further beyond the grasp of earth, robotic missions in advance of human missions will play an increasingly important role. These robotic systems will find and retrieve valuable resources as part of an in-situ resource utilization (ISRU) strategy. They will need to be highly autonomous while maintaining high task performance levels. NASA Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots to be used as a ground-based research platform for ISRU missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in a previously unmapped environment and return those resources to a central site. This talk will guide the audience through the Swarmie robot project from its conception by students in a New Mexico research lab to its robot trials in an outdoor parking lot at NASA. The software technologies and techniques used on the project will be discussed, as well as various challenges and solutions that were encountered by the development team along the way.
Precharged Pneumatic Soft Actuators and Their Applications to Untethered Soft Robots.
Li, Yunquan; Chen, Yonghua; Ren, Tao; Li, Yingtian; Choi, Shiu Hong
2018-06-20
The past decade has witnessed tremendous progress in soft robotics. Unlike most pneumatic-based methods, we present a new approach to soft robot design based on precharged pneumatics (PCP). We propose a PCP soft bending actuator, which is actuated by precharged air pressure and retracted by inextensible tendons. By pulling or releasing the tendons, the air pressure in the soft actuator is modulated, and hence its bending angle. The tendons serve in a way similar to pressure-regulating valves that are used in typical pneumatic systems. The linear motion of the tendons is transduced into complex motion via the prepressurized bent soft actuator. Furthermore, since a PCP actuator does not need any gas supply, the complicated pneumatic control systems used in traditional soft robotics are eliminated. This facilitates the development of compact untethered autonomous soft robots for various applications. Both theoretical modeling and experimental validation have been conducted on a sample PCP soft actuator design. A fully untethered autonomous quadrupedal soft robot and a soft gripper have been developed to demonstrate the superiority of the proposed approach over traditional pneumatically driven soft robots.
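The pressure modulation performed by the tendons can be approximated, to first order and at constant temperature, with Boyle's law: changing the enclosed volume changes the precharged pressure, which in turn sets the bending angle. The numbers and the isothermal ideal-gas assumption below are illustrative, not measurements from the paper.

def precharged_pressure(p_initial_kpa, v_initial_ml, v_current_ml):
    """Isothermal ideal-gas estimate of chamber pressure: p1 * V1 = p2 * V2."""
    return p_initial_kpa * v_initial_ml / v_current_ml

# Example: a chamber precharged to 150 kPa at 20 mL is squeezed to 16 mL by the
# tendons, raising the internal pressure to about 187.5 kPa.
print(precharged_pressure(150.0, 20.0, 16.0))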
Cold Regions Issues for Off-Road Autonomous Vehicles
2004-04-01
the operation of off-road autonomous vehicles. Low-temperature effects on lubricants, materials, and batteries can impair a robot's ability to operate... demanding that off-road autonomous vehicles must be designed for and tested in cold regions if they are expected to operate there successfully.
Development of a soft untethered robot using artificial muscle actuators
NASA Astrophysics Data System (ADS)
Cao, Jiawei; Qin, Lei; Lee, Heow Pueh; Zhu, Jian
2017-04-01
Soft robots have attracted much interest recently, due to their potential capability to work effectively in unstructured environments. Soft actuators are key components in soft robots. Dielectric elastomer actuators are one class of soft actuators, which can deform in response to voltage. Dielectric elastomer actuators exhibit interesting attributes including large voltage-induced deformation and high energy density. These attributes make dielectric elastomer actuators capable of functioning as artificial muscles for soft robots. It is significant to develop untethered robots, since connecting cables to external power sources greatly limits a robot's functionalities, especially autonomous movement. In this paper we develop a soft untethered robot based on dielectric elastomer actuators. This robot mainly consists of a deformable robotic body and two paper-based feet. The robotic body is essentially a dielectric elastomer actuator, which can expand or shrink as the voltage is switched on or off. In addition, the two feet can achieve adhesion or detachment based on the mechanism of electroadhesion. In general, the entire robotic system can be controlled by electricity or voltage. By optimizing the mechanical design of the robot (the size and weight of the electric circuits), we put all of these components (such as batteries, voltage amplifiers, and control circuits) onto the robotic feet, and the robot is capable of realizing autonomous movements. Experiments are conducted to study the robot's locomotion. The finite element method is employed to interpret the deformation of the dielectric elastomer actuators, and the simulations are qualitatively consistent with the experimental observations.
Space Robotics: AWIMR an Overview
NASA Technical Reports Server (NTRS)
Wagner, Rick
2006-01-01
This viewgraph presentation reviews the usages of Autonomous Walking Inspection and Maintenance Robots (AWIMR) in space. Some of the uses that these robots in support of space exploration can have are: inspection of a space craft, cleaning, astronaut assistance, assembly of a structure, repair of structures, and replenishment of supplies.
Small Body Exploration Technologies as Precursors for Interstellar Robotics
NASA Astrophysics Data System (ADS)
Noble, R. J.; Sykes, M. V.
The scientific activities undertaken to explore our Solar System will be very similar to those required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution, as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.
Small Body Exploration Technologies as Precursors for Interstellar Robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Robert (SLAC); Sykes, Mark V.
The scientific activities undertaken to explore our Solar System will be the same as required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.
Autonomous Soft Robotic Fish Capable of Escape Maneuvers Using Fluidic Elastomer Actuators.
Marchese, Andrew D; Onal, Cagdas D; Rus, Daniela
2014-03-01
In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.
Electrical and computer architecture of an autonomous Mars sample return rover prototype
NASA Astrophysics Data System (ADS)
Leslie, Caleb Thomas
Space truly is the final frontier. As man looks to explore beyond the confines of our planet, we use the lessons learned from traveling to the Moon and orbiting in the International Space Station, and we set our sights upon Mars. For decades, Martian probes consisting of orbiters, landers, and even robotic rovers have been sent to study Mars. Their discoveries have yielded a wealth of new scientific knowledge regarding the Martian environment and the secrets it holds. Armed with this knowledge, NASA and others have begun preparations to send humans to Mars with the ultimate goal of colonization and permanent human habitation. The ultimate success of any long term manned mission to Mars will require in situ resource utilization techniques and technologies to both support their stay and make a return trip to Earth viable. A sample return mission to Mars will play a pivotal role in developing these necessary technologies to ensure such an endeavor to be a successful one. This thesis describes an electrical and computer architecture for autonomous robotic applications. The architecture is one that is modular, scalable, and adaptable. These traits are achieved by maximizing commonality and reusability within modules that can be added, removed, or reconfigured within the system. This architecture, called the Modular Architecture for Autonomous Robotic Systems (MAARS), was implemented on the University of Alabama's Collection and Extraction Rover for Extraterrestrial Samples (CERES). The CERES rover competed in the 2016 NASA Sample Return Robot Challenge where robots were tasked with autonomously finding, collecting, and returning samples to the landing site.
Autonomous Rovers for Polar Science Campaigns
NASA Astrophysics Data System (ADS)
Lever, J. H.; Ray, L. E.; Williams, R. M.; Morlock, A. M.; Burzynski, A. M.
2012-12-01
We have developed and deployed two over-snow autonomous rovers able to conduct remote science campaigns on Polar ice sheets. Yeti is an 80-kg, four-wheel-drive (4WD) battery-powered robot with 3 - 4 hr endurance, and Cool Robot is a 60-kg 4WD solar-powered robot with unlimited endurance during Polar summers. Both robots navigate using GPS waypoint-following to execute pre-planned courses autonomously, and they can each carry or tow 20 - 160 kg instrument payloads over typically firm Polar snowfields. In 2008 - 12, we deployed Yeti to conduct autonomous ground-penetrating radar (GPR) surveys to detect hidden crevasses to help establish safe routes for overland resupply of research stations at South Pole, Antarctica, and Summit, Greenland. We also deployed Yeti with GPR at South Pole in 2011 to identify the locations of potentially hazardous buried buildings from the original 1950's-era station. Autonomous surveys remove personnel from safety risks posed during manual GPR surveys by undetected crevasses or buried buildings. Furthermore, autonomous surveys can yield higher quality and more comprehensive data than manual ones: Yeti's low ground pressure (20 kPa) allows it to cross thinly bridged crevasses or other voids without interrupting a survey, and well-defined survey grids allow repeated detection of buried voids to improve detection reliability and map their extent. To improve survey efficiency, we have automated the mapping of detected hazards, currently identified via post-survey manual review of the GPR data. Additionally, we are developing machine-learning algorithms to detect crevasses autonomously in real time, with reliability potentially higher than manual real-time detection. These algorithms will enable the rover to relay crevasse locations to a base station for near real-time mapping and decision-making. We deployed Cool Robot at Summit Station in 2005 to verify its mobility and power budget over Polar snowfields. Using solar power, this zero-emissions rover could travel more than 500 km per week during Polar summers and provide 100 - 200 W to power instrument payloads to help investigate the atmosphere, magnetosphere, glaciology and sub-glacial geology in Antarctica and Greenland. We are currently upgrading Cool Robot's navigation and solar-power systems and will deploy it during 2013 to map the emissions footprint around Summit Station to demonstrate its potential to execute long-endurance Polar science campaigns. These rovers could assist science traverses to chart safe routes into the interior of Antarctica and Greenland or conduct autonomous, remote science campaigns to extend spatial and temporal coverage for data collection. Our goals include 1,000 - 2,000-km summertime traverses of Antarctica and Greenland, safe navigation through 0.5-m amplitude sastrugi fields, survival in blizzards, and rover-network adaptation to research events of opportunity. We are seeking Polar scientists interested in autonomous, mobile data collection and can adapt the rovers to meet their requirements.
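Both rovers navigate by GPS waypoint following: drive toward the current waypoint and advance to the next one once within an acceptance radius. A minimal sketch of that loop is given below; the proportional heading controller, acceptance radius, and flat local (x, y) coordinates are illustrative assumptions, not the deployed navigation software.

import math

def waypoint_follow_step(pose, waypoints, index, accept_radius=2.0, speed=1.0):
    """One control step of waypoint following in local (x, y, heading) coordinates.
    Returns (forward_speed, turn_rate, next_waypoint_index)."""
    x, y, heading = pose
    wx, wy = waypoints[index]
    if math.hypot(wx - x, wy - y) < accept_radius:
        index = min(index + 1, len(waypoints) - 1)   # advance to the next waypoint
        wx, wy = waypoints[index]
    desired = math.atan2(wy - y, wx - x)
    error = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    return speed, 1.5 * error, index                 # proportional heading controller

# Example: rover at the origin heading east, following a short survey line
pose = (0.0, 0.0, 0.0)
course = [(10.0, 0.0), (10.0, 10.0)]
print(waypoint_follow_step(pose, course, index=0))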
GPS Enabled Semi-Autonomous Robot
2017-09-01
equal and the goal has not yet been reached (i.e., any time the robot has reached a local minimum), and direct the robot to travel in a specific... whether the robot was turning or not. The challenge is overcome by ensuring the robot travels at its maximum speed at all times. Further research into... the robot's fixed reference frame was recalculated each time through the control loop. If the encoder data allows for the robot to appear to have travelled
Collaborative autonomous sensing with Bayesians in the loop
NASA Astrophysics Data System (ADS)
Ahmed, Nisar
2016-10-01
There is a strong push to develop intelligent unmanned autonomy that complements human reasoning for applications as diverse as wilderness search and rescue, military surveillance, and robotic space exploration. More than just replacing humans for `dull, dirty and dangerous' work, autonomous agents are expected to cope with a whole host of uncertainties while working closely together with humans in new situations. The robotics revolution firmly established the primacy of Bayesian algorithms for tackling challenging perception, learning and decision-making problems. Since the next frontier of autonomy demands the ability to gather information across stretches of time and space that are beyond the reach of a single autonomous agent, the next generation of Bayesian algorithms must capitalize on opportunities to draw upon the sensing and perception abilities of humans-in/on-the-loop. This work summarizes our recent research toward harnessing `human sensors' for information-gathering tasks. The basic idea is to allow human end users (i.e. non-experts in robotics, statistics, machine learning, etc.) to directly `talk to' the information fusion engine and perceptual processes aboard any autonomous agent. Our approach is grounded in rigorous Bayesian modeling and fusion of flexible semantic information derived from user-friendly interfaces, such as natural language chat and locative hand-drawn sketches. This naturally enables `plug and play' human sensing with existing probabilistic algorithms for planning and perception, and has been successfully demonstrated with human-robot teams in target localization applications.
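A concrete instance of fusing a "human sensor" report is a Bayesian grid update: a soft semantic statement such as "the target is near the building" becomes a likelihood over cells, which multiplies the current belief (the posterior is proportional to likelihood times prior). The likelihood shape, grid, and landmark location below are illustrative assumptions, not the models used in the cited work.

import numpy as np

# Prior belief over a 20 x 20 grid of possible target locations (uniform to start)
belief = np.full((20, 20), 1.0 / 400)

def semantic_likelihood(landmark, softness=3.0):
    """Likelihood of the report 'target is near the landmark' for every cell."""
    rows, cols = np.indices(belief.shape)
    dist = np.hypot(rows - landmark[0], cols - landmark[1])
    return np.exp(-(dist ** 2) / (2 * softness ** 2))

# Fuse the human report with Bayes' rule and renormalize
likelihood = semantic_likelihood(landmark=(5, 12))
belief = likelihood * belief
belief /= belief.sum()

print("most likely cell after the report:", np.unravel_index(belief.argmax(), belief.shape))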
Manipulator control and mechanization: A telerobot subsystem
NASA Technical Reports Server (NTRS)
Hayati, S.; Wilcox, B.
1987-01-01
The short- and long-term autonomous robot control activities in the Robotics and Teleoperators Research Group at the Jet Propulsion Laboratory (JPL) are described. This group is one of several involved in robotics and is an integral part of a new NASA robotics initiative called Telerobot program. A description of the architecture, hardware and software, and the research direction in manipulator control is given.
ERIC Educational Resources Information Center
Silva, E.; Almeida, J.; Martins, A.; Baptista, J. P.; Campos Neves, B.
2013-01-01
Robotics research in Portugal is increasing every year, but few students embrace it as one of their first choices for study. Until recently, job offers for engineers were plentiful, and those looking for a degree in science and technology would avoid areas considered to be demanding, like robotics. At the undergraduate level, robotics programs are…
A neural network-based exploratory learning and motor planning system for co-robots
Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano
2015-01-01
Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640
Sensory Motor Coordination in Robonaut
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2003-01-01
As a participant of the year 2000 NASA Summer Faculty Fellowship Program, I worked with the engineers of the Dexterous Robotics Laboratory at NASA Johnson Space Center on the Robonaut project. The Robonaut is an articulated torso with two dexterous arms, left and right five-fingered hands, and a head with cameras mounted on an articulated neck. This advanced space robot, now driven only teleoperatively using VR gloves, sensors and helmets, is to be upgraded to a thinking system that can find, interact with and assist humans autonomously, allowing the Crew to work with Robonaut as a (junior) member of their team. Thus, the work performed this summer was toward the goal of enabling Robonaut to operate autonomously as an intelligent assistant to astronauts. Our underlying hypothesis is that a robot can develop intelligence if it learns a set of basic behaviors (i.e., reflexes - actions tightly coupled to sensing) and through experience learns how to sequence these to solve problems or to accomplish higher-level tasks. We describe our approach to the automatic acquisition of basic behaviors as learning sensory-motor coordination (SMC). Although research in the ontogenesis of animals (development from the time of conception) supports the approach of learning SMC as the foundation for intelligent, autonomous behavior, we do not know whether it will prove viable for the development of autonomy in robots. The first step in testing the hypothesis is to determine if SMC can be learned by the robot. To do this, we have taken advantage of Robonaut's teleoperated control system. When a person teleoperates Robonaut, the person's own SMC causes the robot to act purposefully. If the sensory signals that the robot detects during teleoperation are recorded over several repetitions of the same task, it should be possible through signal analysis to identify the sensory-motor couplings that accompany purposeful motion. In this report, reasons for suspecting SMC as the basis for intelligent behavior will be reviewed. A robot control system for autonomous behavior that uses learned SMC will be proposed. Techniques for the extraction of salient parameters from sensory and motor data will be discussed. Experiments with Robonaut will be discussed and preliminary data presented.
Control Architecture for Robotic Agent Command and Sensing
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel
2008-01-01
Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD
ERIC Educational Resources Information Center
Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M.; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan
2015-01-01
Increasingly researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and…
Introduction to Autonomous Mobile Robotics Using "Lego Mindstorms" NXT
ERIC Educational Resources Information Center
Akin, H. Levent; Meriçli, Çetin; Meriçli, Tekin
2013-01-01
Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the…
Robot Contest as a Laboratory for Experiential Engineering Education
ERIC Educational Resources Information Center
Verner, Igor M.; Ahlgren, David J.
2004-01-01
By designing, building, and operating autonomous robots students learn key engineering subjects and develop systems-thinking, problem-solving, and teamwork skills. Such events as the Trinity College Fire-Fighting Home Robot Contest (TCFFHRC) offer rich opportunities for students to apply their skills by requiring design, and implementation of…
Multi-Robot Assembly Strategies and Metrics.
Marvel, Jeremy A; Bostelman, Roger; Falco, Joe
2018-02-01
We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.
Autonomous Exploration Using an Information Gain Metric
2016-03-01
implemented on 2 different robotic platforms: the PackBot designed by iRobot and the Jackal designed by Clearpath Robotics. The PackBot, shown in Fig. 1, is a... Jackal is a wheeled, man-portable robot system. Both robots were equipped with a Hokuyo UTM-30LX-EW scanning laser range finder with a motor...Fig. 2, the robot was used to explore and map the second floor of a building located in a military and rescue training facility. The Jackal platform
Autonomous navigation method for substation inspection robot based on travelling deviation
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting
2017-06-01
A new method of edge detection is proposed for the substation environment, which can realize the autonomous navigation of the substation inspection robot. First of all, the road image and information are obtained using an image acquisition device. Secondly, the noise in a region of interest selected from the road image is removed with a digital image processing algorithm, the road edges are extracted by the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distance between the robot and the left and right boundaries is calculated, and the travelling deviation is obtained. The robot's walking route is controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.
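As a rough sketch of the pipeline the preceding abstract describes (Canny edges, Hough lines, deviation from the road centre), the following Python/OpenCV fragment illustrates one way such a step could look; the thresholds, region-of-interest choice and left/right split are illustrative assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

def travel_deviation(frame_bgr):
    """Estimate lateral deviation from the road centre using Canny edges
    and Hough-line boundaries, roughly following the pipeline in the abstract."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    roi = gray[gray.shape[0] // 2:, :]          # lower half as region of interest
    roi = cv2.GaussianBlur(roi, (5, 5), 0)      # suppress noise before edge detection
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None
    # Split detected segments into left/right boundaries by the sign of their slope.
    left_x, right_x = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / (x2 - x1 + 1e-6)
        (left_x if slope < 0 else right_x).append((x1 + x2) / 2.0)
    if not left_x or not right_x:
        return None
    road_centre = (np.mean(left_x) + np.mean(right_x)) / 2.0
    image_centre = roi.shape[1] / 2.0
    return road_centre - image_centre           # pixels; sign gives steering direction
```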
Autonomous exploration and mapping of unknown environments
NASA Astrophysics Data System (ADS)
Owens, Jason; Osteen, Phil; Fields, MaryAnne
2012-06-01
Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrary complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. For a typical mapping mission, an unmanned system is directed to enter an unknown building at a distance, sense the internal structure, and, barring additional tasks, create a 2-D map of the building while in situ. This map provides a useful and intuitive representation of the environment for the remote operator. We have integrated a robust mapping and exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the interior, report significant features to the operator, and generate a consistent map - all while maintaining localization.
Single-Command Approach and Instrument Placement by a Robot on a Target
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang
2005-01-01
AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.
SAURON: The Wallace Observatory Small AUtonomous Robotic Optical Nightwatcher
NASA Astrophysics Data System (ADS)
Kosiarek, M.; Mansfield, M.; Brothers, T.; Bates, H.; Aviles, R.; Brode-Roger, O.; Person, M.; Russel, M.
2017-07-01
The Small AUtonomous Robotic Optical Nightwatcher (SAURON) is an autonomous telescope consisting of an 11-inch Celestron Nexstar telescope on a Software Bisque Paramount ME II in a Technical Innovations ProDome located at the MIT George R. Wallace, Jr. Astrophysical Observatory. This paper describes the construction of the telescope system and its first light data on T-And0-15785, an eclipsing binary star. The out-of-eclipse R magnitude of T-And0-15785 was found to be 13.3258 ± 0.0015, and the magnitude changes for the primary and secondary eclipses were found to be 0.7145 ± 0.0515 and 0.6085 ± 0.0165 R magnitudes, respectively.
The Rise of Robots: The Military’s Use of Autonomous Lethal Force
2015-02-17
AIR WAR COLLEGE, AIR UNIVERSITY. THE RISE OF ROBOTS: THE MILITARY'S USE OF AUTONOMOUS LETHAL FORCE, by Christopher J. Spinelli, Lt Col. ... Christopher J. Spinelli is currently an Air War College student and was the former Commander of the 445th Flight Test Squadron at Edwards Air Force Base.
Why the United States Must Adopt Lethal Autonomous Weapon Systems
2017-05-25
The East and West have differing views on the morality of artificial intelligence (AI) and robotics technology. Eastern culture sees artificial intelligence as an economic savior... (Army, 37 pages; cited sources include http://www.designboom.com/technology/designboom-tech-predictions-robotics-12-26-2016/ and Egan, Matt, "Robots Write Thousands Of News Stories A...")
Context recognition and situation assessment in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Yavnai, Arie
1993-05-01
The capability to recognize the operating context and to assess the situation in real-time is needed if a high-functionality autonomous mobile robot has to react properly and effectively to continuously changing situations and events, either external or internal, while the robot is performing its assigned tasks. A new approach and architecture for a context recognition and situation assessment module (CORSA) is presented in this paper. CORSA is a multi-level information processing module which consists of adaptive decision and classification algorithms. It performs dynamic mapping from the data space to the context space, and dynamically decides on the context class. A learning mechanism is employed to update the decision variables so as to minimize the probability of misclassification. CORSA is embedded within the Mission Manager module of the intelligent autonomous hyper-controller (IAHC) of the mobile robot. The information regarding operating context, events and situation is then communicated to other modules of the IAHC where it is used to: (a) select the appropriate action strategy; (b) support the processes of arbitration and conflict resolution between reflexive behaviors and reasoning-driven behaviors; (c) predict future events and situations; and (d) determine criteria and priorities for planning, replanning, and decision making.
Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)
NASA Technical Reports Server (NTRS)
Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey
1990-01-01
Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensors. A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules necessary to control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.
High level intelligent control of telerobotics systems
NASA Technical Reports Server (NTRS)
Mckee, James
1988-01-01
A high-level robot command language is proposed for the autonomous mode of an advanced telerobotics system, along with a predictive display mechanism for the teleoperational mode. It is believed that any such system will involve some mixture of these two modes, since, although artificial intelligence can facilitate significant autonomy, a system that can resort to teleoperation will always have the advantage. The high-level command language will allow humans to give the robot instructions in a very natural manner. The robot will then analyze these instructions to infer meaning so that it can translate the task into lower-level executable primitives. If, however, the robot is unable to perform the task autonomously, it will switch to the teleoperational mode. The time delay between control movement and actual robot movement has always been a problem in teleoperation. The remote operator may not actually see (via a monitor) the results of their actions for several seconds. A computer-generated predictive display system is proposed whereby the operator can see a real-time model of the robot's environment and the delayed video picture on the monitor at the same time.
Self-evaluation on Motion Adaptation for Service Robots
NASA Astrophysics Data System (ADS)
Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru
We propose a self-evaluation method that allows service robots to adapt their motions to environmental changes. Several motions, such as walking, dancing and demonstration, are described as time-series patterns. These motions are optimized for the architecture of the robot and for a particular surrounding environment; in an unknown operating environment, the robots cannot accomplish their tasks. We propose an autonomous motion generation technique based on heuristic search over histories of internal sensor values. New motion patterns are explored in the unknown operating environment based on self-evaluation. The robot has prepared motions that realize its tasks in the designed environment, and the internal sensor values observed while executing them in the designed environment reflect the interaction with that environment. The self-evaluation is composed of the difference between the internal sensor values observed in the designed environment and those observed in the unknown operating environment. The proposed method modifies the motions so that the interaction results in both environments are synchronized. New motion patterns are generated to maximize the self-evaluation function without external information such as run length, global robot position, or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.
1996-10-01
systems currently headed for deployment (BIDS is highlighted in the chart) to widely dispersed microsensors on micro, autonomous platforms. ... "Small, Rapidly Deployable Forces," Joe Polito, Dan Rondeau, Sandia National Laboratory; V.2. "Robotic Concepts for Small Rapidly Deployable Forces," Robert Palmquist, Jill Fahrenholtz, Richard Wheeler, Sandia National Laboratory; V.3. "Potential for Distributed Ground Sensors in Support of Small Unit...
Human guidance of mobile robots in complex 3D environments using smart glasses
NASA Astrophysics Data System (ADS)
Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel
2016-05-01
In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.
NASA Astrophysics Data System (ADS)
Butail, Sachit; Polverino, Giovanni; Phamduy, Paul; Del Sette, Fausto; Porfiri, Maurizio
2014-03-01
We explore fish-robot interactions in a comprehensive set of experiments designed to highlight the effects of the speed and configuration of bioinspired robots on live zebrafish. The robot design and movement are inspired by salient features of attraction in zebrafish and include enhanced coloration, the aspect ratio of a fertile female, and carangiform/subcarangiform locomotion. The robots are autonomously controlled to swim in circular trajectories in the presence of live fish. Our results indicate that robot configuration significantly affects both the fish distance to the robots and the time spent near them.
Adaptive Perception for Autonomous Vehicles
1994-05-02
AD-A282 780. Adaptive Perception for Autonomous Vehicles, Alonzo Kelly, CMU-RI-TR-94-18, The Robotics Institute. ...way of doing range image perception. ...The throughput problem of autonomous navigation... The idea will be applied to the problem of terrain mapping in outdoor rough terrain.
Combining environment-driven adaptation and task-driven optimisation in evolutionary robotics.
Haasdijk, Evert; Bredeche, Nicolas; Eiben, A E
2014-01-01
Embodied evolutionary robotics is a sub-field of evolutionary robotics that employs evolutionary algorithms on the robotic hardware itself, during the operational period, i.e., in an on-line fashion. This enables robotic systems that continuously adapt, and are therefore capable of (re-)adjusting themselves to previously unknown or dynamically changing conditions autonomously, without human oversight. This paper addresses one of the major challenges that such systems face, viz. that the robots must satisfy two sets of requirements. Firstly, they must continue to operate reliably in their environment (viability), and secondly they must competently perform user-specified tasks (usefulness). The solution we propose exploits the fact that evolutionary methods have two basic selection mechanisms-survivor selection and parent selection. This allows evolution to tackle the two sets of requirements separately: survivor selection is driven by the environment and parent selection is based on task-performance. This idea is elaborated in the Multi-Objective aNd open-Ended Evolution (monee) framework, which we experimentally validate. Experiments with robotic swarms of 100 simulated e-pucks show that monee does indeed promote task-driven behaviour without compromising environmental adaptation. We also investigate an extension of the parent selection process with a 'market mechanism' that can ensure equitable distribution of effort over multiple tasks, a particularly pressing issue if the environment promotes specialisation in single tasks.
Map generation in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks
NASA Astrophysics Data System (ADS)
Nishihta, Sho; Maeyama, Shoichi; Watanebe, Keigo
2018-02-01
Recently, autonomous mobile robots that collect information at disaster sites are being developed. Since it is difficult to obtain maps of disaster sites in advance, robots capable of autonomous movement in unknown environments are required. For this objective, the robots have to build maps while also estimating their own location; this is called the SLAM problem. In particular, AUKF-SLAM, which uses corners in the environment as point-type landmarks, has been developed as a solution method so far. However, when the robot moves through an environment such as a corridor with few point-type features, the accuracy of the self-location estimated from the landmarks decreases and causes distortions in the map. In this research, we propose an AUKF-SLAM that uses walls in the environment as line segment-type landmarks. We demonstrate that the robot can generate maps of an unknown environment by AUKF-SLAM using both line segment-type and point-type landmarks.
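As a rough illustration of why wall (line segment) landmarks help in corridor-like scenes, the sketch below computes the innovation a Kalman-type filter such as AUKF-SLAM could use for a wall observation. The normal-form parameterisation, the reduced distance/angle measurement, and the function name are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def line_landmark_innovation(pose, line, z):
    """Innovation for a wall landmark stored in normal form: n . p = d.

    pose : (x, y, theta) robot pose estimate
    line : (nx, ny, d)   unit normal and offset of the wall in the map frame
    z    : (rho, alpha)  measured perpendicular distance and relative wall angle
    """
    x, y, theta = pose
    nx, ny, d = line
    # Predicted perpendicular distance from the robot to the wall.
    rho_pred = d - (nx * x + ny * y)
    # Predicted wall orientation relative to the robot heading.
    alpha_pred = np.arctan2(ny, nx) - theta
    innov = np.array([z[0] - rho_pred,
                      np.arctan2(np.sin(z[1] - alpha_pred),
                                 np.cos(z[1] - alpha_pred))])  # wrap the angle term
    return innov
```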
Autonomous space processor for orbital debris
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Marine, Micky; Colvin, James; Crockett, Richard; Sword, Lee; Putz, Jennifer; Woelfle, Sheri
1991-01-01
The goal was the development of an Autonomous Space Processor for Orbital Debris (ASPOD). The nature of this craft, which will process orbital debris in situ using resources available in low Earth orbit (LEO), is explained. The serious problem of orbital debris is briefly described and the nature of the large debris population is outlined. The focus was on the development of a versatile robotic manipulator to augment an existing robotic arm, the incorporation of remote operation of the robotic arms, and the formulation of optimal (time and energy) trajectory planning algorithms for coordinated robotic arms. The mechanical design of the new arm is described in detail. The work envelope is explained, showing the flexibility of the new design. Several telemetry communication systems are described which will enable the remote operation of the robotic arms. The trajectory planning algorithms are fully developed for both the time-optimal and energy-optimal problems. The time-optimal problem is solved using phase plane techniques while the energy-optimal problem is solved using dynamic programming.
On autonomous terrain model acquisition by a mobile robot
NASA Technical Reports Server (NTRS)
Rao, N. S. V.; Iyengar, S. S.; Weisbin, C. R.
1987-01-01
The following problem is considered: A point robot is placed in a terrain populated by an unknown number of polyhedral obstacles of varied sizes and locations in two/three dimensions. The robot is equipped with a sensor capable of detecting all the obstacle vertices and edges that are visible from the present location of the robot. The robot is required to autonomously navigate and build the complete terrain model using the sensor information. It is established that the number of scanning operations needed for complete terrain model acquisition by any algorithm based on a scan-from-vertices strategy is ∑_{i=1}^{n} N(O_i) − n in two-dimensional terrains and ∑_{i=1}^{n} N(O_i) − 2n in three-dimensional terrains, where O = {O_1, O_2, ..., O_n} is the set of obstacles in the terrain and N(O_i) is the number of vertices of obstacle O_i.
Intelligent mobility research for robotic locomotion in complex terrain
NASA Astrophysics Data System (ADS)
Trentini, Michael; Beckman, Blake; Digney, Bruce; Vincent, Isabelle; Ricard, Benoit
2006-05-01
The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating practical, cost effective, autonomous intelligent systems capable of completing military missions in complex operating environments." The mobility requirement for ground-based mobile systems operating in urban settings must increase significantly if robotic technology is to augment human efforts in these roles and environments. The intelligence required for autonomous systems to operate in complex environments demands advances in many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation, and navigation, but the problem of locomotion in complex terrain has largely been ignored. In order to achieve its objective, the Autonomous Intelligent Systems Section is pursuing research that explores the use of intelligent mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience. These algorithms seek to exploit available world representations of the environment and the inherent dexterity of the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The primary focus of the paper is to present the intelligent mobility research within the framework of the research methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical research gap. This research will create effective intelligence to improve the mobility of ground-based mobile systems operating in urban settings to assist the Canadian Forces in their future urban operations.
Immune systems are not just for making you feel better: they are for controlling autonomous robots
NASA Astrophysics Data System (ADS)
Rosenblum, Mark
2005-05-01
The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.
2006-07-01
mobility in complex terrain, robot system designers are still seeking workable processes for map building, with enduring problems that either require...human) robot system designers/users can seek to control the consequences of robot actions, deliberate or otherwise. A notable particular application...operators a sufficient feeling of presence; if not, robot system designers will have to provide autonomy to the robot to make up for the gaps in human input
NASA Astrophysics Data System (ADS)
Fornas, D.; Sales, J.; Peñalver, A.; Pérez, J.; Fernández, J. J.; Marín, R.; Sanz, P. J.
2016-03-01
This article presents research on the subject of autonomous underwater robot manipulation. Ongoing research in underwater robotics intends to increase the autonomy of intervention operations that require physical interaction, in order to achieve social benefits in fields such as archaeology or biology that cannot afford the expense of costly underwater operations using remotely operated vehicles. Autonomous grasping is still a very challenging skill, especially in underwater environments, with highly unstructured scenarios, limited availability of sensors and adverse conditions that affect the robot's perception and control systems. To tackle these issues, we propose the use of vision and segmentation techniques that aim to improve the specification of grasping operations on underwater primitive-shaped objects. Several sources of stereo information are used to gather 3D information in order to obtain a model of the object. Using a RANSAC segmentation algorithm, the model parameters are estimated and a set of feasible grasps is computed. This approach is validated in both simulated and real underwater scenarios.
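As a loose illustration of the kind of RANSAC segmentation step described above (not the authors' implementation), the sketch below fits a plane to a 3-D point cloud with a hand-rolled RANSAC loop; the tolerance and iteration count are arbitrary assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.01, rng=np.random.default_rng(0)):
    """Fit a plane n . p = d to an (N, 3) point cloud with RANSAC."""
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ sample[0]
        dist = np.abs(points @ normal - d)    # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```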
Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela
2016-02-24
The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
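A minimal sketch of one common way to extract the central crop row under variable lighting (an excess-green vegetation index plus a column-wise centroid), offered only as an illustration of the kind of image-processing step described in this abstract; it is not the authors' algorithm and the threshold is an assumption.

```python
import numpy as np

def crop_row_offset(rgb):
    """Return the horizontal offset (pixels) of the central crop row.

    rgb : (H, W, 3) float array in [0, 1]
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                     # excess-green vegetation index
    mask = exg > exg.mean() + exg.std()     # adaptive threshold, lighting-tolerant
    cols = mask.sum(axis=0)                 # vegetation count per image column
    if cols.sum() == 0:
        return None
    centroid = (np.arange(cols.size) * cols).sum() / cols.sum()
    return centroid - rgb.shape[1] / 2.0    # signed offset for a steering controller
```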
Autonomous Lawnmower using FPGA implementation.
NASA Astrophysics Data System (ADS)
Ahmad, Nabihah; Lokman, Nabill bin; Helmy Abd Wahab, Mohd
2016-11-01
Nowadays, various types of robots have been invented for multiple purposes. These robots have special capabilities that surpass human abilities and can operate in extreme environments that humans cannot endure. In this paper, an autonomous robot is built to imitate a human cutting grass. A Field Programmable Gate Array (FPGA) is used to control the movements, and all data and information are processed on it. Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) is used to describe the hardware using the Quartus II software. The robot has the ability to avoid obstacles using an ultrasonic sensor and uses two DC motors for its movement, which includes moving forward and backward and turning left and right. The movement, or path, of the automatic lawn mower is based on a path-planning technique. Four Global Positioning System (GPS) points are set to create a boundary, to ensure that the lawn mower operates within the area given by the user. Every action of the lawn mower is controlled by the Cyclone II FPGA DE-series board with the help of the sensors. Furthermore, SketchUp software was used to design the structure of the lawn mower. The autonomous lawn mower was able to operate efficiently and smoothly, returning to its planned path after passing an obstacle. It uses 25% of the total pins available on the board and 31% of the total Digital Signal Processing (DSP) blocks.
Amador-Angulo, Leticia; Mendoza, Olivia; Castro, Juan R.; Rodríguez-Díaz, Antonio; Melin, Patricia; Castillo, Oscar
2016-01-01
A hybrid approach composed of different types of fuzzy systems, such as the Type-1 Fuzzy Logic System (T1FLS), Interval Type-2 Fuzzy Logic System (IT2FLS) and Generalized Type-2 Fuzzy Logic System (GT2FLS), for the dynamic adaptation of the alpha and beta parameters of a Bee Colony Optimization (BCO) algorithm is presented. The objective of the work is to focus on the BCO technique to find the optimal distribution of the membership functions in the design of fuzzy controllers. We use BCO specifically for tuning the membership functions of the fuzzy controller for trajectory stability in an autonomous mobile robot. We add two types of perturbations in the model for the Generalized Type-2 Fuzzy Logic System to better analyze its behavior under uncertainty, and this shows better results when compared to the original BCO. We implemented various performance indices: ITAE, IAE, ISE, ITSE, RMSE and MSE to measure the performance of the controller. The experimental results show better performance using the GT2FLS than the IT2FLS and T1FLS in the dynamic adaptation of the parameters for the BCO algorithm. PMID:27618062
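For reference, the performance indices listed (ITAE, IAE, ISE, ITSE, RMSE, MSE) have standard textbook definitions; the sketch below computes them from a sampled error signal. This is generic background, not the authors' code.

```python
import numpy as np

def _trapz(y, t):
    """Trapezoidal integral of y(t) over the sample times t."""
    dt = np.diff(t)
    return float(np.sum(dt * (y[:-1] + y[1:]) / 2.0))

def control_indices(t, error):
    """Standard integral/statistical performance indices for a sampled error e(t)."""
    e = np.asarray(error, dtype=float)
    t = np.asarray(t, dtype=float)
    return {
        "IAE":  _trapz(np.abs(e), t),        # integral of absolute error
        "ISE":  _trapz(e ** 2, t),           # integral of squared error
        "ITAE": _trapz(t * np.abs(e), t),    # time-weighted absolute error
        "ITSE": _trapz(t * e ** 2, t),       # time-weighted squared error
        "MSE":  float(np.mean(e ** 2)),
        "RMSE": float(np.sqrt(np.mean(e ** 2))),
    }
```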
NASA Technical Reports Server (NTRS)
Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura
2007-01-01
The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project that was started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration. There has been a large gap between autonomy software (at the research level), and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services. This enables significant simulation flexibility, particularly the mixing and control of fidelity level. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MSF supports a number of third-party products including dynamic models and terrain databases. Although the communication objects and some of the simulation components that are provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to any other domain, such as aerial, aquatic, or space.
Autonomous propulsion of nanorods trapped in an acoustic field
NASA Astrophysics Data System (ADS)
Sader, John; Collis, Jesse; Chakraborty, Debadi
2017-11-01
Recent measurements demonstrate that nanorods trapped in acoustic fields generate autonomous propulsion, with their direction and speed controlled by both the particle's shape and density distribution. In this talk, we investigate the physical mechanisms underlying this combined density/shape induced phenomenon by developing a simple yet rigorous mathematical framework for arbitrary axisymmetric particles. This only requires solution of the (linear) unsteady Stokes equations. Geometric and density asymmetries in the particle generate axial jets that can produce motion in either direction. Strikingly, the propulsion direction is found to reverse with increasing frequency, an effect that is yet to be reported experimentally. The general theory and mechanism described here enable the a priori design and fabrication of nano-motors in fluid for transport of small-scale payloads and robotic applications.
A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents
Conesa-Muñoz, Jesús; Valente, João; del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela
2016-01-01
Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability to reach and easily cover large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, nowadays they do not have the capability to act. On the other hand, ground vehicles rely on enough power to perform the intervention but exhibit more mobility constraints. This paper proposes a multi-robot sense-act system, composed of aerial and ground vehicles. This combination allows performing autonomous tasks in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system as well as proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved to be suitable to reduce the amount of herbicide applied in agricultural treatments. Although herbicides are very polluting, they are massively deployed on complete agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except when required by Spanish law due to safety reasons. PMID:27517934
Essential Kinematics for Autonomous Vehicles
1994-05-02
AD-.A282 456 Essential Kinematics for Autonomous Vehicles Alonzo Kelly DTICCMU-RI-TR-94- 14 AU 031994 F The Robotics Institute Carnegie Mellon...kit of concepts and techniques that will equip the reader to master a large class of kinematic modelling problems. Control of autonomous vehicles in 3D...transformation from system ’a’ to system b’. Essential Kinematics for Autonomous Vehicles page 1. The specification of derivatives will be necessarily
Oudeyer, Pierre-Yves
2017-01-01
Autonomous lifelong development and learning are fundamental capabilities of humans, differentiating them from current deep learning systems. However, other branches of artificial intelligence have designed crucial ingredients towards autonomous learning: curiosity and intrinsic motivation, social learning and natural interaction with peers, and embodiment. These mechanisms guide exploration and autonomous choice of goals, and integrating them with deep learning opens stimulating perspectives.
Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty
NASA Astrophysics Data System (ADS)
Armah, Stephen Kofi
Autonomous control of mobile robots has attracted considerable attention of researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust and/or adaptive navigation control systems. In spite of enormous reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research work develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD-feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using MATLAB S-function to generate the state output. Secondly, the white-box and black-box approaches are used to obtain linearized second-order altitude models for the quadrotor, AR.Drone 2.0. Proportional (P), pole placement or proportional plus velocity (PV), linear quadratic regulator (LQR), and model reference adaptive control (MRAC) controllers are designed and validated through simulations using MATLAB/Simulink. Control input saturation and time delay in the controlled systems are also studied. MATLAB graphical user interface (GUI) and Simulink programs are developed to implement the controllers on the drone. Thirdly, the time delay in the drone's control system is estimated using analytical and experimental methods. In the experimental approach, the transient properties of the experimental altitude responses are compared to those of simulated responses. The analytical approach makes use of the Lambert W function to obtain analytical solutions of scalar first-order delay differential equations (DDEs). A time-delayed P-feedback control system (retarded type) is used in estimating the time delay. Then an improved system performance is obtained by incorporating the estimated time delay in the design of the PV control system (neutral type) and PV-MRAC control system. Furthermore, the stability of a parametric perturbed linear time-invariant (LTI) retarded-type system is studied. This is done by analytically calculating the stability radius of the system. Simulation of the control system is conducted to confirm the stability. This robust control design and uncertainty analysis are conducted for first-order and second-order quadrotor models.
Lastly, the robustly designed PV and PV-MRAC control systems are used to autonomously track multiple waypoints. Also, the robustness of the PV-MRAC controller is tested against a baseline PV controller using the payload capability of the drone. It is shown that the PV-MRAC offers several benefits over the fixed-gain approach of the PV controller. The adaptive control is found to offer enhanced robustness to the payload fluctuations.
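The analytical delay-estimation step described above rests on the Lambert W solution of scalar first-order DDEs. As general background (not the thesis code), the sketch below computes a characteristic root of x'(t) = a x(t) + b x(t − τ) with scipy.special.lambertw; the gain and delay values are arbitrary assumptions.

```python
import numpy as np
from scipy.special import lambertw

def dde_root(a, b, tau, branch=0):
    """Characteristic root of the scalar DDE  x'(t) = a x(t) + b x(t - tau).

    The characteristic equation s = a + b e^{-s tau} can be rearranged into the
    Lambert W form:  s = a + W_k(b tau e^{-a tau}) / tau.  For this scalar case
    the principal branch (k = 0) is known to give the rightmost root, whose real
    part decides stability.
    """
    return a + lambertw(b * tau * np.exp(-a * tau), k=branch) / tau

# Example: delayed proportional altitude feedback x'(t) = -k x(t - tau)
# (a = 0, b = -k); the closed loop is stable while Re(s) < 0.
k, tau = 0.5, 0.4
s0 = dde_root(0.0, -k, tau)
print(f"rightmost root real part: {s0.real:.4f}, stable: {s0.real < 0}")
```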
Technologies for Human Exploration
NASA Technical Reports Server (NTRS)
Drake, Bret G.
2014-01-01
Access to Space, Chemical Propulsion, Advanced Propulsion, In-Situ Resource Utilization, Entry, Descent, Landing and Ascent, Humans and Robots Working Together, Autonomous Operations, In-Flight Maintenance, Exploration Mobility, Power Generation, Life Support, Space Suits, Microgravity Countermeasures, Autonomous Medicine, Environmental Control.
The magic glove: a gesture-based remote controller for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark
2012-01-01
This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful in moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system, worn by the operator as a glove, consists of a microcontroller and sensors and is capable of recognizing hand signals, which are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first successfully demonstrated in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
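As a rough sketch of how a tilt-sensed glove can be mapped to drive commands (the axis conventions, dead-zone threshold, and command names here are assumptions for illustration, not the authors' design):

```python
import math

def gesture_to_command(ax, ay, az, dead_zone=15.0):
    """Map accelerometer readings (g units) to a coarse drive command.

    Pitch/roll are estimated from the gravity vector; a dead zone around
    level keeps the robot stopped when the hand is held flat.
    """
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    if abs(pitch) < dead_zone and abs(roll) < dead_zone:
        return "STOP"
    if abs(pitch) >= abs(roll):
        return "FORWARD" if pitch > 0 else "REVERSE"
    return "RIGHT" if roll > 0 else "LEFT"
```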
Bilevel shared control for teleoperators
NASA Technical Reports Server (NTRS)
Hayati, Samad A. (Inventor); Venkataraman, Subramanian T. (Inventor)
1992-01-01
A shared system is disclosed for robot control, including integration of the human and autonomous input modalities for improved control. Autonomously planned motion trajectories are modified by a teleoperator to track unmodelled target motions, while nominal teleoperator motions are modified through compliance to autonomously accommodate geometric errors. A hierarchical shared system intelligently shares control over a remote robot between the autonomous and teleoperative portions of an overall control system. The architecture is hierarchical and consists of two levels: the top level represents the task level, while the bottom represents the execution level. In space applications, the performance of pure teleoperation systems depends significantly on the communication time delays between the local and the remote sites. Selection/mixing matrices are provided with entries which reflect how the signals of each input modality are weighted. The shared control minimizes the detrimental effects caused by these time delays between earth and space.
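A minimal sketch of the selection/mixing idea (the weights and dimensions are illustrative assumptions, not the patent's actual matrices): the commanded motion is a weighted blend of the teleoperator and autonomous inputs.

```python
import numpy as np

# 3-DOF translational command blended per axis: 1.0 = fully teleoperated,
# 0.0 = fully autonomous. Here x/y follow the operator, z stays autonomous.
W_tele = np.diag([1.0, 1.0, 0.0])
W_auto = np.eye(3) - W_tele

def shared_command(u_tele, u_auto):
    """Blend teleoperator and autonomous velocity commands."""
    return W_tele @ np.asarray(u_tele) + W_auto @ np.asarray(u_auto)

print(shared_command([0.2, -0.1, 0.0], [0.0, 0.0, 0.05]))
```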
Adaptive artificial neural network for autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.
An update on Lab Rover: A hospital material transporter
NASA Technical Reports Server (NTRS)
Mattaboni, Paul
1994-01-01
The development of a hospital material transporter, 'Lab Rover', is described. Conventional material transport now utilizes people power, push carts, pneumatic tubes and tracked vehicles. Hospitals are faced with enormous pressure to reduce operating costs. Cyberotics, Inc. developed an Autonomous Intelligent Vehicle (AIV). This battery operated service robot was designed specifically for health care institutions. Applications for the AIV include distribution of clinical lab samples, pharmacy drugs, administrative records, x-ray distribution, meal tray delivery, and certain emergency room applications. The first AIV was installed at Lahey Clinic in Burlington, Mass. Lab Rover was beta tested for one year and has been 'on line' for an additional 2 years.
Integrating autonomous distributed control into a human-centric C4ISR environment
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2017-05-01
This paper considers incorporating autonomy into human-centric Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) environments. Specifically, it focuses on identifying ways that current autonomy technologies can augment human control and the challenges presented by additive autonomy. Three approaches to this challenge are considered, stemming from prior work in two converging areas. In the first, the problem is approached as augmenting what humans currently do with automation. In the alternate approach, humans are treated as actors within a cyber-physical system-of-systems (stemming from robotic distributed computing). A third approach combines elements of both of the aforementioned.
Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor
NASA Astrophysics Data System (ADS)
Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick
This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then let the sensor, mounted on the robot, measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
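The straight-string constraint means that, for any candidate set of kinematic parameters, forward kinematics of all measured intersection points should produce collinear points in the base frame; a simple calibration cost is then the residual of a best-fit 3-D line. A sketch of that cost follows (not the authors' implementation; how the points are produced from candidate parameters is assumed to be supplied elsewhere).

```python
import numpy as np

def line_fit_residual(points):
    """RMS distance of 3-D points to their best-fit line (via SVD).

    points : (N, 3) array of intersection points expressed in the robot base
             frame using candidate kinematic parameters. Well-calibrated
             parameters drive this residual toward the sensor noise level.
    """
    p = np.asarray(points, dtype=float)
    centroid = p.mean(axis=0)
    q = p - centroid
    # Principal direction of the point cloud = best-fit line direction.
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    direction = vt[0]
    # Perpendicular component of each point relative to the line.
    perp = q - np.outer(q @ direction, direction)
    return np.sqrt((perp ** 2).sum(axis=1).mean())
```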
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based visual servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an extended Kalman filter (EKF). Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
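The incremental step described above can be sketched as follows; the damped pseudo-inverse and the clipping of joint increments to speed limits are illustrative choices rather than the authors' exact formulation, and `forward_kinematics`/`jacobian` stand in for an unspecified manipulator model.

```python
import numpy as np

def incremental_ik_step(q, x_desired, forward_kinematics, jacobian, qdot_max, dt):
    """One incremental inverse-kinematics update toward a predicted target pose.

    q          : current joint angles
    x_desired  : instantaneous desired end-effector position (from the target estimate)
    qdot_max   : per-joint speed limits
    """
    x_current = forward_kinematics(q)
    error = x_desired - x_current
    J = jacobian(q)
    # Damped least-squares step (illustrative) sidesteps choosing among
    # multiple closed-form IK solutions.
    lam = 1e-3
    dq = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(J.shape[0]), error)
    # Respect joint speed limits by clipping the increment.
    dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)
    return q + dq
```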
I want what you've got: Cross-platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble, Ph.D.*.; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, trustworthy autonomy that combines with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach a level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent to the conduct of human factors testing of variable autonomy control architectures and across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems for how humans and robots interact in true interactive teams.
Application of ant colony algorithm in path planning of the data center room robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Ma, Jianming; Wang, Ying
2017-05-01
Taking an Internet Data Center (IDC) room patrol robot as the application background, this paper addresses the robot's autonomous obstacle avoidance and path planning so that patrol missions can be worked out in advance. The simulation results show that the improved ant colony algorithm, applied to obstacle avoidance planning for the IDC room patrol robot, makes the robot follow an optimal or suboptimal, safe obstacle-avoiding path to the target point and complete its task, proving the feasibility of the method.
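A minimal sketch of ant colony optimization on an occupancy grid, in the spirit of the improved algorithm described above; the pheromone update rule and the parameters alpha, beta, rho are generic textbook choices, not values from the paper.

```python
import random

def aco_grid_path(grid, start, goal, n_ants=30, n_iters=50,
                  alpha=1.0, beta=2.0, rho=0.3, q=1.0):
    """Ant colony path search on a 4-connected grid (1 = obstacle).

    Returns the best obstacle-free path found from start to goal.
    """
    rows, cols = len(grid), len(grid[0])
    pher = {}  # pheromone per directed edge

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                yield (nr, nc)

    def heuristic(cell):
        return 1.0 / (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]) + 1)

    best = None
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal and len(path) < rows * cols:
                cands = [n for n in neighbors(node) if n not in visited]
                if not cands:
                    break
                weights = [(pher.get((node, n), 1.0) ** alpha) *
                           (heuristic(n) ** beta) for n in cands]
                node = random.choices(cands, weights)[0]
                path.append(node)
                visited.add(node)
            if node == goal:
                tours.append(path)
                if best is None or len(path) < len(best):
                    best = path
        # Evaporation, then pheromone deposit proportional to tour quality.
        pher = {edge: (1 - rho) * tau for edge, tau in pher.items()}
        for path in tours:
            for a, b in zip(path, path[1:]):
                pher[(a, b)] = pher.get((a, b), 1.0) + q / len(path)
    return best
```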
Task-level robot programming: Integral part of evolution from teleoperation to autonomy
NASA Technical Reports Server (NTRS)
Reynolds, James C.
1987-01-01
An explanation is presented of task-level robot programming and of how it differs from the usual interpretation of task planning for robotics. Most importantly, it is argued that the physical and mathematical basis of task-level robot programming provides inherently greater reliability than efforts to apply better known concepts from artificial intelligence (AI) to autonomous robotics. Finally, an architecture is presented that allows the integration of task-level robot programming within an evolutionary, redundant, and multi-modal framework that spans teleoperation to autonomy.
Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.
Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco
2009-01-01
This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
A multimodal interface for real-time soldier-robot teaming
NASA Astrophysics Data System (ADS)
Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.
2016-05-01
Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
A power-autonomous self-rolling wheel using ionic and capacitive actuators
NASA Astrophysics Data System (ADS)
Must, Indrek; Kaasik, Toomas; Baranova, Inna; Johanson, Urmas; Punning, Andres; Aabloo, Alvo
2015-04-01
Ionic electroactive polymer (IEAP) laminates are often considered a prospective actuator technology for mobile robotic appliances; however, only a few real proof-of-concept-stage robots have been built previously, the majority of which depend on an off-board power supply. In this work, a power-autonomous robot, propelled by four IEAP actuators having carbonaceous electrodes, is constructed. The robot consists of a light outer section in the form of a hollow cylinder and a heavy inner section, referred to as the rim and the hub, respectively. The hub is connected to the rim using IEAP actuators, which form `spokes' of variable length. The effective length of the spokes is changed via charging and discharging of the capacitive IEAP actuators, and a change in the effective lengths of the spokes results in a rolling motion of the robot. The constructed IEAP robot takes advantage of the distinctive properties of the IEAP actuators. The IEAP actuators transform the geometry of the whole robot, while being soft and compliant. The low-voltage IEAP actuators in the robot are powered directly from an embedded single-cell lithium-ion battery, with no voltage regulation required; instead, only the input current is regulated. The charging of the actuators is switched in accordance with the robot's instantaneous position using on-board control electronics. The constructed robot is able to roll for an extended period on a smooth surface. The locomotion of the IEAP robot is analyzed using video recognition.
NASA Astrophysics Data System (ADS)
Ososky, Scott; Sanders, Tracy; Jentsch, Florian; Hancock, Peter; Chen, Jessie Y. C.
2014-06-01
Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even systems that are less than 100% reliable can provide a significant benefit to humans, but this benefit will depend on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design for maintaining humans' trust in and reliance on increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence humans' impressions of robots, determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human-robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human-robot communication and human mental models regarding robots impact a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.
A Mobile Robot for Small Object Handling
NASA Astrophysics Data System (ADS)
Fišer, Ondřej; Szűcsová, Hana; Grimmer, Vladimír; Popelka, Jan; Vonásek, Vojtěch; Krajník, Tomáš; Chudoba, Jan
The aim of this paper is to present an intelligent autonomous robot capable of small object manipulation. The design of the robot is influenced mainly by the rules of EUROBOT 09 competition. In this challenge, two robots pick up objects scattered on a planar rectangular playfield and use these elements to build models of Hellenistic temples. This paper describes the robot hardware, i.e. electro-mechanics of the drive, chassis and manipulator, as well as the software, i.e. localization, collision avoidance, motion control and planning algorithms.
On the role of emotion in biological and robotic autonomy.
Ziemke, Tom
2008-02-01
This paper reviews some of the differences between notions of biological and robotic autonomy, and how these differences have been reflected in discussions of embodiment, grounding and other concepts in AI and autonomous robotics. Furthermore, the relations between homeostasis, emotion and embodied cognition are discussed, as well as recent proposals to model their interplay in robots, which reflect a commitment to a multi-tiered affectively/emotionally embodied view of mind that takes organismic embodiment more seriously than is usually done in biologically inspired robotics.
NASA Astrophysics Data System (ADS)
Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar
2017-02-01
In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.
RoBlock: a prototype autonomous manufacturing cell
NASA Astrophysics Data System (ADS)
Baekdal, Lars K.; Balslev, Ivar; Eriksen, Rene D.; Jensen, Soren P.; Jorgensen, Bo N.; Kirstein, Brian; Kristensen, Bent B.; Olsen, Martin M.; Perram, John W.; Petersen, Henrik G.; Petersen, Morten L.; Ruhoff, Peter T.; Skjolstrup, Carl E.; Sorensen, Anders S.; Wagenaar, Jeroen M.
2000-10-01
RoBlock is the first phase of an internally financed project at the Institute aimed at building a system in which two industrial robots suspended from a gantry cooperate to perform a task specified by an external user, in this case, assembling an unstructured collection of colored wooden blocks into a specified 3D pattern. The blocks are identified and localized using computer vision and grasped with a suction cup mechanism. Future phases of the project will involve other processes such as grasping and lifting, as well as other types of robot such as autonomous vehicles or variable geometry trusses. Innovative features of the control software system include: the use of an advanced trajectory planning system which ensures collision avoidance based on a generalization of the method of artificial potential fields; the use of a generic model-based controller which learns the values of parameters, including static and kinetic friction, of a detailed mechanical model of itself by comparing actual with planned movements; the use of fast, flexible, and robust pattern recognition and 3D-interpretation strategies; and integration of trajectory planning and control with the sensor systems in a distributed Java application running on a network of PCs attached to the individual physical components. In designing this first stage, the aim was to build in the minimum complexity necessary to make the system non-trivially autonomous and to minimize the technological risks. The aims of this project, which is planned to be operational during 2000, are as follows: to provide a platform for carrying out experimental research in multi-agent systems and autonomous manufacturing systems; to test the interdisciplinary cooperation architecture of the Maersk Institute, in which researchers in the fields of applied mathematics (modeling the physical world), software engineering (modeling the system) and sensor/actuator technology (relating the virtual and real worlds) could collaborate with systems integrators to construct intelligent, autonomous systems; and to provide a showpiece demonstrator in the entrance hall of the Institute's new building.
SyRoTek--Distance Teaching of Mobile Robotics
ERIC Educational Resources Information Center
Kulich, M.; Chudoba, J.; Kosnar, K.; Krajnik, T.; Faigl, J.; Preucil, L.
2013-01-01
E-learning is a modern and effective approach for training in various areas and at different levels of education. This paper gives an overview of SyRoTek, an e-learning platform for mobile robotics, artificial intelligence, control engineering, and related domains. SyRoTek provides remote access to a set of fully autonomous mobile robots placed in…
Remote Control and Children's Understanding of Robots
ERIC Educational Resources Information Center
Somanader, Mark C.; Saylor, Megan M.; Levin, Daniel T.
2011-01-01
Children use goal-directed motion to classify agents as living things from early in infancy. In the current study, we asked whether preschoolers are flexible in their application of this criterion by introducing them to robots that engaged in goal-directed motion. In one case the robot appeared to move fully autonomously, and in the other case it…
Autonomous mobile robot for radiologic surveys
Dudar, A.M.; Wagner, D.G.; Teese, G.D.
1994-06-28
An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.
Autonomous mobile robot for radiologic surveys
Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.
1994-01-01
An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.
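The resurvey-and-alarm behavior described in both entries can be summarized as a small state machine; the speeds and the contamination threshold below are placeholders, not values from the patent.

```python
from enum import Enum, auto

class Mode(Enum):
    SURVEY = auto()
    RESURVEY = auto()
    ALARM = auto()

class SurveyBehavior:
    """Illustrative sketch of the survey / resurvey / alarm logic described above.

    Thresholds and speeds are placeholders, not values from the patent.
    """
    def __init__(self, survey_speed=0.5, resurvey_speed=0.1, threshold=100.0):
        self.mode = Mode.SURVEY
        self.survey_speed = survey_speed
        self.resurvey_speed = resurvey_speed
        self.threshold = threshold  # reading level treated as contamination

    def step(self, radiation_reading):
        """Return the commanded speed for the current radiation reading."""
        if self.mode == Mode.SURVEY:
            if radiation_reading > self.threshold:
                self.mode = Mode.RESURVEY      # slow down and re-check the spot
                return self.resurvey_speed
            return self.survey_speed
        if self.mode == Mode.RESURVEY:
            if radiation_reading > self.threshold:
                self.mode = Mode.ALARM          # confirmed: stop and sound alarm
                return 0.0
            self.mode = Mode.SURVEY             # not confirmed: resume path
            return self.survey_speed
        return 0.0                              # ALARM: remain stopped
```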
Developing Autonomous Vehicles That Learn to Navigate by Mimicking Human Behavior
2006-09-28
navigate in an unstructured environment to a specific target or location. Subject terms: autonomous vehicles, fuzzy logic, learning behavior ... Developing Autonomous Vehicles That Learn to Navigate by Mimicking Human Behavior, final report, 9/28/2006, Dean B. Edwards, Department ... In the future, as greater numbers of autonomous vehicles are employed, it is hoped that lower ... Long-term goals: use LAGR (Learning Applied to Ground Robots
Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles.
1985-07-01
C. N. Shen, DTIC, July 1985, US Army Armament Research and Development Center, Large Caliber Weapon Systems Laboratory ... Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles, final report ... which require autonomous vehicles. Essential to these robotic vehicles is an adequate and efficient computer vision system. A potentially more
Mission-directed path planning for planetary rover exploration
NASA Astrophysics Data System (ADS)
Tompkins, Paul
2005-07-01
Robotic rovers uniquely benefit planetary exploration: they enable regional exploration with the precision of in-situ measurements, a combination impossible from an orbiting spacecraft or fixed lander. Mission planning for planetary rover exploration currently utilizes sophisticated software for activity planning and scheduling, but simplified path planning and execution approaches tailored for localized operations to individual targets. This approach is insufficient for the investigation of multiple, regionally distributed targets in a single command cycle. Path planning tailored for this task must consider the impact of large scale terrain on power, speed and regional access; the effect of route timing on resource availability; the limitations of finite resource capacity and other operational constraints on vehicle range and timing; and the mutual influence between traverses and upstream and downstream stationary activities. Encapsulating this reasoning in an efficient autonomous planner would allow a rover to continue operating rationally despite significant deviations from an initial plan. This research presents mission-directed path planning that enables an autonomous, strategic reasoning capability for robotic explorers. Planning operates in a space of position, time and energy. Unlike previous hierarchical approaches, it treats these dimensions simultaneously to enable globally-optimal solutions. The approach calls on an incremental search algorithm designed for planning and re-planning under global constraints, in spaces of higher than two dimensions. Solutions under this method specify routes that avoid terrain obstacles, optimize the collection and use of rechargeable energy, satisfy local and global mission constraints, and account for the time and energy of interleaved mission activities. Furthermore, the approach efficiently re-plans in response to updates in vehicle state and world models, and is well suited to online operation aboard a robot. Simulations show that the new methodology succeeds where conventional path planners would fail. Three planetary-relevant field experiments demonstrate the power of mission-directed path planning in directing actual exploration robots. Offline mission-directed planning sustained a solar-powered rover in a 24-hour sun-synchronous traverse. Online planning and re-planning enabled full navigational autonomy of over 1 kilometer, and supported the execution of science activities distributed over hundreds of meters.
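To make the idea of searching jointly over position, time and energy concrete, here is a toy best-first search over (cell, time, energy) states; the cost model, recharge term and dominance check are simplified assumptions and do not reproduce the thesis's incremental algorithm.

```python
import heapq

def mission_path(grid_cost, start, goal, t_max, e_max,
                 move_time=1.0, move_energy=1.0, recharge=0.2):
    """Toy search over (cell, time, energy) states, a simplified analogue of
    planning simultaneously in position, time and energy.

    grid_cost[r][c] : traversal difficulty (None = obstacle); recharge models
    energy gained per step (e.g., solar input). All parameters are illustrative.
    """
    rows, cols = len(grid_cost), len(grid_cost[0])
    frontier = [(0.0, (start, 0.0, e_max), [start])]  # (cost, state, path)
    best = {}
    while frontier:
        cost, (cell, t, e), path = heapq.heappop(frontier)
        if cell == goal:
            return path, t, e
        key = (cell, round(t, 1))
        if best.get(key, -1.0) >= e:                   # dominated: seen with more energy
            continue
        best[key] = e
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid_cost[nr][nc] is None:
                continue
            nt = t + move_time * grid_cost[nr][nc]
            ne = min(e_max, e - move_energy * grid_cost[nr][nc] + recharge)
            if nt > t_max or ne < 0:                   # violates mission constraints
                continue
            heapq.heappush(frontier, (cost + grid_cost[nr][nc],
                                      ((nr, nc), nt, ne), path + [(nr, nc)]))
    return None, None, None
```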
Autonomous In-Situ Resources Prospector
NASA Technical Reports Server (NTRS)
Dissly, R. W.; Buehler, M. G.; Schaap, M. G.; Nicks, D.; Taylor, G. J.; Castano, R.; Suarez, D.
2004-01-01
This presentation will describe the concept of an autonomous, intelligent, rover-based rapid surveying system to identify and map several key lunar resources to optimize their ISRU (In Situ Resource Utilization) extraction potential. Prior to an extraction phase for any target resource, ground-based surveys are needed to provide confirmation of remote observation, to quantify and map their 3-D distribution, and to locate optimal extraction sites (e.g. ore bodies) with precision to maximize their economic benefit. The system will search for and quantify optimal minerals for oxygen production feedstock, water ice, and high glass-content regolith that can be used for building materials. These are targeted because of their utility and because they are, or are likely to be, variable in quantity over spatial scales accessible to a rover (i.e., few km). Oxygen has benefits for life support systems and as an oxidizer for propellants. Water is a key resource for sustainable exploration, with utility for life support, propellants, and other industrial processes. High glass-content regolith has utility as a feedstock for building materials as it readily sinters upon heating into a cohesive matrix more readily than other regolith materials or crystalline basalts. Lunar glasses are also a potential feedstock for oxygen production, as many are rich in iron and titanium oxides that are optimal for oxygen extraction. To accomplish this task, a system of sensors and decision-making algorithms for an autonomous prospecting rover is described. One set of sensors will be located in the wheel tread of the robotic search vehicle providing contact sensor data on regolith composition. Another set of instruments will be housed on the platform of the rover, including VIS-NIR imagers and spectrometers, both for far-field context and near-field characterization of the regolith in the immediate vicinity of the rover. Also included in the sensor suite are a neutron spectrometer, ground-penetrating radar, and an instrumented cone penetrometer for subsurface assessment. Output from these sensors will be evaluated autonomously in real-time by decision-making software to evaluate if any of the targeted resources has been detected, and if so, to quantify their abundance. Algorithms for optimizing the mapping strategy based on target resource abundance and distribution are also included in the autonomous software. This approach emphasizes on-the-fly survey measurements to enable efficient and rapid prospecting of large areas, which will improve the economics of ISRU system approaches. The mature technology will enable autonomous rovers to create in-situ resource maps of lunar or other planetary surfaces, which will facilitate human and robotic exploration.
External force/velocity control for an autonomous rehabilitation robot
NASA Astrophysics Data System (ADS)
Saekow, Peerayuth; Neranon, Paramin; Smithmaitrie, Pruittikorn
2018-01-01
Stroke is a primary cause of death and the leading cause of permanent disability in adults. There are many stroke survivors, who live with a variety of levels of disability and always need rehabilitation activities on a daily basis. Several studies have reported that the use of rehabilitation robotic devices yields better improvement outcomes in upper-limb stroke patients than conventional therapy, in which nurses or therapists actively help patients with exercise-based rehabilitation. This research focuses on the development of an autonomous robotic trainer designed to guide a stroke patient through an upper-limb rehabilitation task. The robotic device was designed and developed to automate the reaching exercise mentioned above. The designed robotic system is made up of a four-wheel omni-directional mobile robot, an ATI Gamma multi-axis force/torque sensor used to measure contact force, and a microcontroller real-time operating system. Proportional-plus-integral control was adopted to control the overall performance and stability of the autonomous assistive robot. External force control was successfully implemented to establish the behavioral control strategy for the robot force and velocity control scheme. In summary, the experimental results indicated that the robot force and velocity control achieved satisfactorily stable performance that can be considered acceptable. The gains for the proportional-integral (PI) velocity control algorithm were suitably estimated using the Ziegler-Nichols method, in which the optimized proportional and integral gains are 0.45 and 0.11, respectively. Additionally, the PI external force control gains were experimentally tuned using the trial-and-error method based on a set of experiments in which a human participant moved the robot along a constrained circular path whilst attempting to minimize the radial force. The performance was analyzed based on the root mean square error (E_RMS) of the radial forces, in which the lower the variation in radial forces, the better the performance of the system. The best performance, as specified by the E_RMS of the radial force, was observed with proportional and integral gains of Kp = 0.7 and Ki = 0.75, respectively.
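A minimal sketch of the cascaded control idea, assuming an outer PI force loop that biases a velocity reference tracked by an inner PI velocity loop; the cascade structure and time step are assumptions, while the gains echo those reported in the abstract.

```python
class PI:
    """Discrete proportional-integral controller (illustrative)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Outer force loop produces a radial velocity correction; inner loop tracks
# the wheel velocity. The cascade arrangement is an assumption; the gains
# echo the values reported in the abstract.
dt = 0.01
force_loop = PI(kp=0.7, ki=0.75, dt=dt)       # external force control
velocity_loop = PI(kp=0.45, ki=0.11, dt=dt)   # velocity control

def control_step(radial_force, measured_velocity, nominal_velocity):
    # Drive the radial contact force toward zero by adjusting the velocity reference.
    v_ref = nominal_velocity + force_loop.update(0.0 - radial_force)
    # Inner loop turns the velocity error into a motor command.
    return velocity_loop.update(v_ref - measured_velocity)
```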
Fernandez-Leon, Jose A; Acosta, Gerardo G; Rozenfeld, Alejandro
2014-10-01
Researchers in diverse fields, such as neuroscience, systems biology and autonomous robotics, have been intrigued by the origin and mechanisms of biological robustness. Darwinian evolution, in general, has suggested that adaptive mechanisms, as a way of reaching robustness, could evolve by natural selection acting successively on numerous heritable variations. However, is this understanding enough to explain how biological systems remain robust during their interactions with the surroundings? Here, we describe selected studies of bio-inspired systems that show behavioral robustness. From neurorobotics, cognitive, self-organizing and artificial immune system perspectives, our discussions focus mainly on how robust behaviors evolve or emerge in these systems, which have the capacity to interact with their surroundings. These descriptions are twofold. Initially, we introduce examples from autonomous robotics to illustrate how the process of designing robust control can be idealized in complex environments for autonomous navigation in terrain and underwater vehicles. We also include descriptions of bio-inspired self-organizing systems. Then, we introduce other studies that contextualize experimental evolution with simulated organisms and physical robots to exemplify how the process of natural selection can lead to the evolution of robustness by means of adaptive behaviors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method
Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter
2015-01-01
An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer-vision-based image-processing techniques. The method overcomes the inherent ambiguity issues of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
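As a simplified stand-in for the RANSAC-based drogue-center estimation mentioned above, the sketch below performs consensus-based outlier rejection on a 3D point cloud and returns the inlier centroid; the actual pipeline in the paper also involves curve fitting, which is not reproduced here.

```python
import random
import numpy as np

def ransac_center(points, n_iters=200, inlier_radius=0.15, sample_size=5):
    """Estimate a robust 3D center from a point cloud containing outliers.

    Repeatedly take the centroid of a small random sample, count points
    within inlier_radius of it, keep the hypothesis with the most support,
    and return the centroid of that consensus set. Radii and iteration
    counts are illustrative.
    """
    points = np.asarray(points, dtype=float)
    best_inliers = None
    for _ in range(n_iters):
        sample = points[random.sample(range(len(points)), sample_size)]
        center = sample.mean(axis=0)
        dists = np.linalg.norm(points - center, axis=1)
        inliers = points[dists < inlier_radius]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers.mean(axis=0)
```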
Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter
NASA Technical Reports Server (NTRS)
Rock, Stephen M.
1999-01-01
This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.
Rotorcraft and Enabling Robotic Rescue
NASA Technical Reports Server (NTRS)
Young, Larry A.
2010-01-01
This paper examines some of the issues underlying potential robotic rescue devices (RRD) in the context where autonomous or manned rotorcraft deployment of such robotic systems is a crucial attribute for their success in supporting future disaster relief and emergency response (DRER) missions. As a part of this discussion, work related to proof-of-concept prototyping of two notional RRD systems is summarized.
ERIC Educational Resources Information Center
Levy, Sharona T.; Mioduser, David
2008-01-01
This study investigates young children's perspectives in explaining a self-regulating mobile robot, as they learn to program its behaviors from rules. We explore their descriptions of a robot in action to determine the nature of their explanatory frameworks: psychological or technological. We have also studied the role of an adult's intervention…
Bing, Zhenshan; Cheng, Long; Chen, Guang; Röhrbein, Florian; Huang, Kai; Knoll, Alois
2017-04-04
Snake-like robots with 3D locomotion ability have significant advantages over traditional legged or wheeled mobile robots for adaptive travel in diverse complex terrain. Despite numerous developed gaits, these snake-like robots suffer from unsmooth gait transitions when changing locomotion speed, direction, and body shape, which can cause undesired movement and abnormal torque. Hence, there exists a knowledge gap for snake-like robots to achieve autonomous locomotion. To address this problem, this paper presents smooth slithering gait transition control based on a lightweight central pattern generator (CPG) model for snake-like robots. First, based on the convergence behavior of the gradient system, a lightweight CPG model with fast computing time was designed and compared with other widely adopted CPG models. Then, by reshaping the body into a more stable geometry, the slithering gait was modified and studied based on the proposed CPG model, including gait transitions of locomotion speed, moving direction, and body shape. In contrast to the sinusoid-based method, extensive simulations and prototype experiments demonstrated that smooth slithering gait transition can be effectively achieved using the proposed CPG-based control method without generating undesired locomotion or abnormal torque.
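A generic phase-oscillator CPG for a slithering gait is sketched below to illustrate how low-pass-filtered parameter commands yield smooth gait transitions; this is not the paper's gradient-system model, and all parameters are illustrative.

```python
import numpy as np

def cpg_slither(n_joints=8, duration=10.0, dt=0.01,
                freq=1.0, amp=0.5, phase_lag=0.6, tau=0.5):
    """Chain of phase oscillators producing joint angles for a slithering gait.

    Amplitude commands are low-pass filtered (time constant tau) so that a
    parameter change produces a smooth gait transition rather than a jump
    that would cause abnormal torque.
    """
    steps = int(duration / dt)
    phases = np.arange(n_joints) * -phase_lag   # fixed inter-joint phase lag
    amp_state = np.zeros(n_joints)              # filtered amplitude per joint
    angles = np.zeros((steps, n_joints))
    t = 0.0
    for k in range(steps):
        amp_cmd = amp if t > 1.0 else 0.0       # e.g., switch the gait on at t = 1 s
        amp_state += dt / tau * (amp_cmd - amp_state)
        angles[k] = amp_state * np.sin(2 * np.pi * freq * t + phases)
        t += dt
    return angles
```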
NASA Center for Intelligent Robotic Systems for Space Exploration
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential for the success of these space exploration programs is Intelligent Robotic Systems. These systems represent a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five year $5.5 million grant from NASA submitted by a team of the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME,AE,&M), in 1987. This report is an examination of the activities that are centered at CIRSSE.
Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks
NASA Technical Reports Server (NTRS)
Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia
2017-01-01
Teleoperation is the dominant form of dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft at a large distance from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. This framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for performing any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
2014 NASA Centennial Challenges Sample Return Robot Challenge
2014-06-14
Members of team Mountaineers pose with officials from the 2014 NASA Centennial Challenges Sample Return Robot Challenge on Saturday, June 14, 2014 at Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Mountaineers was the only team to complete the level one challenge this year. Team Mountaineers members, from left (in blue shirts), are: Ryan Watson, Marvin Cheng, Scott Harper, Jarred Strader, Lucas Behrens, Yu Gu, Tanmay Mandal, Alexander Hypes, and Nick Ohi. Challenge judges and competition staff (in white and green polo shirts), from left, are: Sam Ortega, NASA Centennial Challenge program manager; Ken Stafford, challenge technical advisor, WPI; Colleen Shaver, challenge event manager, WPI. During the competition, teams were required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge was to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)
Geigle, Paula R; Frye, Sara Kate; Perreault, John; Scott, William H; Gorman, Peter H
2013-03-01
A 41-year-old man with a history of C6 American Spinal Injury Association (ASIA) Impairment Scale (AIS) C spinal cord injury (SCI), enrolled in an Institutional Review Board (IRB)-approved robotic-assisted body weight-supported treadmill training (BWSTT) and aquatic exercise research protocol, developed asymptomatic autonomic dysreflexia (AD) during training. Little information is available regarding the relationship between robotic-assisted BWSTT and AD. After successfully completing 36 sessions of aquatic exercise, he reported exertional fatigue during his 10th Lokomat intervention and exhibited asymptomatic, or silent, AD during this and the three subsequent BWSTT sessions. Standard facilitators of AD were assessed and no obvious irritant was identified other than the actual physical exertion and positioning required during robotic-assisted BWSTT. Increased awareness of potential silent AD presenting during robotic-assisted BWSTT for individuals with motor-incomplete SCI is required because, in this case, clinical signs of AD were not concurrent with its occurrence. Frequent vital sign assessment before, during, and at the conclusion of each BWSTT session is strongly recommended.
A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles
1994-05-02
AD-A282 787. A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles, Alonzo Kelly, CMU-RI-TR-94-17, The Robotics ... follow, or a direction to prefer, it cannot generate its own strategic goals. Therefore, it solves the local planning problem for autonomous vehicles. The ... autonomous vehicles. It is intelligent because it uses range images that are generated from either a laser rangefinder or a stereo triangulation
Current challenges in autonomous vehicle development
NASA Astrophysics Data System (ADS)
Connelly, J.; Hong, W. S.; Mahoney, R. B., Jr.; Sparrow, D. A.
2006-05-01
The field of autonomous vehicles is a rapidly growing one, with significant interest from both government and industry sectors. Autonomous vehicles represent the intersection of artificial intelligence (AI) and robotics, combining decision-making with real-time control. Autonomous vehicles are desired for use in search and rescue, urban reconnaissance, mine detonation, supply convoys, and more. The general adage is to use robots for anything dull, dirty, dangerous or dumb. While a great deal of research has been done on autonomous systems, there are only a handful of fielded examples incorporating machine autonomy beyond the level of teleoperation, especially in outdoor/complex environments. In an attempt to assess and understand the current state of the art in autonomous vehicle development, a few areas where unsolved problems remain became clear. This paper outlines those areas and provides suggestions for the focus of science and technology research. The first step in evaluating the current state of autonomous vehicle development was to develop a definition of autonomy. A number of autonomy level classification systems were reviewed. The resulting working definitions and classification schemes used by the authors are summarized in the opening sections of the paper. The remainder of the report discusses current approaches and challenges in decision-making and real-time control for autonomous vehicles. Suggested research focus areas for near-, mid-, and long-term development are also presented.
Design of a Vision-Based Sensor for Autonomous Pig House Cleaning
NASA Astrophysics Data System (ADS)
Braithwaite, Ian; Blanke, Mogens; Zhang, Guo-Qiang; Carstensen, Jens Michael
2005-12-01
Current pig house cleaning procedures are hazardous to the health of farm workers, and yet necessary if the spread of disease between batches of animals is to be satisfactorily controlled. Autonomous cleaning using robot technology offers salient benefits. This paper addresses the feasibility of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning.
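The clean/dirty discrimination can be illustrated with a per-pixel Gaussian class-conditional model and Bayes' rule; the single spectral feature, prior and decision threshold below are assumptions for illustration, not the paper's calibrated design.

```python
import numpy as np

def train_gaussian_classes(clean_samples, dirty_samples):
    """Fit per-class Gaussian models to a single spectral feature (illustrative)."""
    stats = {}
    for label, data in (("clean", clean_samples), ("dirty", dirty_samples)):
        data = np.asarray(data, dtype=float)
        stats[label] = (data.mean(), data.std() + 1e-9)
    return stats

def classify_pixel(x, stats, prior_dirty=0.3):
    """Bayes' rule on one pixel feature: posterior probability of 'dirty' given x."""
    def gauss(v, mu, sigma):
        return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    like_dirty = gauss(x, *stats["dirty"])
    like_clean = gauss(x, *stats["clean"])
    post = like_dirty * prior_dirty / (
        like_dirty * prior_dirty + like_clean * (1 - prior_dirty) + 1e-12)
    return post > 0.5, post

# Example with made-up reflectance values for clean and dirty surface patches.
stats = train_gaussian_classes(clean_samples=[0.80, 0.78, 0.83, 0.81],
                               dirty_samples=[0.42, 0.39, 0.47, 0.44])
is_dirty, posterior = classify_pixel(0.45, stats)
```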
Ophiuroid robot that self-organizes periodic and non-periodic arm movements.
Kano, Takeshi; Suzuki, Shota; Watanabe, Wataru; Ishiguro, Akio
2012-09-01
Autonomous decentralized control is a key concept for understanding the mechanism underlying adaptive and versatile locomotion of animals. Although the design of an autonomous decentralized control system that ensures adaptability by using coupled oscillators has been proposed previously, it cannot comprehensively reproduce the versatility of animal behaviour. To tackle this problem, we focus on using ophiuroids as a simple model that exhibits versatile locomotion including periodic and non-periodic arm movements. Our existing model for ophiuroid locomotion uses an active rotator model that describes both oscillatory and excitatory properties. In this communication, we develop an ophiuroid robot to confirm the validity of this proposed model in the real world. We show that the robot travels by successfully coordinating periodic and non-periodic arm movements in response to external stimuli.
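The active rotator mentioned above is commonly written as dθ/dt = ω − a·sin θ plus an input term, which is excitable for ω < a and oscillatory for ω > a; the sketch below integrates a single unit with illustrative parameters, omitting the multi-arm coupling used in the robot.

```python
import numpy as np

def active_rotator(omega=0.9, a=1.0, stim=1.0, theta0=0.1,
                   duration=30.0, dt=0.01):
    """Single active rotator: d(theta)/dt = omega - a*sin(theta) + stimulus.

    With omega < a the unit is excitable (responds only when stimulated);
    with omega > a it oscillates periodically. Parameters are illustrative,
    not fitted values from the paper.
    """
    steps = int(duration / dt)
    theta = np.empty(steps)
    theta[0] = theta0
    for k in range(1, steps):
        stimulus = stim if 10.0 < k * dt < 10.5 else 0.0   # brief external poke
        theta[k] = theta[k - 1] + dt * (omega - a * np.sin(theta[k - 1]) + stimulus)
    return theta
```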
Development and training of a learning expert system in an autonomous mobile robot via simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Lyness, E.; DeSaussure, G.
1989-11-01
The Center for Engineering Systems Advanced Research (CESAR) conducts basic research in the area of intelligent machines. Recently at CESAR a learning expert system was created to operate on board an autonomous robot working at a process control panel. The authors discuss the two-computer simulation system used to create, evaluate and train this learning system. The simulation system has a graphics display of the current status of the process being simulated, and the same program which does the simulating also drives the actual control panel. Simulation results were validated on the actual robot. The speed and safety advantages of using a computerized simulator to train a learning computer, and future uses of the simulation system, are discussed.
Navigation of robotic system using cricket motes
NASA Astrophysics Data System (ADS)
Patil, Yogendra J.; Baine, Nicholas A.; Rattan, Kuldip S.
2011-06-01
This paper presents a novel algorithm for self-mapping of the cricket motes that can be used for indoor navigation of autonomous robotic systems. The cricket system is a wireless sensor network that can provide an indoor localization service to its user via acoustic ranging techniques. The behavior of the ultrasonic transducer on the cricket mote is studied, and the regions where satisfactory distance measurements can be obtained are recorded. Placing the motes in these regions results in fine-grained mapping of the cricket motes. Trilateration is used to obtain a rigid coordinate system, but is insufficient if the network is to be used for navigation. A modified SLAM algorithm is applied to overcome the shortcomings of trilateration. Finally, the self-mapped cricket motes can be used for navigation of autonomous robotic systems in an indoor location.
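The trilateration step that precedes the SLAM refinement can be sketched as a linear least-squares solve from ranges to known motes; beacon placement and noise handling are omitted, and the function below is illustrative rather than the authors' implementation.

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Least-squares 2D position from ranges to known beacon positions.

    Linearizes r_i^2 = (x - x_i)^2 + (y - y_i)^2 against the first beacon;
    needs at least three non-collinear beacons.
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = beacons[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(ranges[0] ** 2 - ri ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return pos  # estimated (x, y)

# Example: three motes at known positions, ranges measured to an unknown point.
xy = trilaterate(beacons=[(0, 0), (4, 0), (0, 3)], ranges=[2.5, 2.92, 2.06])
```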
An Analysis of Navigation Algorithms for Smartphones Using J2ME
NASA Astrophysics Data System (ADS)
Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.
Embedded systems are considered one of the most potential areas for future innovations. Two embedded fields that will most certainly take a primary role in future innovations are mobile robotics and mobile computing. Mobile robots and smartphones are growing in number and functionalities, becoming a presence in our daily life. In this paper, we study the current feasibility of a smartphone to execute navigation algorithms. As a test case, we use a smartphone to control an autonomous mobile robot. We tested three navigation problems: Mapping, Localization and Path Planning. For each of these problems, an algorithm has been chosen, developed in J2ME, and tested on the field. Results show the current mobile Java capacity for executing computationally demanding algorithms and reveal the real possibility of using smartphones for autonomous navigation.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments
2016-09-01
yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications. Annotation of images is ... Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G
Strategy in the Robotic Age: A Case for Autonomous Warfare
2014-09-01
6. Robots and Robotics. The term robot is a loaded word. For many people it conjures a vision of fictional characters from movies like The ... released in the early 1930s to review the experiences of WWI, it was censored, and a version modified to maintain the institutional legacies was ... apprehensive, and doctrine was non-existent. Today, America is emerging from two wars and subsequently a war-weary public. The United States is a
Trusted Remote Operation of Proximate Emergency Robots (TROOPER): DARPA Robotics Challenge
2015-12-01
sensor in each of the robot’s feet. Additionally, there is a 6-axis IMU that sits in the robot’s pelvis cage. While testing before the Finals, the...Services. Many of the controllers in the autonomic layer have overlapping requirements, such as filtered IMU and force torque data from the robot...the following services during the DRC: • IMU Filtering • Force Torque Filtering • Joint State Publishing • TF (Transform) Broadcasting • Robot Pose
Trusted Remote Operation of Proximate Emergency Robots (TROOPER): DARPA Robotics Challenge
2015-12-01
sensor in each of the robot’s feet. Additionally, there is a 6-axis IMU that sits in the robot’s pelvis cage. While testing before the Finals, the...Services. Many of the controllers in the autonomic layer have overlapping requirements, such as filtered IMU and force torque data from the robot...the following services during the DRC: • IMU Filtering • Force Torque Filtering • Joint State Publishing • TF (Transform) Broadcasting • Robot Pose
Design and implementation of a compliant robot with force feedback and strategy planning software
NASA Technical Reports Server (NTRS)
Premack, T.; Strempek, F. M.; Solis, L. A.; Brodd, S. S.; Cutler, E. P.; Purves, L. R.
1984-01-01
Force-feedback robotics techniques are being developed for automated precision assembly and servicing of NASA space flight equipment. Design and implementation of a prototype robot which provides compliance and monitors forces is in progress. Computer software to specify assembly steps and make force-feedback adjustments during assembly has been coded and tested for three generically different precision mating problems. A model program demonstrates that a suitably autonomous robot can plan its own strategy.
Instruction dialogues: Teaching new skills to a robot
NASA Technical Reports Server (NTRS)
Crangle, Colleen; Suppes, P.
1989-01-01
Extended dialogues between a human user and a robot system are presented. The purpose of each dialogue is to teach the robot a new skill or to improve the performance of a skill it already has. The particular interest is in natural language dialogues but the illustrated techniques can be applied to any high level language. The primary purpose is to show how verbal instruction can be integrated with the robot's autonomous learning of a skill.
Optimal sensor fusion for land vehicle navigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, J.D.
1990-10-01
Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straight-forward method for autonomous path-following has been adopted and requires "digitizing" the desired road network by storing x,y coordinates every 2 m along the roads. The position location system used to define the locations consists of a radio beacon system which triangulates position off two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
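A minimal sketch of the "optimal" (minimum-variance) fusion of the two position estimates described above; the scalar, one-dimensional form and the example numbers are illustrative, and the "average error" variant that accounts for dead-reckoning bias is not reproduced.

```python
def fuse(beacon_pos, beacon_var, dead_reck_pos, dead_reck_var):
    """Minimum-variance fusion of two independent position estimates.

    Weights are inversely proportional to each sensor's variance, the
    textbook form of a minimum-variance combination; the paper's
    bias-aware variant is not reproduced here.
    """
    w_beacon = 1.0 / beacon_var
    w_dr = 1.0 / dead_reck_var
    fused = (w_beacon * beacon_pos + w_dr * dead_reck_pos) / (w_beacon + w_dr)
    fused_var = 1.0 / (w_beacon + w_dr)
    return fused, fused_var

# Example: a noisy beacon fix fused with a tighter dead-reckoning estimate.
x, var = fuse(beacon_pos=102.0, beacon_var=4.0,
              dead_reck_pos=100.0, dead_reck_var=1.0)
```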
Development and demonstration of autonomous behaviors for urban environment exploration
NASA Astrophysics Data System (ADS)
Ahuja, Gaurav; Fellars, Donald; Kogut, Gregory; Pacis Rius, Estrellina; Schoolov, Misha; Xydes, Alexander
2012-06-01
Under the Urban Environment Exploration project, the Space and Naval Warfare Systems Center Pacific (SSC-PAC) is maturing technologies and sensor payloads that enable man-portable robots to operate autonomously within the challenging conditions of urban environments. Previously, SSC-PAC has demonstrated robotic capabilities to navigate and localize without GPS and map the ground floors of various building sizes [1]. SSC-PAC has since extended those capabilities to localize and map multiple multi-story buildings within a specified area. To facilitate these capabilities, SSC-PAC developed technologies that enable the robot to detect stairs/stairwells, maintain localization across multiple environments (e.g. in a 3D world, on stairs, with/without GPS), visualize data in 3D, plan paths between any two points within the specified area, and avoid 3D obstacles. These technologies have been developed as independent behaviors under the Autonomous Capabilities Suite, a behavior architecture, and demonstrated at a MOUT site at Camp Pendleton. This paper describes the perceptions and behaviors used to produce these capabilities, as well as an example demonstration scenario.
Status of DoD Robotic Programs
1985-03-01
planning or adhere to previously planned routes. Control: controls are microelectronics-based, providing a means of autonomous action directly... PROJECT Title: SMART TERRAIN ANALYSIS FOR ROBOTIC SYSTEMS (STARS); TASK Title: AUTOMATIC
Survivability design for a hybrid underwater vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Biao; Wu, Chao; Li, Xiang
A novel hybrid underwater robotic vehicle (HROV) capable of working to the full ocean depth has been developed. The battery-powered vehicle operates in two modes: as an untethered autonomous vehicle in autonomous underwater vehicle (AUV) mode, and under remote control connected to the surface vessel by a lightweight fiber-optic tether in remotely operated vehicle (ROV) mode. Considering the hazardous underwater environment at the limiting depth and the hybrid operating modes, survivability has been placed on an equal level with the other design attributes of the HROV since the beginning of the project. This paper reports the survivability design elements for the HROV, including the basic vehicle design of integrated navigation and integrated communication, the emergency recovery strategy, a distributed architecture, a redundant bus, a dual battery package, an emergency jettison system, and a self-repairing control system.
Humanoid robot Lola: design and walking control.
Buschmann, Thomas; Lohmeier, Sebastian; Ulbrich, Heinz
2009-01-01
In this paper we present the humanoid robot LOLA, its mechatronic hardware design, simulation and real-time walking control. The goal of the LOLA-project is to build a machine capable of stable, autonomous, fast and human-like walking. LOLA is characterized by a redundant kinematic configuration with 7-DoF legs, an extremely lightweight design, joint actuators with brushless motors and an electronics architecture using decentralized joint control. Special emphasis was put on an improved mass distribution of the legs to achieve good dynamic performance. Trajectory generation and control aim at faster, more flexible and robust walking. Center of mass trajectories are calculated in real-time from footstep locations using quadratic programming and spline collocation methods. Stabilizing control uses hybrid position/force control in task space with an inner joint position control loop. Inertial stabilization is achieved by modifying the contact force trajectories.
Materials science. Materials that couple sensing, actuation, computation, and communication.
McEvoy, M A; Correll, N
2015-03-20
Tightly integrating sensing, actuation, and computation into composites could enable a new generation of truly smart material systems that can change their appearance and shape autonomously. Applications for such materials include airfoils that change their aerodynamic profile, vehicles with camouflage abilities, bridges that detect and repair damage, or robotic skins and prosthetics with a realistic sense of touch. Although integrating sensors and actuators into composites is becoming increasingly common, the opportunities afforded by embedded computation have only been marginally explored. Here, the key challenge is the gap between the continuous physics of materials and the discrete mathematics of computation. Bridging this gap requires a fundamental understanding of the constituents of such robotic materials and the distributed algorithms and controls that make these structures smart. Copyright © 2015, American Association for the Advancement of Science.
Interaction dynamics of multiple mobile robots with simple navigation strategies
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
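A minimal sketch of a particle-model navigation strategy of this kind is shown below: a homing term pulls the robot toward its goal, and a repulsion term is added only for neighbors that fall inside the robot's visibility cone. The specific repulsion law and all parameters are illustrative assumptions, not the strategies derived in the article.

    import numpy as np

    def navigate_step(pos, heading, goal, others, speed=0.1,
                      cone_half_angle=np.pi / 4, avoid_radius=1.0):
        """One update of a simple homing strategy with collision avoidance.
        A neighbor influences the robot only if it lies inside the robot's
        visibility cone."""
        to_goal = goal - pos
        desired = to_goal / (np.linalg.norm(to_goal) + 1e-9)   # homing direction
        for other in others:
            rel = other - pos
            dist = np.linalg.norm(rel)
            angle = np.arccos(np.clip(np.dot(rel / (dist + 1e-9), heading), -1, 1))
            if dist < avoid_radius and angle < cone_half_angle:
                # Repulsion that grows as the visible neighbor gets closer.
                desired -= (1.0 - dist / avoid_radius) * rel / (dist + 1e-9)
        heading = desired / (np.linalg.norm(desired) + 1e-9)
        return pos + speed * heading, heading

    pos, heading = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    for _ in range(50):
        pos, heading = navigate_step(pos, heading,
                                     goal=np.array([5.0, 0.0]),
                                     others=[np.array([2.5, 0.1])])
    print("final position:", pos)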
NASA Technical Reports Server (NTRS)
Heise, James; Hull, Bethanne J.; Bauer, Jonathan; Beougher, Nathan G.; Boe, Caleb; Canahui, Ricardo; Charles, John P.; Cooper, Zachary Davis Job; DeShaw, Mark A.; Fontanella, Luan Gasparetto;
2012-01-01
The Iowa State University team, Team LunaCY, is composed of the following sub-teams: the main student organization, the Lunabotics Club; a senior mechanical engineering design course, ME 415; a senior multidisciplinary design course, ENGR 466; and a senior design course from Wartburg College in Waverly, Iowa. Team LunaCY designed and fabricated ART-E III, Astra Robotic Tractor-Excavator the Third, for the team's third appearance in the NASA Lunabotics Mining Competition. While designing ART-E III, the team had four main goals for this year's competition: to reduce the total weight of the robot, to increase the amount of regolith simulant mined, to reduce dust, and to make ART-E III autonomous. After extensive design work and research, a final robot design was chosen that met all four of Team LunaCY's goals. One change Team LunaCY made this year was to attend the electrical, computer, and software engineering club fest at Iowa State University to recruit engineering students for the task of making ART-E III autonomous. Team LunaCY chose to use LabVIEW to program the robot, and various sensors were installed to measure the distance between the robot and its surroundings, allowing ART-E III to maneuver autonomously. Team LunaCY also built a testing arena in which to test prototypes and ART-E III. To best replicate the competition arena at the Kennedy Space Center, a regolith simulant made from sand, QuickCrete, and fly ash was used to cover the floor of the arena. Team LunaCY also installed fans to ventilate the arena and wore proper safety attire when working in it. With the additional practice in the testing arena and an innovative robot design, Team LunaCY expects to make a strong appearance at the 2012 NASA Lunabotics Mining Competition.
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI-controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. Trial registration: NCT01364480 and NCT01894802.
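The blending step can be illustrated with a small sketch: a BMI-decoded velocity is mixed with an autonomous velocity toward a vision-identified grasp point, with the autonomous share growing as the hand nears the object. The blending rule and parameters below are hypothetical, not the published controller.

    import numpy as np

    def shared_control(bmi_velocity, hand_pos, grasp_pos, assist_radius=0.15):
        """Blend a BMI-decoded velocity command with an autonomous command
        that pulls the hand toward a vision-identified grasp position. The
        assistance weight grows as the hand approaches the object
        (hypothetical scheme for illustration)."""
        to_grasp = grasp_pos - hand_pos
        dist = np.linalg.norm(to_grasp)
        auto_velocity = to_grasp / (dist + 1e-9) * np.linalg.norm(bmi_velocity)
        alpha = np.clip(1.0 - dist / assist_radius, 0.0, 1.0)  # 0 far away, 1 at object
        return (1.0 - alpha) * bmi_velocity + alpha * auto_velocity

    cmd = shared_control(np.array([0.05, 0.0, 0.0]),
                         hand_pos=np.array([0.40, 0.10, 0.20]),
                         grasp_pos=np.array([0.45, 0.12, 0.20]))
    print(cmd)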
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize that mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In this dynamics, language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented by a cyclic structure, and waiting for a subsequent instruction was represented by a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases.
NASA Astrophysics Data System (ADS)
Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan
2016-05-01
With the increasing need for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded with a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.
Human-like robots for space and hazardous environments
NASA Technical Reports Server (NTRS)
1994-01-01
The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path-planning skills.
Women Warriors: Why the Robotics Revolution Changes the Combat Equation
2015-01-01
combat fight due in large part to advances in robotics and autonomous systems. From exoskeletons to robotic mules, technology is reducing the...kick-started innovation in this area in 2001 by funding labs, industry, and universities under the Exoskeletons for Human Performance Augmentation...and fledgling programs of record. The Human Universal Load Carrier (HULC), for example, is a hydraulic-powered exoskeleton made of titanium that allows
Maintaining Limited-Range Connectivity Among Second-Order Agents
2016-07-07
we consider ad-hoc networks of robotic agents with double integrator dynamics. For such networks, the connectivity maintenance problems are: (i) do...hoc networks of mobile autonomous agents. This loose terminology refers to groups of robotic agents with limited mobility and communication...connectivity can be preserved. 3.1. Networks of robotic agents with second-order dynamics and the connectivity maintenance problem. We begin by
A trunk ranging system based on binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Xixuan; Kan, Jiangming
2017-07-01
Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical application. This paper examines the implementation of a trunk ranging system based on binocular vision theory via TI's DaVinci DM37x system. The system is smaller and more reliable than one implemented on a personal computer. It calculates three-dimensional information from the images acquired by the binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and the system design is feasible for autonomous forestry robots.
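The underlying range computation is the standard stereo relation Z = f·B/d, with focal length f (pixels), baseline B (metres), and disparity d (pixels). The sketch below shows it with illustrative camera parameters; it is not the DM37x implementation.

    # Depth from binocular disparity: Z = f * B / d. Values are illustrative only.
    def stereo_depth(focal_px, baseline_m, disparity_px):
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # A trunk imaged with a 120 mm baseline, 700-pixel focal length and
    # 35-pixel disparity lies roughly 2.4 m from the cameras.
    print(stereo_depth(focal_px=700.0, baseline_m=0.12, disparity_px=35.0))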
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.
Productive Information Foraging
NASA Technical Reports Server (NTRS)
Furlong, P. Michael; Dille, Michael
2016-01-01
This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective of the algorithm is to free robot scientists from extensive preliminary site investigation while still being able to collect meaningful data. We simulate a common form of exploration task for an autonomous robot, involving sampling the environment at various locations, and compare performance with a simpler existing algorithm that is also denied global information. The results of the experiment show that the new algorithm achieves a statistically significant improvement in performance, with a significant effect size, for a range of costs for taking sampling actions.
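One simple way to realize such on-line exploration is a greedy rule that trades expected information gain against travel and sampling costs, as in the sketch below. This is a generic illustration of the idea under assumed costs, not the algorithm proposed in the paper.

    import numpy as np

    def choose_next_sample(pos, candidates, expected_gain, sampling_cost,
                           travel_cost_per_m=1.0):
        """Greedy on-line foraging rule: pick the candidate location whose
        expected information gain best exceeds the cost of travelling there
        and taking the sample (all costs and gains are hypothetical)."""
        scores = []
        for c, gain in zip(candidates, expected_gain):
            travel = travel_cost_per_m * np.linalg.norm(c - pos)
            scores.append(gain - travel - sampling_cost)
        return candidates[int(np.argmax(scores))]

    pos = np.array([0.0, 0.0])
    candidates = [np.array([1.0, 0.0]), np.array([3.0, 4.0]), np.array([0.5, 0.5])]
    print(choose_next_sample(pos, candidates,
                             expected_gain=[2.0, 9.0, 1.5], sampling_cost=0.5))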
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows its movements to be automated and programmed. An IR eye-tracking system has been integrated with this control interface to implement an intelligent, autonomous, eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on data from the eye-tracking interface to keep the user's gaze-point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
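A gaze-centering camera controller of this kind can be sketched as a proportional law on the pixel error between the gaze point and the image center, with a dead-band to avoid jitter. The gains and dead-band below are illustrative assumptions, not the Aesop system's parameters.

    def camera_velocity_from_gaze(gaze_px, image_size=(640, 480),
                                  gain=0.002, deadband_px=40):
        """Proportional pan/tilt command that re-centres the user's gaze point
        (gains and dead-band are illustrative)."""
        cx, cy = image_size[0] / 2, image_size[1] / 2
        ex, ey = gaze_px[0] - cx, gaze_px[1] - cy
        if abs(ex) < deadband_px and abs(ey) < deadband_px:
            return 0.0, 0.0                 # gaze already near the centre
        return gain * ex, gain * ey         # pan, tilt rates (rad/s)

    print(camera_velocity_from_gaze((500, 180)))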
Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion
NASA Astrophysics Data System (ADS)
Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger
2007-12-01
Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (center for art and media) in Karlsruhe.
The Dawning of the Ethics of Environmental Robots.
van Wynsberghe, Aimee; Donhauser, Justin
2017-10-23
Environmental scientists and engineers have been exploring research and monitoring applications of robotics, as well as exploring ways of integrating robotics into ecosystems to aid in responses to accelerating environmental, climatic, and biodiversity changes. These emerging applications of robots and other autonomous technologies present novel ethical and practical challenges. Yet, the critical applications of robots for environmental research, engineering, protection and remediation have received next to no attention in the ethics of robotics literature to date. This paper seeks to fill that void, and promote the study of environmental robotics. It provides key resources for further critical examination of the issues environmental robots present by explaining and differentiating the sorts of environmental robotics that exist to date and identifying unique conceptual, ethical, and practical issues they present.
Towards Autonomous Inspection of Space Systems Using Mobile Robotic Sensor Platforms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Saad, Ashraf; Litt, Jonathan S.
2007-01-01
The space transportation systems required to support NASA's Exploration Initiative will demand a high degree of reliability to ensure mission success. This reliability can be realized through autonomous fault/damage detection and repair capabilities. It is crucial that such capabilities are incorporated into these systems since it will be impractical to rely upon Extra-Vehicular Activity (EVA), visual inspection or tele-operation due to the costly, labor-intensive and time-consuming nature of these methods. One approach to achieving this capability is through the use of an autonomous inspection system comprised of miniature mobile sensor platforms that will cooperatively perform high confidence inspection of space vehicles and habitats. This paper will discuss the efforts to develop a small scale demonstration test-bed to investigate the feasibility of using autonomous mobile sensor platforms to perform inspection operations. Progress will be discussed in technology areas including: the hardware implementation and demonstration of robotic sensor platforms, the implementation of a hardware test-bed facility, and the investigation of collaborative control algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
EISLER, G. RICHARD
This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled "Robust Planning for Autonomous Navigation of Mobile Robots in Unstructured, Dynamic Environments (AutoNav)". The project goal was to develop an algorithmic-driven, multi-spectral approach to point-to-point navigation characterized by: segmented on-board trajectory planning, self-contained operation without human support for mission duration, and the development of appropriate sensors and algorithms to navigate unattended. The project was partially successful in achieving gains in sensing, path planning, navigation, and guidance. One of three experimental platforms, the Minimalist Autonomous Testbed, used a repetitive sense-and-re-plan combination to demonstrate the majority of elements necessary for autonomous navigation. However, a critical goal for overall success in arbitrary terrain, that of developing a sensor that is able to distinguish true obstacles that need to be avoided as a function of vehicle scale, still needs substantial research to bring to fruition.
A Robot to Help Make the Rounds
NASA Technical Reports Server (NTRS)
2003-01-01
This paper presents a discussion of the Pyxis HelpMate SecurePak (SP), a trackless robotic courier designed by Transitions Research Corporation to navigate autonomously throughout medical facilities, transporting pharmaceuticals, laboratory specimens, equipment, supplies, meals, medical records, and radiology films between support departments and nursing floors.
Autonomous robotic platforms for locating radio sources buried under rubble
NASA Astrophysics Data System (ADS)
Tasu, A. S.; Anchidin, L.; Tamas, R.; Paun, M.; Danisor, A.; Petrescu, T.
2016-12-01
This paper deals with the use of autonomous robotic platforms able to locate radio signal sources, such as mobile phones, buried under collapsed buildings as a result of earthquakes, natural disasters, terrorism, or war. The technique relies on averaging the position data resulting from a propagation model implemented on the platform with the data acquired by the robotic platforms at the disaster site, which allows the approximate position of radio sources buried under the rubble to be calculated. Based on these measurements, a radio map of the disaster site is made, which is very useful for locating victims and for guiding specific rubble-lifting machinery: by assuming that there is a victim next to a mobile device detected by the robotic platform, and by knowing its approximate position, the lifting machinery does not risk further injuring the victims. Moreover, knowing the positions of the victims decreases the reaction time, and the chances of survival for victims buried under the rubble are obviously increased.
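The propagation-model step can be sketched as a log-distance path-loss inversion followed by a search for the point whose distances best match the RSSI-derived ranges. The model parameters and grid search below are illustrative assumptions, not the system described in the paper.

    import numpy as np

    def rssi_to_range(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=3.0):
        """Log-distance propagation model: range in metres from received power.
        Parameters are typical indoor/rubble values, assumed for illustration."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

    def locate_source(robot_positions, rssi_readings, search=(0.0, 20.0, 0.25)):
        """Coarse grid search for the point whose distances to the measurement
        positions best match the ranges inferred from RSSI."""
        ranges = np.array([rssi_to_range(r) for r in rssi_readings])
        lo, hi, step = search
        grid = np.arange(lo, hi, step)
        best, best_err = None, np.inf
        for x in grid:
            for y in grid:
                p = np.array([x, y])
                err = np.sum((np.linalg.norm(robot_positions - p, axis=1) - ranges) ** 2)
                if err < best_err:
                    best, best_err = p, err
        return best

    robots = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
    print(locate_source(robots, rssi_readings=[-61.0, -67.0, -64.0]))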
Self-organized adaptation of a simple neural circuit enables complex robot behaviour
NASA Astrophysics Data System (ADS)
Steingrube, Silke; Timme, Marc; Wörgötter, Florentin; Manoonpong, Poramate
2010-03-01
Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.
Free-standing leaping experiments with a power-autonomous elastic-spined quadruped
NASA Astrophysics Data System (ADS)
Pusey, Jason L.; Duperret, Jeffrey M.; Haynes, G. Clark; Knopf, Ryan; Koditschek, Daniel E.
2013-05-01
We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot's "body energy" during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid's initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits.
Variants of guided self-organization for robot control.
Martius, Georg; Herrmann, J Michael
2012-09-01
Autonomous robots can generate exploratory behavior by self-organization of the sensorimotor loop. We show that the behavioral manifold that is covered in this way can be modified in a goal-dependent way without reducing the self-induced activity of the robot. We present three strategies for guided self-organization, namely by using external rewards, a problem-specific error function, or assumptions about the symmetries of the desired behavior. The strategies are analyzed for two different robots in a physically realistic simulation.
Human-like robots for space and hazardous environments
NASA Technical Reports Server (NTRS)
Cogley, Allen; Gustafson, David; White, Warren; Dyer, Ruth; Hampton, Tom (Editor); Freise, Jon (Editor)
1990-01-01
The three-year goal for this NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path-planning skills. These goals came from the concept that the robot should have the abilities of both a planetary rover and a hazardous waste site scout.
Bruemmer, David J. [Idaho Falls, ID]
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
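A minimal sketch of a dynamic autonomy structure is shown below: a mode selects how much weight the robot's initiative receives relative to the operator's command. The mode names and blending rule are illustrative, not the patented RIK logic.

    from enum import Enum

    class AutonomyMode(Enum):
        """Illustrative modes along a dynamic-autonomy spectrum."""
        TELEOPERATION = 0.0   # operator intervention maximised
        SAFE_TELEOP = 0.33
        SHARED = 0.66
        AUTONOMOUS = 1.0      # robot initiative maximised

    def blended_command(operator_cmd, robot_cmd, mode):
        """Blend operator and robot-initiative commands according to the
        mode's share of robot initiative (a hypothetical blending rule)."""
        w = mode.value
        return [(1.0 - w) * o + w * r for o, r in zip(operator_cmd, robot_cmd)]

    print(blended_command([0.5, 0.0], [0.2, 0.3], AutonomyMode.SHARED))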
Robotic sampling system for an unmanned Mars mission
NASA Technical Reports Server (NTRS)
Chun, Wendell
1989-01-01
A major robotics opportunity for NASA will be the Mars Rover/Sample Return Mission, which could be launched as early as the 1990s. The exploratory portion of this mission will include two autonomous subsystems: the rover vehicle and a sample handling system. The sample handling system is the key to the process of collecting Martian soils. This system could include a core drill, a general-purpose manipulator, tools, containers, a return canister, certification hardware, and a labeling system. Integrated into a functional package, the sample handling system is analogous to a complex robotic workcell. Discussed here are the different components of the system, their interfaces, foreseeable problem areas, and many options based on the scientific goals of the mission. The various interfaces in the sample handling process (component to component and handling system to rover) will be a major engineering effort. Two critical evaluation criteria that will be imposed on the system are flexibility and reliability. It needs to be flexible enough to adapt to different scenarios and environments and to acquire the most desirable specimens for return to Earth. Scientists may decide to change the distribution and ratio of core samples to rock samples in the canister. The long distance and duration of this planetary mission place a reliability burden on the hardware. The communication time delay between Earth and Mars minimizes operator interaction (teleoperation, supervisory modes) with the sample handler. An intelligent system will be required to plan the actions, make sample choices, interpret sensor inputs, and query unknown surroundings. A combination of autonomous functions and supervised movements will be integrated into the sample handling system.
Bilevel Shared Control Of A Remote Robotic Manipulator
NASA Technical Reports Server (NTRS)
Hayati, Samad A.; Venkataraman, Subramanian T.
1992-01-01
Proposed concept blends autonomous and teleoperator control modes, each overcoming deficiencies of the other. Both task-level and execution-level functions performed at local and remote sites. Applicable to systems with long communication delay between local and remote sites or systems intended to function partly autonomously.
Autonomous Shepherding Behaviors of Multiple Target Steering Robots.
Lee, Wonki; Kim, DaeEun
2017-11-25
This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among the many interesting tasks that may be performed using swarm robots, we propose a biologically inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity of each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to their position relative to the flock. We define several independent behavior rules; each agent calculates how far it will move by summing the contributions of each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. From simulation results with multiple robots, we show that each robot performs the actions needed for shepherding behavior, and that only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and the steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach.
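The rule-summation scheme can be sketched as follows: each flock agent sums a cohesion rule and a repulsion-from-shepherd rule, while each steering agent moves to a point behind the flock agent nearest to it, on the side away from the goal. The specific rules and gains are simplified assumptions, not the control law proposed in the paper.

    import numpy as np

    def sheep_step(sheep, shepherds, repel_radius=3.0,
                   cohesion_gain=0.05, repel_gain=1.0):
        """Each flock agent sums independent rules: cohesion toward the flock
        centre and repulsion from nearby steering agents."""
        centre = sheep.mean(axis=0)
        new = []
        for s in sheep:
            move = cohesion_gain * (centre - s)
            for d in shepherds:
                rel = s - d
                dist = np.linalg.norm(rel)
                if dist < repel_radius:
                    move += repel_gain * (1.0 - dist / repel_radius) * rel / (dist + 1e-9)
            new.append(s + move)
        return np.array(new)

    def shepherd_step(shepherd, sheep, goal, standoff=2.0, speed=0.3):
        """The steering agent positions itself behind the nearest flock agent,
        opposite the goal, so the flock's avoidance pushes it goal-ward."""
        nearest = sheep[np.argmin(np.linalg.norm(sheep - shepherd, axis=1))]
        away_from_goal = (nearest - goal) / (np.linalg.norm(nearest - goal) + 1e-9)
        target = nearest + standoff * away_from_goal
        step = target - shepherd
        return shepherd + speed * step / (np.linalg.norm(step) + 1e-9)

    rng = np.random.default_rng(0)
    sheep = rng.uniform(0, 4, size=(6, 2))
    dog = np.array([-3.0, -3.0])
    goal = np.array([10.0, 10.0])
    for _ in range(200):
        dog = shepherd_step(dog, sheep, goal)
        sheep = sheep_step(sheep, [dog])
    print("flock centre:", sheep.mean(axis=0))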
Modeling and Classifying Six-Dimensional Trajectories for Teleoperation Under a Time Delay
NASA Technical Reports Server (NTRS)
SunSpiral, Vytas; Wheeler, Kevin R.; Allan, Mark B.; Martin, Rodney
2006-01-01
Within the context of teleoperating the JSC Robonaut humanoid robot under 2-10 second time delays, this paper explores the technical problem of modeling and classifying human motions represented as six-dimensional (position and orientation) trajectories. A dual-path research agenda is reviewed which explored both deterministic approaches and stochastic approaches using Hidden Markov Models. Finally, recent results are shown from a new model which represents the fusion of these two research paths. Questions are also raised about the possibility of automatically generating autonomous actions by reusing the same predictive models of human behavior as the source of autonomous control. This approach changes the role of teleoperation from a stand-in for autonomy into the first data-collection step in developing generative models capable of autonomous control of the robot.
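HMM-based classification of such trajectories can be sketched by fitting one Gaussian HMM per motion class and scoring a new trajectory against each model. The sketch below assumes the hmmlearn library and synthetic 6-D data; it is a generic illustration, not the Robonaut teleoperation models.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # assumed library; not named in the paper

    def train_class_models(trajectories_by_class, n_states=3):
        """Fit one Gaussian HMM per motion class to 6-D (position + orientation)
        trajectories."""
        models = {}
        for label, trajs in trajectories_by_class.items():
            X = np.vstack(trajs)
            lengths = [len(t) for t in trajs]
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
            m.fit(X, lengths)
            models[label] = m
        return models

    def classify(models, trajectory):
        """Label a trajectory by the class model with the highest log-likelihood."""
        return max(models, key=lambda label: models[label].score(trajectory))

    rng = np.random.default_rng(1)
    reach = [np.cumsum(rng.normal(0.02, 0.01, size=(40, 6)), axis=0) for _ in range(5)]
    wave = [np.sin(np.linspace(0, 6, 40))[:, None] * np.ones(6)
            + rng.normal(0, 0.05, (40, 6)) for _ in range(5)]
    models = train_class_models({"reach": reach, "wave": wave})
    print(classify(models, reach[0]))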
Millimeter-scale MEMS enabled autonomous systems: system feasibility and mobility
NASA Astrophysics Data System (ADS)
Pulskamp, Jeffrey S.
2012-06-01
Millimeter-scale robotic systems based on highly integrated microelectronics and micro-electromechanical systems (MEMS) could offer unique benefits and attributes for small-scale autonomous systems. This extreme scale for robotics will naturally constrain the realizable system capabilities significantly. This paper assesses the feasibility of developing such systems by defining the fundamental design trade spaces between component design variables and system level performance parameters. This permits the development of mobility enabling component technologies within a system relevant context. Feasible ranges of system mass, required aerodynamic power, available battery power, load supported power, flight endurance, and required leg load bearing capability are presented for millimeter-scale platforms. The analysis illustrates the feasibility of developing both flight capable and ground mobile millimeter-scale autonomous systems while highlighting the significant challenges that must be overcome to realize their potential.
Stochastic receding horizon control: application to an octopedal robot
NASA Astrophysics Data System (ADS)
Shah, Shridhar K.; Tanner, Herbert G.
2013-06-01
Miniature autonomous systems are being developed under ARL's Micro Autonomous Systems and Technology (MAST) program. These systems can only be fitted with a small-size processor, and their motion behavior is inherently uncertain due to manufacturing variation and platform-ground interactions. One way to capture this uncertainty is through a stochastic model. This paper deals with stochastic motion control design and implementation for MAST-specific eight-legged miniature crawling robots, which have been kinematically modeled as systems exhibiting the behavior of a Dubins car with stochastic noise. The control design takes the form of stochastic receding horizon control, and is implemented on a Gumstix Overo Fire COM with a 720 MHz processor and 512 MB RAM, weighing 5.5 g. The experimental results show the effectiveness of this control law for miniature autonomous systems perturbed by stochastic noise.
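A stochastic receding horizon controller of the kind described can be sketched by sampling candidate control sequences, averaging their cost over noisy rollouts of a Dubins-car model, and applying only the first input of the best sequence. The dynamics, noise level, and cost below are illustrative assumptions, not the MAST implementation.

    import numpy as np

    def simulate(x, controls, dt=0.1, speed=0.2, noise_std=0.02, rng=None):
        """Propagate a Dubins-car state x = (px, py, heading) under a sequence
        of turn-rate commands, with additive heading noise (the stochastic
        part of the model; parameters are illustrative)."""
        rng = rng or np.random.default_rng()
        x = np.array(x, dtype=float)
        for u in controls:
            x[2] += u * dt + rng.normal(0.0, noise_std)
            x[0] += speed * np.cos(x[2]) * dt
            x[1] += speed * np.sin(x[2]) * dt
        return x

    def receding_horizon_step(x, goal, horizon=10, n_candidates=50,
                              n_rollouts=20, rng=None):
        """Sample candidate control sequences, estimate each one's expected
        terminal distance to the goal over noisy rollouts, and apply only the
        first input of the best sequence."""
        rng = rng or np.random.default_rng(0)
        best_u, best_cost = 0.0, np.inf
        for _ in range(n_candidates):
            controls = rng.uniform(-1.0, 1.0, size=horizon)
            cost = np.mean([np.linalg.norm(simulate(x, controls, rng=rng)[:2] - goal)
                            for _ in range(n_rollouts)])
            if cost < best_cost:
                best_cost, best_u = cost, controls[0]
        return simulate(x, [best_u], rng=rng)   # apply only the first control

    rng = np.random.default_rng(0)
    state = np.array([0.0, 0.0, 0.0])
    goal = np.array([1.0, 1.0])
    for _ in range(30):
        state = receding_horizon_step(state, goal, rng=rng)
    print("final state:", state)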
ERIC Educational Resources Information Center
Doty, Keith L.
1999-01-01
Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)