Perception for mobile robot navigation: A survey of the state of the art
NASA Technical Reports Server (NTRS)
Kortenkamp, David
1994-01-01
In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state-of-the-art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
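To make the triangulation idea concrete, here is a minimal Python sketch (not taken from any of the surveyed implementations) that estimates a robot's 2D position from bearings to fixed landmarks with known coordinates. It assumes the bearings are expressed in the global frame, e.g. the heading is known from a compass; the function name and all values are illustrative.

import numpy as np

def triangulate_position(landmarks, bearings):
    """Estimate robot (x, y) from global-frame bearings to known landmarks.

    landmarks: (N, 2) array of known landmark positions.
    bearings:  (N,) array of bearings (rad) from the robot to each landmark,
               measured in the global frame (heading assumed known).
    Solves p + r_i * [cos b_i, sin b_i] = L_i for p and the ranges r_i
    in a least-squares sense.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    bearings = np.asarray(bearings, dtype=float)
    n = len(bearings)
    # Unknowns: [px, py, r_1, ..., r_n]
    A = np.zeros((2 * n, 2 + n))
    b = np.zeros(2 * n)
    for i, (L, th) in enumerate(zip(landmarks, bearings)):
        A[2 * i, 0] = 1.0
        A[2 * i, 2 + i] = np.cos(th)
        A[2 * i + 1, 1] = 1.0
        A[2 * i + 1, 2 + i] = np.sin(th)
        b[2 * i:2 * i + 2] = L
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2]          # estimated robot position

# Example: robot at (1, 2), three landmarks at known positions.
true_pos = np.array([1.0, 2.0])
lm = np.array([[5.0, 2.0], [1.0, 7.0], [-3.0, -1.0]])
brg = np.arctan2(lm[:, 1] - true_pos[1], lm[:, 0] - true_pos[0])
print(triangulate_position(lm, brg))   # ~[1.0, 2.0]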
From Autonomous Robots to Artificial Ecosystems
NASA Astrophysics Data System (ADS)
Mastrogiovanni, Fulvio; Sgorbissa, Antonio; Zaccaria, Renato
During the past few years, starting from the two mainstream fields of Ambient Intelligence [2] and Robotics [17], several authors recognized the benefits of the so-called Ubiquitous Robotics paradigm. According to this perspective, mobile robots are no longer autonomous, physically situated and embodied entities adapting themselves to a world tailored for humans: on the contrary, they are able to interact with devices distributed throughout the environment and to exchange heterogeneous information by means of communication technologies. Information exchange, coupled with simple actuation capabilities, is meant to replace physical interaction between robots and their environment. Two benefits are evident: (i) smart environments overcome inherent limitations of mobile platforms, whereas (ii) mobile robots offer a mobility dimension unknown to smart environments.
Web Environment for Programming and Control of a Mobile Robot in a Remote Laboratory
ERIC Educational Resources Information Center
dos Santos Lopes, Maísa Soares; Gomes, Iago Pacheco; Trindade, Roque M. P.; da Silva, Alzira F.; de C. Lima, Antonio C.
2017-01-01
Remote robotics laboratories have been successfully used for engineering education. However, few of them use mobile robots to teach computer science. This article describes a mobile robot Control and Programming Environment (CPE) and its pedagogical applications. The system comprises a remote laboratory for robotics, an online programming tool,…
Meeting the challenges of installing a mobile robotic system
NASA Technical Reports Server (NTRS)
Decorte, Celeste
1994-01-01
The challenges of integrating a mobile robotic system into an application environment are many. Most problems inherent to installing the mobile robotic system fall into one of three categories: (1) the physical environment - location(s) where, and conditions under which, the mobile robotic system will work; (2) the technological environment - external equipment with which the mobile robotic system will interact; and (3) the human environment - personnel who will operate and interact with the mobile robotic system. The successful integration of a mobile robotic system into these three types of application environment requires more than a good pair of pliers. The tools for this job include: careful planning, accurate measurement data (as-built drawings), complete technical data of systems to be interfaced, sufficient time and attention of key personnel for training on how to operate and program the robot, on-site access during installation, and a thorough understanding and appreciation - by all concerned - of the mobile robotic system's role in the security mission at the site, as well as the machine's capabilities and limitations. Patience, luck, and a sense of humor are also useful tools to keep handy during a mobile robotic system installation. This paper will discuss some specific examples of problems in each of three categories, and explore approaches to solving these problems. The discussion will draw from the author's experience with on-site installations of mobile robotic systems in various applications. Most of the information discussed in this paper has come directly from knowledge learned during installations of Cybermotion's SR2 security robots. A large part of the discussion will apply to any vehicle with a drive system, collision avoidance, and navigation sensors, which is, of course, what makes a vehicle autonomous. And it is with these sensors and a drive system that the installer must become familiar in order to foresee potential trouble areas in the physical, technical, and human environment.
Control of wheeled mobile robot in restricted environment
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; En, Chang Yong
2018-03-01
This paper presents a simulation and practical control system for a wheeled mobile robot in a restricted environment. A three-wheeled mobile robot is fabricated and controlled by proportional derivative active force control (PD-AFC) to move in a pre-planned restricted environment while maintaining the tracking errors at zero level. A control system with two loops, an outer PD controller loop and an inner Active Force Control loop, is designed to control the wheeled mobile robot. A fuzzy logic controller is implemented in the Active Force Control loop to estimate the inertia matrix that is used to calculate the actual torque applied to the wheeled mobile robot. The mobile robot is tested on two different trajectories, namely a circular and a straight path, and the actual and desired paths are compared.
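The two-loop PD-AFC structure described above can be sketched for a single wheel axis as follows. This is a hedged reconstruction rather than the authors' implementation: the fuzzy estimator of the inertia matrix is replaced by a fixed estimated inertia IN_hat, and the gains, inertias and disturbance torque are assumed values.

import numpy as np

def simulate_pd_afc(t_end=5.0, dt=0.001):
    """Single-axis sketch of PD control with an Active Force Control inner loop.
    Assumed values (illustrative only): true inertia I, estimated inertia IN_hat,
    PD gains kp/kd, and a sinusoidal external disturbance torque."""
    I, IN_hat = 0.05, 0.04          # true vs. estimated inertia (kg m^2)
    kp, kd = 40.0, 8.0              # outer-loop PD gains
    theta_ref = 1.0                 # step reference (rad)
    theta, omega, alpha = 0.0, 0.0, 0.0
    tau_prev = 0.0
    log = []
    for k in range(int(t_end / dt)):
        # Outer loop: PD produces a desired angular acceleration
        alpha_des = kp * (theta_ref - theta) + kd * (0.0 - omega)
        # Inner AFC loop: estimate the disturbance from the previously applied
        # torque and the measured acceleration, then compensate for it
        tau_dist_est = tau_prev - IN_hat * alpha
        tau = IN_hat * alpha_des + tau_dist_est
        # Plant: true dynamics with an unknown disturbance torque
        tau_dist = 0.2 * np.sin(2.0 * np.pi * 1.0 * k * dt)
        alpha = (tau - tau_dist) / I
        omega += alpha * dt
        theta += omega * dt
        tau_prev = tau
        log.append(theta)
    return np.array(log)

if __name__ == "__main__":
    traj = simulate_pd_afc()
    print("final angle:", traj[-1])   # should settle near the 1.0 rad reference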
Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan
2017-01-01
In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures stable motion and keeps the distances between the four wheels constant. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964
Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan
2017-09-10
In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures stable motion and keeps the distances between the four wheels constant. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
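A minimal sketch of the kind of encoder/Kinect fusion described in the two records above, assuming the Kinect pipeline yields a direct (noisy) pose observation. The state is the planar pose [x, y, theta], the motion model is a unicycle driven by encoder odometry, and all noise covariances are illustrative rather than the authors' values.

import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate pose [x, y, theta] with unicycle odometry (v, w)."""
    th = x[2]
    x_pred = x + dt * np.array([v * np.cos(th), v * np.sin(th), w])
    x_pred[2] = wrap(x_pred[2])
    F = np.array([[1, 0, -dt * v * np.sin(th)],
                  [0, 1,  dt * v * np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct with a direct pose measurement z = [x, y, theta] (e.g. from vision)."""
    H = np.eye(3)
    y = z - x
    y[2] = wrap(y[2])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P

# Illustrative noise levels (assumed, not from the paper)
Q = np.diag([1e-4, 1e-4, 1e-5])
R = np.diag([1e-2, 1e-2, 1e-3])
x, P = np.zeros(3), np.eye(3) * 0.1
x, P = ekf_predict(x, P, v=0.2, w=0.05, dt=0.1, Q=Q)          # wheel encoders
x, P = ekf_update(x, P, z=np.array([0.02, 0.0, 0.006]), R=R)  # Kinect-based fix
print(x)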
Long-Term Simultaneous Localization and Mapping in Dynamic Environments
2015-01-01
One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the... and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory... distributed stochastic neighbor embedding
SLAM algorithm applied to robotics assistance for navigation in unknown environments.
Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo
2010-02-17
The combination of robotic tools with assistance technology defines a relatively unexplored area of applications and advantages for people with disabilities or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behavior strategy was also implemented to avoid collisions of the robot with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI and the real-time communication between both have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose role could be reduced to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation.
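The kinematic controller mentioned in the record above is not specified in the abstract; the sketch below is a generic polar-coordinate goal-reaching controller for a unicycle-type robot of the kind commonly paired with such high-level commands. Gains and the goal point are assumptions for illustration.

import numpy as np

def kinematic_goal_controller(pose, goal, k_rho=0.5, k_alpha=1.5):
    """Classic polar-coordinate controller driving a unicycle robot to a goal point.
    pose = (x, y, theta); goal = (x_g, y_g). Returns (v, omega)."""
    x, y, th = pose
    dx, dy = goal[0] - x, goal[1] - y
    rho = np.hypot(dx, dy)                       # distance to goal
    alpha = np.arctan2(dy, dx) - th              # heading error
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi
    v = k_rho * rho * np.cos(alpha)              # slow down when misaligned
    omega = k_alpha * alpha
    return v, omega

# One simulated step toward a goal 1 m ahead and 0.5 m to the left
pose = np.array([0.0, 0.0, 0.0])
v, w = kinematic_goal_controller(pose, (1.0, 0.5))
print(v, w)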
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an Autonomous Guided Vehicle robot is described. Robot navigation is defined as the guiding of a mobile robot to a desired destination or along a desired path in an environment characterized by its terrain and a set of distinct objects, such as obstacles and landmarks. The autonomous navigation ability and road-following precision are mainly influenced by the control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots due to their high robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to identifying and classifying features of the current perceptual environment, based on a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
NASA Astrophysics Data System (ADS)
Murata, Naoya; Katsura, Seiichiro
Acquisition of information about the environment around a mobile robot is important both for controlling the robot from a remote location and for situations in which the robot runs autonomously. In many studies, audiovisual information is used. However, acquisition of information about force sensation, which is part of the environmental information, has not been well researched. The mobile-hapto, a remote control system with force information, has been proposed, but the robot used for the system can acquire only the horizontal component of forces. For this reason, in this research, a three-wheeled mobile robot consisting of seven actuators was developed and its control system was constructed. It can acquire information on horizontal and vertical forces without using force sensors. By using this robot, detailed information on the forces in the environment can be acquired, and the operability of the robot and its capability to adjust to the environment are expected to improve.
A hardware/software environment to support R&D in intelligent machines and mobile robotic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.
1990-01-01
The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e. incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on-board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on-board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g.: code portability, re-usability in different experimental scenarios, modularity, concurrent computer hardware transparent to the applications programmer, future support for multiple mobile robots, support for human-machine interface modules, and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot, in the real-time detection of objects using their color, and in the processing of the robot's range and vision sensor data for navigation.
SLAM algorithm applied to robotics assistance for navigation in unknown environments
2010-01-01
Background The combination of robotic tools with assistance technology defines a relatively unexplored area of applications and advantages for people with disabilities or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behavior strategy was also implemented to avoid collisions of the robot with the environment and moving agents. Results The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with an MCI and the real-time communication between both have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose role could be reduced to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation. PMID:20163735
Payá, Luis; Reinoso, Oscar; Jiménez, Luis M; Juliá, Miguel
2017-01-01
Over the past years, mobile robots have proliferated both in domestic and in industrial environments to solve tasks such as cleaning, assistance, or material transportation. One of their advantages is the ability to operate in wide areas without the necessity of introducing changes into the existing infrastructure. Thanks to the sensors they may be equipped with and their processing systems, mobile robots constitute a versatile alternative to solve a wide range of applications. When designing the control system of a mobile robot so that it carries out a task autonomously in an unknown environment, the robot is expected to make decisions about its localization in the environment and about the trajectory that it has to follow in order to arrive at the target points. More precisely, the robot has to find a relatively good solution to two crucial problems: building a model of the environment, and estimating the position of the robot within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images of the environment. First, the robot goes along the trajectories to include in the model and uses the visual information captured to build this model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to solve these problems, global appearance techniques are used in this work. They have emerged recently as a robust and efficient alternative compared to landmark extraction techniques. A global description method based on the Radon Transform is used to design mapping and localization algorithms, and a set of images captured by a mobile robot in a real environment, under realistic operating conditions, is used to test the performance of these algorithms.
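A rough sketch of a Radon-Transform-based global appearance descriptor and a nearest-neighbour localization step, using scikit-image; the exact descriptor construction and comparison metric used in the paper may differ, and the image size and number of projection angles are assumed.

import numpy as np
from skimage.transform import radon, resize

def radon_descriptor(image, n_angles=60, size=64):
    """Global appearance descriptor: the resized omnidirectional image is
    projected with the Radon Transform and flattened to a vector."""
    img = resize(image, (size, size), anti_aliasing=True)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img, theta=theta, circle=False)
    d = sinogram.ravel()
    return d / (np.linalg.norm(d) + 1e-12)      # normalize for comparison

def localize(query_img, map_descriptors):
    """Return the index of the map image whose descriptor is closest (Euclidean)."""
    q = radon_descriptor(query_img)
    dists = [np.linalg.norm(q - m) for m in map_descriptors]
    return int(np.argmin(dists))

# Usage sketch: build the map from images captured along the route, then localize.
# map_descriptors = [radon_descriptor(img) for img in route_images]
# idx = localize(current_image, map_descriptors)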
Mobile robots exploration through cnn-based reinforcement learning.
Tai, Lei; Liu, Ming
2016-01-01
Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional neural network model. Building on the recent success of deep Q-networks in artificial intelligence, the robot controller achieved exploration and obstacle avoidance abilities in several different simulated environments. This is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
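A heavily simplified sketch of the approach described above. The pre-trained CNN is replaced by a fixed random projection of the depth image (an assumption made only to keep the example self-contained), and the deep Q-network is reduced to a linear Q-function trained with one-step Q-learning; array sizes, gains and action names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["forward", "turn_left", "turn_right"]

# Stand-in for the pre-trained CNN: a fixed random projection of the depth image.
W_feat = rng.normal(0, 0.01, size=(128, 80 * 60))

def features(depth_image):
    """Map a (60, 80) depth image to a 128-d feature vector."""
    return np.tanh(W_feat @ depth_image.ravel())

W_q = np.zeros((len(ACTIONS), 128))     # linear Q-function on top of the features

def q_values(depth_image):
    return W_q @ features(depth_image)

def select_action(depth_image, eps=0.1):
    """Epsilon-greedy action selection."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q_values(depth_image)))

def td_update(s, a, reward, s_next, alpha=0.01, gamma=0.95):
    """One Q-learning step: move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    target = reward + gamma * np.max(q_values(s_next))
    td_err = target - q_values(s)[a]
    W_q[a] += alpha * td_err * features(s)

# Usage sketch (depth frames and rewards would come from the robot or a simulator):
# a = select_action(depth_frame); ...; td_update(depth_frame, a, r, next_depth_frame)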
Guarded Motion for Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Idaho National Laboratory (INL) has created codes that ensure that a robot will come to a stop at a precise, specified distance from any obstacle regardless of the robot's initial speed, its physical characteristics, and the responsiveness of the low-level motor control schema. This Guarded Motion for Mobile Robots system iteratively adjusts the robot's action in response to information about the robot's environment.
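The INL code itself is not reproduced in this record; the core idea can be illustrated with a simple stopping-distance bound in which the commanded speed is capped so the robot can always stop at the specified standoff distance, whatever speed was requested. All parameter values below are assumptions.

def guarded_speed(v_requested, obstacle_range, standoff=0.5, decel_max=1.0):
    """Cap the commanded speed so the robot can always stop `standoff` metres
    short of the nearest obstacle, independent of the requested speed.

    v_requested:    speed asked for by the operator/planner (m/s)
    obstacle_range: closest obstacle distance from range sensors (m)
    standoff:       required stopping distance from the obstacle (m)
    decel_max:      deceleration the platform can reliably achieve (m/s^2)
    """
    margin = max(0.0, obstacle_range - standoff)
    v_safe = (2.0 * decel_max * margin) ** 0.5   # v^2 = 2 a d
    return min(v_requested, v_safe)

# The cap shrinks smoothly to zero as the robot approaches the standoff distance.
for r in (3.0, 1.5, 0.8, 0.5):
    print(r, round(guarded_speed(1.0, r), 3))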
Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.
Deng, Fucheng; Zhu, Xiaorui; He, Chao
2017-09-13
Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is their high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting the traversable region outdoors. In the proposed method, an appearance model based on a multivariate Gaussian is quickly constructed from a sample region in the left image, adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as on a real mobile robot. Implementation on the mobile robot has shown its ability in real-time navigation applications.
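A minimal sketch of the appearance-model step described above: a multivariate Gaussian is fitted to pixels from the assumed road sample region and the rest of the image is classified by Mahalanobis distance. The threshold, colour space and sample-region coordinates are illustrative assumptions; the paper's adaptive sample-region selection is not reproduced.

import numpy as np

def fit_appearance_model(sample_pixels):
    """Fit a multivariate Gaussian to RGB pixels from the assumed-road sample
    region (e.g. the area below the vanishing point between dominant borders)."""
    X = sample_pixels.reshape(-1, 3).astype(float)
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(3)   # regularize
    return mean, np.linalg.inv(cov)

def traversable_mask(image, mean, cov_inv, thresh=3.0):
    """Label pixels whose Mahalanobis distance to the road model is small."""
    diff = image.reshape(-1, 3).astype(float) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return (d2 < thresh ** 2).reshape(image.shape[:2])

# Usage sketch:
# sample = left_image[400:480, 200:440]        # assumed road sample region
# mean, cov_inv = fit_appearance_model(sample)
# mask = traversable_mask(left_image, mean, cov_inv)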
Mobility Systems For Robotic Vehicles
NASA Astrophysics Data System (ADS)
Chun, Wendell
1987-02-01
The majority of existing robotic systems can be decomposed into five distinct subsystems: locomotion, control/man-machine interface (MMI), sensors, power source, and manipulator. When designing robotic vehicles, there are two main requirements: first, to design for the environment and second, for the task. The environment can be correlated with known missions. This can be seen by analyzing existing mobile robots. Ground mobile systems are generally wheeled, tracked, or legged. More recently, underwater vehicles have gained greater attention. For example, Jason Jr. made history by surveying the sunken luxury liner, the Titanic. The next big surge of robotic vehicles will be in space. This will evolve as a result of NASA's commitment to the Space Station. The foreseeable robots will interface with current systems as well as standalone, free-flying systems. A space robotic vehicle is similar to its underwater counterpart with very few differences. Their commonality includes missions and degrees-of-freedom. The issues of stability and communication are inherent in both systems and environment.
Intelligent navigation and accurate positioning of an assist robot in indoor environments
NASA Astrophysics Data System (ADS)
Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke
2017-12-01
Robot navigation and accurate positioning in indoor environments are still challenging tasks, especially in robot applications assisting disabled and/or elderly people in museum and art gallery environments. In this paper, we present a human-like navigation method, where neural networks control the wheelchair robot to reach the goal location safely by imitating the supervisor's motions, and to position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, and uses a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller with the Conjugate Gradient Backpropagation training algorithm gives a robust response to guide the mobile robot accurately to the goal position.
Fundamentals of soft robot locomotion
2017-01-01
Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human–robot interaction and locomotion. Although field applications have emerged for soft manipulation and human–robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. PMID:28539483
Fundamentals of soft robot locomotion.
Calisti, M; Picardi, G; Laschi, C
2017-05-01
Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).
[Mobile autonomous robots-Possibilities and limits].
Maehle, E; Brockmann, W; Walthelm, A
2002-02-01
Besides industrial robots, which today are firmly established in production processes, service robots are becoming more and more important. They are intended to provide services for humans in different areas of their professional and everyday environment, including medicine. Most of these service robots are mobile, which requires intelligent autonomous behaviour. After characterising the different kinds of robots, the relevant paradigms of intelligent autonomous behaviour for mobile robots are critically discussed in this paper and illustrated by three concrete examples of robots realized in Lübeck. In addition, a short survey of current kinds of surgical robots as well as an outlook on future developments is given.
Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration
NASA Astrophysics Data System (ADS)
Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.
2017-09-01
One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling, also known as the mobile robot path planning problem. In this paper, we examine the performance of a robust searching algorithm that relies on the use of harmonic potentials of the environment to generate smooth and safe paths for mobile robot navigation in a static, known indoor environment. The harmonic potentials are discretized by using the Laplacian operator to form a system of algebraic approximation equations. This linear algebraic system is then solved via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is then compared and analyzed against existing algorithms in terms of number of iterations and execution time. The results show that the proposed algorithm performs better than the existing methods.
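The specific 4-EGAOR scheme is not reproduced here; the sketch below solves the same discrete Laplace problem with plain successive over-relaxation on a small grid and then follows the steepest descent of the harmonic potential to the goal, which conveys the idea. Grid size, relaxation factor and iteration count are assumed values.

import numpy as np

def harmonic_potential(grid, goal, omega=1.8, iters=3000):
    """Relax the Laplace equation on a grid map.
    grid: 2D array, 1 = obstacle cell, 0 = free cell.
    Obstacles and walls are held at potential 1, the goal at 0."""
    u = np.ones_like(grid, dtype=float)
    free = (grid == 0)
    u[free] = 0.5
    u[goal] = 0.0
    for _ in range(iters):
        for i in range(1, grid.shape[0] - 1):
            for j in range(1, grid.shape[1] - 1):
                if free[i, j] and (i, j) != goal:
                    nb = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                    u[i, j] += omega * (nb - u[i, j])    # SOR update
    return u

def descend(u, start, goal, max_steps=500):
    """Follow the steepest descent of the potential from start to goal."""
    path, cur = [start], start
    for _ in range(max_steps):
        if cur == goal:
            break
        i, j = cur
        nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        cur = min(nbrs, key=lambda c: u[c])
        path.append(cur)
    return path

grid = np.zeros((20, 20)); grid[:, [0, -1]] = 1; grid[[0, -1], :] = 1
grid[5:15, 10] = 1                        # a wall with gaps at the top and bottom
u = harmonic_potential(grid, goal=(17, 17))
print(descend(u, start=(2, 2), goal=(17, 17))[-1])    # should end at (17, 17)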
Dynamic multisensor fusion for mobile robot navigation in an indoor environment
NASA Astrophysics Data System (ADS)
Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.
2001-10-01
This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion (sonar, CCD camera and IR sensors) for a map-building mobile robot to navigate, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We give an explanation of the robot system architecture designed and implemented in this study and a short review of existing techniques, since several recent thorough books and review papers exist on this topic. We focus on the main results with relevance to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM), and conclude by discussing some possible future extensions of the project. We first deal with the general principles of the navigation and guidance architecture, then with the detailed functions of recognizing and updating the environment, obstacle detection and motion assessment, together with first results from simulation runs.
NASA Technical Reports Server (NTRS)
Mann, R. C.; Fujimura, K.; Unseren, M. A.
1992-01-01
One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots, and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace.
Trajectory tracking control for a nonholonomic mobile robot under ROS
NASA Astrophysics Data System (ADS)
Lakhdar Besseghieur, Khadir; Trębiński, Radosław; Kaczmarek, Wojciech; Panasiuk, Jarosław
2018-05-01
In this paper, the implementation of a trajectory tracking control strategy on a ROS-based mobile robot is considered. Our test bench is the nonholonomic mobile robot ‘TURTLEBOT’. ROS considerably facilitates setting up a suitable environment to test the designed controller. Our aim is to develop a framework using ROS concepts so that a trajectory tracking controller can be implemented on any ROS-enabled mobile robot. Practical experiments with ‘TURTLEBOT’ are conducted to assess the framework's reliability.
Analyzing Cyber-Physical Threats on Robotic Platforms.
Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J
2018-05-21
Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication link and applications are usually supported through a client/server network connection. This networking system is susceptible to being attacked and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms; specifically the communication link and the applications. Threats target the integrity, availability and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS) and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on the robotic platforms, especially when the robots are involved in critical missions or applications.
Analyzing Cyber-Physical Threats on Robotic Platforms †
2018-01-01
Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication link and applications are usually supported through a client/server network connection. This networking system is susceptible to being attacked and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms; specifically the communication link and the applications. Threats target the integrity, availability and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS) and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on the robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403
Learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.
2003-10-01
Unlike intelligent industrial robots which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral and derivative (PID) controller and an artificial neural network provides an adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that goes "beyond the adaptive critic." A mathematical model of the creative control process is presented that illustrates its use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
Robot map building based on fuzzy-extending DSmT
NASA Astrophysics Data System (ADS)
Li, Xinde; Huang, Xinhan; Wu, Zuyu; Peng, Gang; Wang, Min; Xiong, Youlun
2007-11-01
With the extensive application of mobile robots in many different fields, map building in unknown environments has become one of the principal issues in the field of intelligent mobile robots. However, the information acquired in map building presents characteristics of uncertainty, imprecision and even high conflict, especially in the course of building grid maps using sonar sensors. In this paper, we extend DSmT with fuzzy theory by considering different fuzzy T-norm operators (such as the Algebraic Product operator, Bounded Product operator, Einstein Product operator and Default minimum operator), in order to develop a more general and flexible combination rule for more extensive application. At the same time, we apply fuzzy-extended DSmT to mobile robot map building with the help of a new self-localization method based on neighboring field appearance matching ( -NFAM), to make the new tool more robust in very complex environments. An experiment is conducted to reconstruct the map with the new tool in an indoor environment, in order to compare the map-building performance of the four T-norm operators when a Pioneer II mobile robot runs along the same trace. Finally, we conclude that this study develops a new idea for extending DSmT, provides a new approach for autonomous navigation of mobile robots, and provides a human-computer interactive interface to manage and manipulate the robot remotely.
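The four fuzzy T-norm operators named above are standard; the sketch below simply lists them and shows how a chosen operator would replace the product when conjunctively combining two belief masses. The full fuzzy-extended DSmT combination rule, with its redistribution over the hyper-power set, is not reproduced.

def t_norm_algebraic(a, b):
    """Algebraic Product T-norm."""
    return a * b

def t_norm_bounded(a, b):
    """Bounded Product T-norm: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def t_norm_einstein(a, b):
    """Einstein Product T-norm: ab / (2 - (a + b - ab))."""
    return (a * b) / (2.0 - (a + b - a * b))

def t_norm_minimum(a, b):
    """Default minimum T-norm."""
    return min(a, b)

# The conjunctive combination of two belief masses m1 and m2 assigned to the
# same (intersected) proposition becomes T(m1, m2) under the chosen operator;
# different T-norms give more or less conservative fusion.
for name, T in [("algebraic", t_norm_algebraic), ("bounded", t_norm_bounded),
                ("einstein", t_norm_einstein), ("minimum", t_norm_minimum)]:
    print(name, round(T(0.7, 0.6), 3))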
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Space-time modeling using environmental constraints in a mobile robot system
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1990-01-01
Grid-based models of a robot's local environment have been used by many researchers building mobile robot control systems. The attraction of grid-based models is their clear parallel between the internal model and the external world. However, the discrete nature of such representations does not match well with the continuous nature of actions and usually serves to limit the abilities of the robot. This work describes a spatial modeling system that extracts information from a grid-based representation to form a symbolic representation of the robot's local environment. The approach makes a separation between the representation provided by the sensing system and the representation used by the action system. Separation allows asynchronous operation between sensing and action in a mobile robot, as well as the generation of a more continuous representation upon which to base actions.
Intelligent mobility research for robotic locomotion in complex terrain
NASA Astrophysics Data System (ADS)
Trentini, Michael; Beckman, Blake; Digney, Bruce; Vincent, Isabelle; Ricard, Benoit
2006-05-01
The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating practical, cost effective, autonomous intelligent systems capable of completing military missions in complex operating environments." The mobility requirement for ground-based mobile systems operating in urban settings must increase significantly if robotic technology is to augment human efforts in these roles and environments. The intelligence required for autonomous systems to operate in complex environments demands advances in many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation, and navigation, but the problem of locomotion in complex terrain has largely been ignored. In order to achieve its objective, the Autonomous Intelligent Systems Section is pursuing research that explores the use of intelligent mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience. These algorithms seek to exploit available world representations of the environment and the inherent dexterity of the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The primary focus of the paper is to present the intelligent mobility research within the framework of the research methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical research gap. This research will create effective intelligence to improve the mobility of ground-based mobile systems operating in urban settings to assist the Canadian Forces in their future urban operations.
Object Detection Techniques Applied on Mobile Robot Semantic Navigation
Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto
2014-01-01
The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a strong need for algorithms that provide robots with this sort of skill, from locating the objects needed to accomplish a task up to treating these objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation, using current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot, to prove its accuracy and efficiency. PMID:24732101
Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents
2016-07-27
synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces... providing a test environment where the human control of a robot agent can be experimentally validated.
Ultra wide-band localization and SLAM: a comparative study for mobile robot navigation.
Segura, Marcelo J; Auat Cheein, Fernando A; Toibero, Juan M; Mut, Vicente; Carelli, Ricardo
2011-01-01
In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel a UWB-based localization system and a SLAM algorithm on a mobile robot navigating within an environment. Real-time results as well as an error analysis are also shown in this work.
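A minimal sketch of UWB-style TOA ranging followed by least-squares trilateration against fixed anchors, assuming synchronized clocks and line-of-sight conditions; anchor positions and the test point are illustrative, and the paper's UWB system and SLAM algorithm are not reproduced.

import numpy as np

C = 299_792_458.0          # speed of light (m/s)

def toa_to_ranges(toa_seconds):
    """Convert measured times of arrival to ranges (assumes synchronized clocks)."""
    return C * np.asarray(toa_seconds)

def trilaterate(anchors, ranges):
    """Least-squares 2D position from ranges to fixed UWB anchors,
    linearized by subtracting the equation of the first anchor."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]]
true_p = np.array([3.0, 2.0])
ranges = np.linalg.norm(np.asarray(anchors) - true_p, axis=1)
print(trilaterate(anchors, ranges))    # ~[3.0, 2.0]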
Object Transportation by Two Mobile Robots with Hand Carts
Hara, Tatsunori
2014-01-01
This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50–60%. In addition, we demonstrated the efficacy of this task in real environments where errors occur in robot sensing and movement. PMID:27433499
Object Transportation by Two Mobile Robots with Hand Carts.
Sakuyama, Takuya; Figueroa Heredia, Jorge David; Ogata, Taiki; Hara, Tatsunori; Ota, Jun
2014-01-01
This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50-60%. In addition, we demonstrated the efficacy of this task in real environments where errors occur in robot sensing and movement.
Path planning in GPS-denied environments via collective intelligence of distributed sensor networks
NASA Astrophysics Data System (ADS)
Jha, Devesh K.; Chattopadhyay, Pritthi; Sarkar, Soumik; Ray, Asok
2016-05-01
This paper proposes a framework for reactive goal-directed navigation without global positioning facilities in unknown dynamic environments. A mobile sensor network is used for localising regions of interest for path planning of an autonomous mobile robot. The underlying theory is an extension of a generalised gossip algorithm that has been recently developed in a language-measure-theoretic setting. The algorithm has been used to propagate local decisions of target detection over a mobile sensor network and thus generate a belief map for the detected target over the network. In this setting, an autonomous mobile robot may communicate only with a few mobile sensing nodes in its own neighbourhood and localise itself relative to the communicating nodes with bounded uncertainties. The robot makes use of the knowledge based on the belief of the mobile sensors to generate a sequence of way-points leading to a possible goal. The estimated way-points are used by a sampling-based motion planning algorithm to generate feasible trajectories for the robot. The proposed concept has been validated by numerical simulation on a mobile sensor network test-bed and a Dubins car-like robot.
NASA Astrophysics Data System (ADS)
Zheng, Taixiong
2005-12-01
A neuro-fuzzy network based approach for robot motion in an unknown environment was proposed. In order to control the robot motion in an unknown environment, the behavior of the robot was classified into moving to the goal and avoiding obstacles. Then, according to the dynamics of the robot and its behavioral characteristics in an unknown environment, fuzzy control rules were introduced to control the robot motion. Finally, a six-layer neuro-fuzzy network was designed to map what the robot senses to motion control commands. After being trained, the network may be used for robot motion control. Simulation results show that the proposed approach is effective for robot motion control in an unknown environment.
Adaptive Tracking Control for Robots With an Interneural Computing Scheme.
Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang
2018-04-01
Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.
NASA Technical Reports Server (NTRS)
Parness, Aaron
2012-01-01
Three robots that extend microspine technology to enable advanced mobility are presented. First, the Durable Reconnaissance and Observation Platform (DROP) and the ReconRobotics Scout platform use a new rotary configuration of microspines to provide improved soldier-portable reconnaissance by moving rapidly over curbs and obstacles, transitioning from horizontal to vertical surfaces, climbing rough walls and surviving impacts. Next, the four-legged LEMUR robot uses new configurations of opposed microspines to anchor to both manmade and natural rough surfaces. Using these anchors as feet enables mobility in unstructured environments, from urban disaster areas to deserts and caves.
A mobile robots experimental environment with event-based wireless communication.
Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián
2013-07-22
An experimental platform for communication among a set of mobile robots through a wireless network has been developed. The mobile robots get their position through a camera which acts as the sensor. The video images are processed in a PC and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented.
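A minimal sketch of the event-triggered communication idea described above: each robot re-broadcasts its position only when it has drifted more than a threshold from the last value its neighbours received, and the formation update uses only those last broadcast values. Class, method and parameter names are illustrative, not from the platform's code.

import numpy as np

class EventTriggeredRobot:
    """Sketch of event-based communication for formation control."""

    def __init__(self, position, delta=0.05):
        self.position = np.asarray(position, dtype=float)
        self.last_broadcast = self.position.copy()
        self.delta = delta

    def maybe_broadcast(self, send):
        """Call every control period; `send(position)` talks to the network
        (e.g. the ZigBee/Waspmote link) only at event times."""
        if np.linalg.norm(self.position - self.last_broadcast) >= self.delta:
            self.last_broadcast = self.position.copy()
            send(self.last_broadcast)

    def formation_step(self, neighbour_broadcasts, offsets, gain=0.5, dt=0.1):
        """Consensus-style update toward the desired formation, using only the
        neighbours' last broadcast positions."""
        u = np.zeros(2)
        for q, off in zip(neighbour_broadcasts, offsets):
            u += gain * ((q + off) - self.position)
        self.position += dt * u

# Usage sketch:
# robot = EventTriggeredRobot([0.0, 0.0])
# robot.formation_step([np.array([1.0, 0.0])], offsets=[np.array([-0.5, 0.0])])
# robot.maybe_broadcast(lambda p: print("broadcast", p))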
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-12-26
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
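A much smaller fuzzy controller than the 24-rule system described above, shown only to illustrate the rule-based mapping from distance readings to wheel velocities; membership functions, rules, gains and the axle length are assumptions.

import numpy as np

def mu_near(d, d_max=0.5):
    """Membership of distance d in the fuzzy set NEAR (1 at contact, 0 beyond d_max)."""
    return float(np.clip(1.0 - d / d_max, 0.0, 1.0))

def mu_far(d, d_max=0.5):
    return 1.0 - mu_near(d, d_max)

def fuzzy_avoidance(d_left, d_front, d_right, v_cruise=0.3, turn=1.0):
    """Tiny fuzzy controller mapping three distance readings to wheel velocities.
    Rules (illustrative, far fewer than the paper's 24):
      1) front FAR               -> go straight at cruise speed
      2) front NEAR & left FAR   -> turn left
      3) front NEAR & right FAR  -> turn right
    Each rule proposes (v, omega); the output is the firing-strength-weighted mean."""
    rules = [
        (mu_far(d_front),                        (v_cruise, 0.0)),
        (min(mu_near(d_front), mu_far(d_left)),  (0.05, +turn)),
        (min(mu_near(d_front), mu_far(d_right)), (0.05, -turn)),
    ]
    wsum = sum(w for w, _ in rules) + 1e-9
    v = sum(w * out[0] for w, out in rules) / wsum
    omega = sum(w * out[1] for w, out in rules) / wsum
    # Differential-drive wheel speeds (half-axle length assumed 0.05 m)
    return v - 0.05 * omega, v + 0.05 * omega

print(fuzzy_avoidance(0.6, 0.15, 0.1))   # obstacle ahead, wall close on the right -> turns left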
Have I Been Here Before? A Method for Detecting Loop Closure With LiDAR
2015-01-01
Report excerpt (extraction fragment; front matter omitted): the report describes a mobile robot system tasked with exploring a network of austere underground tunnels using only a laser scanner as a guide, and presents techniques for using mobile robots to generate detailed maps of different environments. Loop-closure detection is especially important for small mobile robots, where sensor drift and inaccuracies can cause significant mapping errors.
Energy optimization in mobile sensor networks
NASA Astrophysics Data System (ADS)
Yu, Shengwei
Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially because mobility (i.e., locomotion control), routing (i.e., communications) and sensing are characteristics of mobile robots that can all be exploited for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to the energy optimization of a network of mobile robots that monitors the environment, or of a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms that exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies five problems: 1. Network-lifetime maximization by controlling the positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving it; an optimal solution is obtained by this method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for the mobility cost. For the second problem, the formulation is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel modified sequential convex approximation procedure with fast convergence. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which also justifies the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal moving trajectories of the robotic nodes and the optimal network links, questions that are not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to mobile visual sensor networks, which are useful in many applications.
We investigate the joint design of mobility, data routing, and encoding power to help improve the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
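A hedged LaTeX sketch of a generic network-lifetime maximization problem of the kind studied above, with assumed symbols: lifetime T, node positions p, routing flows r_ij, per-node energy rate e_i(.), sensed-data rate s_i and battery budget E_i. The thesis's exact constraints, variable substitutions and saddle-point method are not reproduced.

```latex
\begin{aligned}
\max_{T,\;\mathbf{p},\;\mathbf{r}} \quad & T \\
\text{s.t.} \quad & T \, e_i(\mathbf{p}, \mathbf{r}) \le E_i \quad \forall i
  && \text{(battery budget of node } i\text{)}\\
& \textstyle\sum_{j} r_{ji} + s_i = \sum_{j} r_{ij} \quad \forall i
  && \text{(conservation of the sensed data flow)}\\
& r_{ij} \ge 0 .
\end{aligned}
```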
Hierarchical Modelling Of Mobile, Seeing Robots
NASA Astrophysics Data System (ADS)
Luh, Cheng-Jye; Zeigler, Bernard P.
1990-03-01
This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
From Sci-Fi to Reality--Mobile Robots Get the Job Done
ERIC Educational Resources Information Center
Roman, Harry T.
2006-01-01
Robots are simply computers that can interact with their environment. Some are fixed in place in industrial assembly plants for cars, appliances, micro electronic circuitry, and pharmaceuticals. Another important category of robots is the mobiles, machines that can be driven to the workplace, often designed for hazardous duty operation or…
Ultra Wide-Band Localization and SLAM: A Comparative Study for Mobile Robot Navigation
Segura, Marcelo J.; Auat Cheein, Fernando A.; Toibero, Juan M.; Mut, Vicente; Carelli, Ricardo
2011-01-01
In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of running a UWB localization-based system and a SLAM algorithm in parallel on a mobile robot navigating within an environment. Real-time results as well as an error analysis are also shown in this work. PMID:22319397
Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois
2017-07-01
A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable cost. What they still lack to be effective remote telepresence systems for home care assistance are the capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors, or, in other words, to minimize cognitive load and maximize situation awareness. This paper describes our approach to adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.
Multisensor-based human detection and tracking for mobile service robots.
Bellotto, Nicola; Hu, Huosheng
2009-02-01
One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the leg positions using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
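As a minimal illustration of sequentially fusing leg and face detections into one person track, here is a sketch using a plain linear Kalman filter with a constant-velocity model. The paper uses an unscented Kalman filter; the state layout, measurement models and noise values below are assumptions, not the authors' parameters.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update (a linear stand-in for the paper's UKF)."""
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# State: planar position and velocity of the tracked person, [px, py, vx, vy].
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
Q = 0.05 * np.eye(4)                                  # process noise (assumed)
H_pos = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])        # both sensors measure position only

def track_step(x, P, leg_meas=None, face_meas=None):
    """Constant-velocity prediction, then sequential fusion of whichever detector fired."""
    x, P = F @ x, F @ P @ F.T + Q
    if leg_meas is not None:                          # laser leg detector (lower noise, assumed)
        x, P = kf_update(x, P, leg_meas, H_pos, 0.02 * np.eye(2))
    if face_meas is not None:                         # camera face detector (higher noise, assumed)
        x, P = kf_update(x, P, face_meas, H_pos, 0.10 * np.eye(2))
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = track_step(x, P, leg_meas=np.array([1.0, 0.5]), face_meas=np.array([1.1, 0.45]))
```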
2016-01-01
Thesis excerpt (acknowledgements and table of contents, extraction fragment): chapter topics include multi-session and multi-robot SLAM, robust techniques for SLAM back-ends, and the importance of SLAM in autonomous mobile robotics.
Investigation of human-robot interface performance in household environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.
2016-05-01
Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.
NASA Astrophysics Data System (ADS)
Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques
2005-06-01
The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.
A Mobile Service Robot for Life Science Laboratories
NASA Astrophysics Data System (ADS)
Schulenburg, Erik; Elkmann, Norbert; Fritzsche, Markus; Teutsch, Christian
In this paper we present a project that is developing a mobile service robot to assist users in biological and pharmaceutical laboratories by executing routine jobs such as filling and transporting microplates. A preliminary overview of the design of the mobile platform with a robotic arm is provided. Safety aspects are one focus of the project, since the robot and humans will share a common environment. Hence, several safety sensors such as laser scanners, thermographic components and artificial skin are employed. These are described along with the approaches to object recognition.
Peer-to-peer model for the area coverage and cooperative control of mobile sensor networks
NASA Astrophysics Data System (ADS)
Tan, Jindong; Xi, Ning
2004-09-01
This paper presents a novel model and distributed algorithms for the cooperation and redeployment of mobile sensor networks. A mobile sensor network is composed of a collection of wirelessly connected mobile robots equipped with a variety of sensors. In such a sensor network, each mobile node has sensing, computation, communication, and locomotion capabilities. The locomotion ability enhances the autonomous deployment of the system, which can be rapidly deployed in hostile environments, inaccessible terrain, or disaster relief operations. The mobile sensor network is essentially a cooperative multiple-robot system. This paper first presents a peer-to-peer model to define the relationship between neighboring communicating robots. Delaunay triangulation and Voronoi diagrams are used to define the geometrical relationship between sensor nodes. This distributed model allows formal analysis of the fusion of spatio-temporal sensory information in the network. Based on the distributed model, this paper discusses a fault-tolerant algorithm for autonomous self-deployment of the mobile robots. The algorithm considers the environment constraints, the presence of obstacles, and the nonholonomic constraints of the robots. The distributed algorithm enables the system to reconfigure itself such that the area covered by the system can be enlarged. Simulation results have shown the effectiveness of the distributed model and deployment algorithms.
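A small sketch of how a Delaunay triangulation can define the neighbor relations between sensor nodes, as in the peer-to-peer model above. It uses scipy.spatial.Delaunay on random example coordinates and does not reproduce the paper's deployment algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(positions):
    """Map each node index to the set of its Delaunay neighbors.

    positions : (N, 2) array of sensor-node coordinates.
    """
    tri = Delaunay(positions)
    neighbors = {i: set() for i in range(len(positions))}
    for simplex in tri.simplices:          # each simplex is a triangle of node indices
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return neighbors

# Example: 10 randomly placed nodes (positions would normally come from the robots).
pts = np.random.rand(10, 2)
print(delaunay_neighbors(pts))
```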
People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments
NASA Astrophysics Data System (ADS)
Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.
People detection and tracking is a key issue for social robot design and effective human-robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real-world scenarios it is common to find unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera separates people from the background. The segmentation method proposed in this work takes human body proportions into account to segment people and provides a first estimate of people's locations. After segmentation, an adaptive contour people model based on people's distance to the robot is used to calculate a probability of detecting people. Finally, people are detected by merging the probabilities of the contour people model and by evaluating evidence over time through a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view, with a mobile robot in real-world scenarios.
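A compact sketch of accumulating detection evidence over time with a recursive Bayesian update, as described above. The detection likelihoods and the initial prior are illustrative values, not the paper's contour-model probabilities.

```python
def update_person_belief(prior, detected, p_det_person=0.8, p_det_background=0.2):
    """Recursive Bayesian update of the belief that a segmented region is a person.

    prior    : current belief P(person) for this region
    detected : whether the contour model fired on this frame
    The likelihoods are assumed values, not those of the paper.
    """
    if detected:
        num = p_det_person * prior
        den = num + p_det_background * (1.0 - prior)
    else:
        num = (1.0 - p_det_person) * prior
        den = num + (1.0 - p_det_background) * (1.0 - prior)
    return num / den

belief = 0.5                                   # uninformative prior
for frame_hit in [True, True, False, True]:
    belief = update_person_belief(belief, frame_hit)
print(round(belief, 3))                        # belief grows as evidence accumulates
```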
Mobile app for human-interaction with sitter robots
NASA Astrophysics Data System (ADS)
Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.
2017-05-01
Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits for the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation of robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick and place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that enables a patient sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
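A compact sketch of the PSO-coordinated search step with the fitness read from an estimated odor-source probability map. The map here is random and the PSO coefficients are standard assumed values; in the paper the map comes from Bayesian and fuzzy estimation fused across robots, which is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 50 x 50 odor-source probability map (stand-in for the fused estimate).
prob_map = rng.random((50, 50))

def fitness(p):
    i, j = np.clip(p.astype(int), 0, 49)
    return prob_map[i, j]

n_robots, dims = 4, 2
pos = rng.uniform(0, 49, (n_robots, dims))
vel = np.zeros((n_robots, dims))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients (assumed)
for _ in range(100):
    r1, r2 = rng.random((n_robots, dims)), rng.random((n_robots, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 49)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("highest-probability cell found:", gbest.astype(int))
```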
Robust mobility in human-populated environments
NASA Astrophysics Data System (ADS)
Gonzalez, Juan Pablo; Phillips, Mike; Neuman, Brad; Likhachev, Max
2012-06-01
Creating robots that can help humans in a variety of tasks requires robust mobility and the ability to safely navigate among moving obstacles. This paper presents an overview of recent research in the Robotics Collaborative Technology Alliance (RCTA) that addresses many of the core requirements for robust mobility in human-populated environments. Safe Interval Path Planning (SIPP) allows for very fast planning in dynamic environments when planning time-minimal trajectories. Generalized Safe Interval Path Planning extends this concept to trajectories that minimize arbitrary cost functions. Finally, the generalized PPCP algorithm is used to generate plans that reason about the uncertainty in the predicted trajectories of moving obstacles and try to actively disambiguate the intentions of humans whenever necessary. We show how these approaches consider moving obstacles and temporal constraints and produce high-fidelity paths. Experiments in simulated environments show the performance of the algorithms under different controlled conditions, and experiments on physical mobile robots interacting with humans show how the algorithms perform under the uncertainties of the real world.
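A minimal sketch of the core SIPP data structure: the safe intervals of a single grid cell, computed from the predicted occupancy times of dynamic obstacles. The horizon and interval values are assumptions, and the search over (cell, interval) states is not shown.

```python
def safe_intervals(occupied, horizon=100.0):
    """Compute the safe intervals of a single grid cell.

    occupied : list of (t_start, t_end) intervals during which a dynamic
               obstacle is predicted to occupy the cell
    Returns the complementary list of maximal collision-free intervals, which
    SIPP uses as search states instead of individual timesteps.
    """
    intervals = []
    t = 0.0
    for start, end in sorted(occupied):
        if start > t:
            intervals.append((t, start))
        t = max(t, end)
    if t < horizon:
        intervals.append((t, horizon))
    return intervals

# A cell blocked twice by predicted pedestrian trajectories (assumed times):
print(safe_intervals([(4.0, 6.0), (10.0, 12.5)]))
# -> [(0.0, 4.0), (6.0, 10.0), (12.5, 100.0)]
```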
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots serving humans or as industrial robots replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. Structured lighting is used as the basis of the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among all the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency and accuracy of this sensor system for 3D environment sensing and recognition.
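For reference, a hedged sketch of the standard single-stripe optical triangulation geometry, with generic symbols rather than the paper's notation: baseline b between the camera and the laser projector, camera focal length f, projection angle theta of the laser plane with respect to the baseline, and image coordinate u of the observed stripe point. Intersecting the camera ray with the laser plane gives

```latex
Z \;=\; \frac{f\, b}{\,u + f \cot\theta\,},
\qquad
X \;=\; \frac{u\, Z}{f},
```

so each matched stripe point yields one (X, Z) profile sample; the multi-stripe projector and the epipolar matching in the paper extend this per-point relation to full surface profiles.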
A novel traveling wave piezoelectric actuated tracked mobile robot utilizing friction effect
NASA Astrophysics Data System (ADS)
Wang, Liang; Shu, Chengyou; Jin, Jiamei; Zhang, Jianhui
2017-03-01
A novel traveling-wave piezoelectric-actuated tracked mobile robot with potential application to robotic rovers was proposed and investigated in this study. The proposed tracked mobile robot is composed of a parallelogram-frame-structure piezoelectric transducer with four rings and a metal track. Utilizing the converse piezoelectric and friction effects, traveling waves are propagated in the rings and the metal track is then actuated by the piezoelectric transducer. Compared with traditional tracked mechanisms, the proposed tracked mobile robot has a simpler and more compact structure and requires no lubricant, which eliminates the problem of lubricant volatilization and deflation and thus allows operation in a vacuum environment. Dynamic characteristics were simulated and measured to reveal the mechanism by which the piezoelectric transducer actuates the track. Experimental investigations of the traveling-wave piezoelectric-actuated tracked mobile robot were then carried out, and the results indicate that the robot prototype, driven by a pair of exciting voltages of 460 Vpp, achieves a maximum velocity of 57 mm s-1 on a foam plate and can cross obstacles up to 27 mm high. The proposed tracked mobile robot shows potential as the driving system of robotic rovers.
Event-Based Control Strategy for Mobile Robots in Wireless Environments.
Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto
2015-12-02
In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to exchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been tested on classical navigation algorithms, such as wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution uses communication resources more efficiently than the classical discrete-time strategy while achieving the same accuracy.
Research and development of electric vehicles for clean transportation.
Wada, Masayoshi
2009-01-01
This article presents the research and development of an electric vehicle (EV) in the Department of Human-Robotics, Saitama Institute of Technology, Japan. Electric mobile systems developed in our laboratory include a converted electric automobile, an electric wheelchair and a personal mobile robot. These mobile systems contribute to clean transportation since the energy sources and drive devices of all the vehicles, i.e., batteries and electric motors, do not degrade the environment. Robotic technologies were applied to drive the motors for vehicle travel.
Assistance System for Disabled People: A Robot Controlled by Blinking and Wireless Link
NASA Astrophysics Data System (ADS)
Del Val, Lara; Jiménez, María I.; Alonso, Alonso; de La Rosa, Ramón; Izquierdo, Alberto; Carrera, Albano
Disabled people already benefit from a great deal of technical assistance that improves their quality of life. This article presents a system that allows interaction between a physically disabled person and his or her environment. The system is controlled by voluntary muscular movements, particularly those of the facial muscles. These movements are translated into machine-understandable instructions and sent over a wireless link to a mobile robot that executes them. The robot includes a video camera to show the user the environment along the route the robot follows. This system gives greater personal autonomy to people with reduced mobility.
2017-06-01
Thesis excerpt (report documentation page and conclusions, extraction fragment): "Feasibility of Conducting Human Tracking and Following in an Indoor Environment Using a Microsoft Kinect and the Robot Operating System"; the thesis implements human following on a mobile robot in an indoor environment and outlines directions for future work.
Shahriari, Mohammadali; Biglarbegian, Mohammad
2018-01-01
This paper presents a new conflict resolution methodology for multiple mobile robots that ensures their motion liveness, especially in cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem, minimizing the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be solved easily by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems which can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment with 4357 intersections in the robots' paths. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature both quantitatively and qualitatively. This comparison shows that, while our approach is mathematically sound, it is also more computationally efficient, scales to a very large number of robots, and guarantees live and smooth robot motion.
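A toy sketch of the speed-only coordination idea for two robots and a single shared intersection: the paths are fixed, and only the speed of the later-arriving robot is reduced so the arrival times are separated. The clearance value and the delay-the-later-robot rule are assumptions; the paper's full optimization over many robots, intersections and clusters is not reproduced.

```python
def resolve_two_robot_conflict(d1, v1, d2, v2, clearance=1.0):
    """Adjust speeds so two robots do not occupy a shared intersection together.

    d_i : distance of robot i from the intersection along its fixed path
    v_i : nominal speed of robot i
    Returns adjusted (v1, v2) such that arrival times differ by at least
    `clearance` seconds, slowing the later-arriving robot as little as possible.
    """
    t1, t2 = d1 / v1, d2 / v2
    if abs(t1 - t2) >= clearance:
        return v1, v2                       # no conflict, keep nominal speeds
    if t1 <= t2:                            # robot 1 passes first; delay robot 2
        return v1, d2 / (t1 + clearance)
    return d1 / (t2 + clearance), v2        # otherwise delay robot 1

print(resolve_two_robot_conflict(10.0, 1.0, 10.5, 1.0))   # -> (1.0, ~0.95)
```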
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.
1992-03-01
This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-01-01
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them are equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach where three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs which are the eight distance sensors and the camera, two outputs which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles in different shapes and sizes. PMID:26712766
NASA Technical Reports Server (NTRS)
Agah, Arvin; Bekey, George A.
1994-01-01
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
Robotic vehicle with multiple tracked mobility platforms
Salton, Jonathan R [Albuquerque, NM; Buttz, James H [Albuquerque, NM; Garretson, Justin [Albuquerque, NM; Hayward, David R [Wetmore, CO; Hobart, Clinton G [Albuquerque, NM; Deuel, Jr., Jamieson K.
2012-07-24
A robotic vehicle having two or more tracked mobility platforms that are mechanically linked together with a two-dimensional coupling, thereby forming a composite vehicle of increased mobility. The robotic vehicle is operative in hazardous environments and can be capable of semi-submersible operation. The robotic vehicle is capable of remote controlled operation via radio frequency and/or fiber optic communication link to a remote operator control unit. The tracks have a plurality of track-edge scallop cut-outs that allow the tracks to easily grab onto and roll across railroad tracks, especially when crossing the railroad tracks at an oblique angle.
Towards Principled Experimental Study of Autonomous Mobile Robots
NASA Technical Reports Server (NTRS)
Gat, Erann
1995-01-01
We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) Two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed. 2) The probability distributions of these performance metrics are exponential rather than normal, and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.
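A small sketch, on synthetic data, of the two statistical checks reported above: measuring how correlated the time and distance metrics are, and testing whether the metric distribution looks exponential rather than normal. It uses scipy.stats; the data is generated here, not taken from the paper's experiments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-trial completion time and distance travelled.
time_s = rng.exponential(scale=40.0, size=200)
dist_m = 0.5 * time_s + rng.exponential(scale=15.0, size=200)

# 1) How well correlated are the two performance metrics?
r, _ = stats.pearsonr(time_s, dist_m)
print(f"Pearson r between time and distance: {r:.2f}")

# 2) Does completion time look exponential rather than normal?
loc, scale = stats.expon.fit(time_s, floc=0.0)
ks_exp  = stats.kstest(time_s, "expon", args=(loc, scale)).pvalue
ks_norm = stats.kstest(time_s, "norm", args=(time_s.mean(), time_s.std())).pvalue
print(f"KS p-value vs exponential: {ks_exp:.3f}, vs normal: {ks_norm:.3f}")
```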
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1993-01-01
A formulation that makes possible the integration of collision prediction and avoidance stages for mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the distance between the robot and a moving obstacle is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. A feedback control is developed and local asymptotic stability is proved if the velocity of the moving obstacle is bounded. Furthermore, a solution to the problem of inverse dynamics for the mobile robot is given. Simulation results verify the value of the proposed strategy.
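A hedged sketch of the central idea above: keep the nominal path fixed and modulate only the speed of progress along it as a function of the current distance to the moving obstacle. The scaling law, safety distance and gain below are illustrative choices, not the paper's feedback control law or its stability analysis.

```python
import numpy as np

def scaled_speed(s, nominal_speed, path, obstacle_pos, d_safe=0.5, gain=2.0):
    """Slow down (or stop) along a fixed nominal path as a moving obstacle gets close.

    s            : current arc-length parameter along the nominal path
    path         : callable s -> position on the nominal trajectory (known off-line)
    obstacle_pos : current position of the moving obstacle
    Only the speed along the path changes; the path itself is never re-planned.
    """
    d = np.linalg.norm(path(s) - obstacle_pos)
    scale = float(np.clip(gain * (d - d_safe) / d_safe, 0.0, 1.0))
    return nominal_speed * scale

# Straight-line nominal path; obstacle passing nearby (assumed geometry).
path = lambda s: np.array([s, 0.0])
print(scaled_speed(1.0, 0.4, path, np.array([1.3, 0.5])))   # slows to roughly a third
```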
Rice-obot 1: An intelligent autonomous mobile robot
NASA Technical Reports Server (NTRS)
Defigueiredo, R.; Ciscon, L.; Berberian, D.
1989-01-01
The Rice-obot I is the first in a series of Intelligent Autonomous Mobile Robots (IAMRs) being developed at Rice University's Cooperative Intelligent Mobile Robots (CIMR) lab. The Rice-obot I is mainly designed to be a testbed for various robotic and AI techniques, and a platform for developing intelligent control systems for exploratory robots. Researchers present the need for a generalized environment capable of combining all of the control, sensory and knowledge systems of an IAMR. They introduce Lisp-Nodes as such a system, and develop the basic concepts of nodes, messages and classes. Furthermore, they show how the control system of the Rice-obot I is implemented as sub-systems in Lisp-Nodes.
Using qualitative maps to direct reactive robots
NASA Technical Reports Server (NTRS)
Bertin, Randolph; Pendleton, Tom
1992-01-01
The principal advantage of mobile robots is that they are able to go to specific locations to perform useful tasks rather than have the tasks brought to them. It is important therefore that the robot be used to reach desired locations efficiently and reliably. A mobile robot whose environment extends significantly beyond its sensory horizon must maintain a representation of the environment, a map, in order to attain these efficiency and reliability requirements. We believe that qualitative mapping methods provide useful and robust representation schemes and that such maps may be used to direct the actions of a reactively controlled robot. In this paper we describe our experience in employing qualitative maps to direct, through the selection of desired control strategies, a reactive-behavior based robot. This mapping capability represents the development of one aspect of a successful deliberative/reactive hybrid control architecture.
Teleautonomous guidance for mobile robots
NASA Technical Reports Server (NTRS)
Borenstein, J.; Koren, Y.
1990-01-01
Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
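A rough sketch of a virtual force field (VFF)-style steering computation in the spirit described above: occupied cells of a local certainty grid push the robot away with a strength that falls off with squared distance, while the operator-prescribed direction pulls it forward. The constants and certainty values are assumptions, and the original method's histogram-grid update and speed control are omitted.

```python
import numpy as np

def vff_steering(robot, prescribed_dir, occupied_cells, f_rep=1.0):
    """Blend the operator's prescribed direction with grid-based repulsion.

    robot          : robot position (2-vector)
    prescribed_dir : unit vector of the direction commanded by the operator
    occupied_cells : iterable of (cell_center, certainty) from a local certainty grid
    """
    force = np.asarray(prescribed_dir, dtype=float).copy()
    for center, certainty in occupied_cells:
        diff = robot - center
        d = np.linalg.norm(diff)
        if d > 1e-6:
            force += f_rep * certainty * diff / d**3   # unit direction scaled by 1/d^2
    return force / np.linalg.norm(force)

# Obstacles ahead and slightly left of the commanded direction (assumed grid cells).
cells = [(np.array([1.0, 0.4]), 0.8), (np.array([1.3, 0.6]), 0.5)]
print(vff_steering(np.array([0.0, 0.0]), np.array([1.0, 0.0]), cells))
# -> the resultant direction bends to the right, around the obstacles
```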
Multicriteria adaptation principle on example of groups of mobile robots
NASA Astrophysics Data System (ADS)
Nelyubin, A. P.; Misyurin, S. Yu
2017-12-01
The article presents a multicriteria approach to the adaptation of groups of search, exploration or research robots to unknown and changing environmental conditions. The basis of this approach is the application of multicriteria analysis both at the design stage of a group of mobile robots and at the stage of its adaptation in real time. It is proposed to maintain diversity among the robots, in their properties and in their optimality criteria, in order to take the preferred mode of operation into account.
Training a Network of Electronic Neurons for Control of a Mobile Robot
NASA Astrophysics Data System (ADS)
Vromen, T. G. M.; Steur, E.; Nijmeijer, H.
An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.
Mobile robot trajectory tracking using noisy RSS measurements: an RFID approach.
Miah, M Suruz; Gueaieb, Wail
2014-03-01
Most RF beacon-based mobile robot navigation techniques rely on approximating line-of-sight (LOS) distances between the beacons and the robot. This is mostly performed using the robot's received signal strength (RSS) measurements from the beacons. However, an accurate mapping between the RSS measurements and the LOS distance is almost impossible to achieve in reverberant environments. This paper presents a partially observed feedback controller for a wheeled mobile robot where the feedback signal is in the form of noisy RSS measurements emitted from radio frequency identification (RFID) tags. The proposed controller requires neither an accurate mapping between the LOS distance and the RSS measurements, nor the linearization of the robot model. The controller performance is demonstrated through numerical simulations and real-time experiments.
An optimal control strategy for collision avoidance of mobile robots in non-stationary environments
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An optimal control formulation of the problem of collision avoidance of mobile robots in environments containing moving obstacles is presented. Collision avoidance is guaranteed if the minimum distance between the robot and the objects is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Furthermore, time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. Simulation results verify the value of the proposed strategy.
Determining robot actions for tasks requiring sensor interaction
NASA Technical Reports Server (NTRS)
Budenske, John; Gini, Maria
1989-01-01
The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems for planning the coordination of activities so as to achieve a goal, but none have been fully applied to actual mobile robots because of the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and is being developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
A Mobile Sensor Network System for Monitoring of Unfriendly Environments.
Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo
2008-11-14
Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are often too dangerous or inaccessible to humans or large robots and there are many challenges for deploying and maintaining wireless sensor networks in those unfriendly environments. This paper presents a mobile sensor network system for solving this problem. The system architecture, the mobile node design, the basic behaviors and advanced network capabilities have been investigated respectively. A wheel-based robotic node architecture is proposed here that can add controlled mobility to wireless sensor networks. A testbed including some prototype nodes has also been created for validating the basic functions of the proposed mobile sensor network system. Motion performance tests have been done to get the positioning errors and power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can be distributed evenly into the previously unknown environments. It provides powerful support for network deployment and maintenance and can ensure that the sensor network will work properly in unfriendly environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.; Fujimura, K.; Unseren, M.A.
One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots, and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace. 15 refs., 3 figs.
Calculating distance by wireless ethernet signal strength for global positioning method
NASA Astrophysics Data System (ADS)
Kim, Seung-Yong; Kim, Jeehong; Lee, Chang-goo
2005-12-01
This paper investigated mobile robot localization using wireless Ethernet for global localization and an INS for relative localization. For relative localization, a low-cost, self-contained INS was adopted; low-cost MEMS-based INS units have a short-period response and acceptable performance. A variety of sensors is generally used for mobile robot localization, but even with precise modeling of the sensors, errors inevitably accumulate. The IEEE 802.11b wireless Ethernet standard has been deployed in office buildings, museums, hospitals, shopping centers and other indoor environments, and many mobile robots already make use of wireless networking for communication, so location sensing with wireless Ethernet can be very useful for a low-cost robot. This research used the wireless Ethernet signal to compensate for the accumulation of errors, allowing the mobile robot to perform global localization from the many IEEE 802.11b access points installed in indoor environments. The chief difficulty in localization with wireless Ethernet is predicting signal strength: as a sensing quantity, RF signal strength measured indoors is non-linear with distance. Therefore, signal-strength profiles were recorded at reference points and used to relate the measured signal strength to the distance from each wireless Ethernet access point.
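For context, a hedged sketch of the log-distance path-loss model that is commonly used to convert RSS to distance. The reference power at 1 m and the path-loss exponent are assumed values that would need indoor calibration, which is precisely why the paper records empirical signal-strength profiles instead of relying on such a closed-form model.

```python
def rss_to_distance(rss_dbm, rss_at_1m=-40.0, path_loss_exp=2.5):
    """Invert the log-distance path-loss model RSS(d) = RSS(1 m) - 10*n*log10(d).

    rss_at_1m and path_loss_exp are assumed, environment-dependent parameters.
    """
    return 10 ** ((rss_at_1m - rss_dbm) / (10.0 * path_loss_exp))

for rss in (-40, -55, -70):
    print(rss, "dBm ->", round(rss_to_distance(rss), 2), "m")
```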
Automatic Operation For A Robot Lawn Mower
NASA Astrophysics Data System (ADS)
Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.
1987-02-01
A domestic mobile robot, a lawn mower that operates in an automatic mode, has been built at the Center of Robotics Research, University of Cincinnati. The robot lawn mower automatically completes its work using a region-filling operation, a new kind of path planning for mobile robots. Several strategies for region-filling path planning have been developed for partly known or unknown environments. Also, an advanced omnidirectional navigation system and a multisensor-based control system are used in the automatic operation. Research on the robot lawn mower, especially on region-filling path planning, is significant for industrial and agricultural applications.
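A tiny sketch of one common region-filling strategy, a boustrophedon (back-and-forth) sweep over an obstacle-free rectangle. The lane width stands in for the mower's cutting width and is assumed known; the paper's strategies for partly known or unknown regions are not reproduced.

```python
def boustrophedon(width, height, tool_width):
    """Back-and-forth coverage waypoints for an obstacle-free rectangular region.

    Returns a list of (x, y) waypoints that sweep the region in lanes of
    width `tool_width`.
    """
    waypoints = []
    x, going_up = tool_width / 2.0, True
    while x < width:
        ys = (0.0, height) if going_up else (height, 0.0)
        waypoints.append((x, ys[0]))
        waypoints.append((x, ys[1]))
        x += tool_width
        going_up = not going_up
    return waypoints

print(boustrophedon(2.0, 3.0, 0.5))
```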
Arash: A social robot buddy to support children with cancer in a hospital environment.
Meghdari, Ali; Shariati, Azadeh; Alemi, Minoo; Vossoughi, Gholamreza R; Eydi, Abdollah; Ahmadi, Ehsan; Mozafari, Behrad; Amoozandeh Nobaveh, Ali; Tahami, Reza
2018-06-01
This article presents the thorough design procedure, specifications, and performance of a mobile social robot friend, Arash, for the educational and therapeutic involvement of children with cancer based on their interests and needs. Our research focuses on employing Arash in a pediatric hospital environment to entertain, assist, and educate children with cancer who suffer from physical pain caused by both the disease and its treatment. Since cancer treatment causes emotional distress, which can reduce the efficiency of medications, using social robots to interact with children with cancer in a hospital environment could decrease this distress, thereby improving the effectiveness of their treatment. Arash is a 15-degree-of-freedom, low-cost humanoid mobile robot buddy, carefully designed with appropriate measures and developed to interact with children aged 5-12 years. The robot has five physical subsystems: the head, arms, torso, waist, and mobile platform. The robot's final appearance is a significant novel concept, selected based on a survey taken from 50 children with chronic diseases at three pediatric hospitals in Tehran, Iran. Based on these measures and preferences, Arash was designed, built, improved, and enhanced to operate successfully in pediatric cancer hospitals. Two experiments were devised to evaluate the children's level of acceptance of and involvement with the robot, assess their feelings about it, and measure how closely the robot matched the favored conceptual sketch. Both experiments were conducted in the form of storytelling and appearance/performance evaluations. The obtained results confirm high engagement and interest of pediatric cancer patients with the constructed robot.
Robot navigation research using the HERMIES mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, D.L.
1989-01-01
In recent years robot navigation has attracted much attention from researchers around the world. Not only are theoretical studies being simulated on sophisticated computers, but many mobile robots are now used as test vehicles for these theoretical studies. Various algorithms have been perfected for navigation in a known, static environment, but navigation in an unknown and dynamic environment poses a much more challenging problem. Many different methodologies have been developed for autonomous robot navigation, but each is usually restricted to a particular type of environment. One important research focus of the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory is autonomous navigation in unknown and dynamic environments using the series of HERMIES mobile robots. The research uses an expert system for high-level planning interfaced with C-coded routines for implementing the plans and for quick processing of data requested by the expert system. With this approach, navigation is not restricted to one methodology, since the expert system can activate the rule module best suited for the current situation. Rule modules can be added to the rule base as they are developed and tested. Modules are being developed or enhanced for navigating from a map, searching for a target, exploring, artificial potential-field navigation, navigation using edge detection, etc. This paper reports on the various rule modules and methods of navigation in use, or under development, at CESAR, using the HERMIES-IIB robot as a testbed. 13 refs., 5 figs., 1 tab.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is exploration of an unknown environment with the goal of finding a target(s) at an unknown location(s) using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS weighting factor to guide robots in locating target(s) in high-risk environments. The approach was developed and analyzed for multiple-robot, single- and multiple-target search. The approach was further enhanced for multi-robot, multi-target search in noisy environments. The experimental results demonstrated how the availability of a radio frequency signal can significantly affect robot search time to reach a target.
Trajectory planning and optimal tracking for an industrial mobile robot
NASA Astrophysics Data System (ADS)
Hu, Huosheng; Brady, J. Michael; Probert, Penelope J.
1994-02-01
This paper introduces a unified approach to trajectory planning and tracking for an industrial mobile robot subject to non-holonomic constraints. We show (1) how a smooth trajectory is generated that takes into account the constraints from the dynamic environment and the robot kinematics; and (2) how a general predictive controller works to provide optimal tracking capability for nonlinear systems. The tracking performance of the proposed guidance system is analyzed by simulation.
Terrain interaction with the quarter scale beam walker
NASA Technical Reports Server (NTRS)
Chun, Wendell H.; Price, S.; Spiessbach, A.
1990-01-01
Frame walkers are a class of mobile robots that are robust and capable mobility platforms. Variations of the frame walker robot are in commercial use today. Komatsu Ltd. of Japan developed the Remotely Controlled Underwater Surveyor (ReCUS) and Normed Shipyards of France developed the Marine Robot (RM3). Both applications of the frame walker concept satisfied robotic mobility requirements that could not be met by a wheeled or tracked design. One vehicle design concept that falls within this class of mobile robots is the walking beam. A one-quarter scale prototype of the walking beam was built by Martin Marietta to evaluate the potential merits of utilizing the vehicle as a planetary rover. The initial phase of prototype rover testing was structured to evaluate the mobility performance aspects of the vehicle. Performance parameters such as vehicle power, speed, and attitude control were evaluated as a function of the environment in which the prototype vehicle was tested. Subsequent testing phases will address the integrated performance of the vehicle and a local navigation system.
Terrain Interaction With The Quarter Scale Beam Walker
NASA Astrophysics Data System (ADS)
Chun, Wendell H.; Price, R. S.; Spiessbach, Andrew J.
1990-03-01
Frame walkers are a class of mobile robots that are robust and capable mobility platforms. Variations of the frame walker robot are in commercial use today. Komatsu Ltd. of Japan developed the Remotely Controlled Underwater Surveyor (ReCUS) and Normed Shipyards of France developed the Marine Robot (RM3). Both applications of the frame walker concept satisfied robotic mobility requirements that could not be met by a wheeled or tracked design. One vehicle design concept that falls within this class of mobile robots is the walking beam. A one-quarter scale prototype of the walking beam was built by Martin Marietta to evaluate the potential merits of utilizing the vehicle as a planetary rover. The initial phase of prototype rover testing was structured to evaluate the mobility performance aspects of the vehicle. Performance parameters such as vehicle power, speed, and attitude control were evaluated as a function of the environment in which the prototype vehicle was tested. Subsequent testing phases will address the integrated performance of the vehicle and a local navigation system.
Classification of Odours for Mobile Robots Using an Ensemble of Linear Classifiers
NASA Astrophysics Data System (ADS)
Trincavelli, Marco; Coradeschi, Silvia; Loutfi, Amy
2009-05-01
This paper investigates the classification of odours using an electronic nose mounted on a mobile robot. The samples are collected as the robot explores the environment. Under such conditions, the sensor response differs from typical three phase sampling processes. In this paper, we focus particularly on the classification problem and how it is influenced by the movement of the robot. To cope with these influences, an algorithm consisting of an ensemble of classifiers is presented. Experimental results show that this algorithm increases classification performance compared to other traditional classification methods.
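As a rough illustration of the ensemble-of-linear-classifiers idea, the sketch below trains several linear (hinge-loss) classifiers on bootstrap samples of windowed e-nose features and combines them by majority vote; the class structure, bootstrap scheme and integer labels are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class OdourEnsemble:
    """Ensemble of linear classifiers voting on e-nose feature windows (sketch).

    X is an (N, D) array of windowed sensor features, y an (N,) array of
    integer class labels (assumed encoding).
    """
    def __init__(self, n_members=5, random_state=0):
        self.members = [SGDClassifier(loss="hinge", random_state=random_state + i)
                        for i in range(n_members)]

    def fit(self, X, y):
        # Train each member on a bootstrap sample of the windowed features.
        rng = np.random.default_rng(0)
        for clf in self.members:
            idx = rng.integers(0, len(X), size=len(X))
            clf.fit(X[idx], y[idx])
        return self

    def predict(self, X):
        votes = np.stack([clf.predict(X) for clf in self.members])
        # Majority vote across ensemble members (labels must be non-negative ints).
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```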
NASA Astrophysics Data System (ADS)
Girach, Khalid; Bouazza-Marouf, K.; Kerr, David; Hewit, Jim
1994-11-01
The paper describes the investigations carried out to implement a line-of-sight control and communication link for a mobile robot vehicle for use in structured, semi-hazardous nuclear environments. Line-of-sight free-space optical laser communication links for remote teleoperation have important applications in hazardous environments. They have certain advantages over radio/microwave links and umbilical control, such as greater protection against the generation of, and susceptibility to, electromagnetic fields. The cable-less environment provides increased integrity and mechanical freedom to the mobile robot. However, to maintain the communication link, continuous pointing and tracking is required between the base station and the mobile vehicle. This paper presents a novel two-ended optical tracking system utilizing the communication laser beams and photodetectors. The mobile robot is a six-wheel-drive vehicle with a manipulator arm which can operate on a variety of terrains. The operator obtains visual feedback from cameras placed on the vehicle. From this information, the speed and direction of the vehicle can be controlled from a joystick panel. We describe the investigations carried out for the communication of analogue video and digital data signals over the laser link for speed and direction control.
Gas Source Localization via Behaviour Based Mobile Robot and Weighted Arithmetic Mean
NASA Astrophysics Data System (ADS)
Yeon, Ahmad Shakaff Ali; Kamarudin, Kamarulzaman; Visvanathan, Retnam; Mamduh Syed Zakaria, Syed Muhammad; Zakaria, Ammar; Munirah Kamarudin, Latifah
2018-03-01
This work is concerned with the localization of a gas source in a dynamic indoor environment using a single mobile robot system. Algorithms such as Braitenberg, Zig-Zag and the combination of the two were implemented on the mobile robot as gas plume searching and tracing behaviours. To calculate the gas source location, a weighted arithmetic mean strategy was used. All experiments were done on an experimental testbed consisting of a large gas sensor array (LGSA) to monitor real-time gas concentration within the testbed. Ethanol gas was released within the testbed and the source location was marked using a pattern that can be tracked by a pattern tracking system. A pattern template was also mounted on the mobile robot to track its trajectory. Measurements taken by the mobile robot and the LGSA were then compared to verify the experiments. A combined total of 36.5 hours of real-time experimental runs was completed, and typical results from these experiments are presented in this paper. From the results, we obtained gas source localization errors between 0.4 m and 1.2 m from the real source location.
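The weighted arithmetic mean strategy lends itself to a very small sketch: each measurement position is weighted by the gas concentration read there, which is an assumed (but natural) choice of weight since the paper does not spell out its weighting.

```python
import numpy as np

def estimate_source_position(positions, concentrations):
    """Weighted arithmetic mean of measurement positions, weighted by gas concentration.

    positions      : (N, 2) array of robot positions where readings were taken
    concentrations : (N,) array of corresponding gas-concentration readings
    """
    w = np.asarray(concentrations, dtype=float)
    w = np.clip(w, 0.0, None)           # negative sensor noise contributes nothing
    return (w[:, None] * np.asarray(positions, dtype=float)).sum(axis=0) / w.sum()
```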
Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D
2006-01-01
The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.
Constrained motion model of mobile robots and its applications.
Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong
2009-06-01
Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with the fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
Ego-location and situational awareness in semistructured environments
NASA Astrophysics Data System (ADS)
Goodsell, Thomas G.; Snorrason, Magnus S.; Stevens, Mark R.; Stube, Brian; McBride, Jonah
2003-09-01
The success of any potential application for mobile robots depends largely on the specific environment where the application takes place. Practical applications are rarely found in highly structured environments, but unstructured environments (such as natural terrain) pose major challenges to any mobile robot. We believe that semi-structured environments-such as parking lots-provide a good opportunity for successful mobile robot applications. Parking lots tend to be flat and smooth, and cars can be uniquely identified by their license plates. Our scenario is a parking lot where only known vehicles are supposed to park. The robot looks for vehicles that do not belong in the parking lot. It checks both license plates and vehicle types, in case the plate is stolen from an approved vehicle. It operates autonomously, but reports back to a guard who verifies its performance. Our interest is in developing the robot's vision system, which we call Scene Estimation & Situational Awareness Mapping Engine (SESAME). In this paper, we present initial results from the development of two SESAME subsystems, the ego-location and license plate detection systems. While their ultimate goals are obviously quite different, our design demonstrates that by sharing intermediate results, both tasks can be significantly simplified. The inspiration for this design approach comes from the basic tenets of Situational Awareness (SA), where the benefits of holistic perception are clearly demonstrated over the more typical designs that attempt to solve each sensing/perception problem in isolation.
A fuzzy logic controller for an autonomous mobile robot
NASA Technical Reports Server (NTRS)
Yen, John; Pfluger, Nathan
1993-01-01
The ability of a mobile robot system to plan and move intelligently in a dynamic environment is needed if robots are to be useful in areas other than controlled environments. An example of a use for this system is to control an autonomous mobile robot in a space station, or another isolated area where it is hard or impossible for human life to exist for long periods of time (e.g., Mars). The system would allow the robot to be programmed to carry out the duties normally accomplished by a human being. Some of the duties that could be accomplished include operating instruments, transporting objects, and maintenance of the environment. The main focus of our early work has been on developing a fuzzy controller that takes a path and adapts it to a given environment. The robot only uses information gathered from the sensors, but retains the ability to avoid dynamically placed obstacles near and along the path. Our fuzzy logic controller is based on the following algorithm: (1) determine the desired direction of travel; (2) determine the allowed direction of travel; and (3) combine the desired and allowed directions in order to determine a direction that is both desired and allowed. The desired direction of travel is determined by projecting ahead to a point along the path that is closer to the goal. This gives a local direction of travel for the robot and helps to avoid obstacles.
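A minimal sketch of the desired/allowed/combine rule is given below, assuming triangular memberships over a discretized set of headings and a fuzzy AND implemented as a minimum; the membership shapes and widths are illustrative choices, not taken from the paper.

```python
import numpy as np

def choose_heading(goal_bearing, obstacle_bearings, n_sectors=72,
                   goal_width=30.0, obstacle_width=25.0):
    """Pick a heading that is both desired (near the goal bearing) and allowed
    (far from sensed obstacles). Angles are in degrees."""
    headings = np.linspace(-180, 180, n_sectors, endpoint=False)

    def ang_diff(a, b):
        return np.abs((a - b + 180) % 360 - 180)

    # Desired: triangular membership centred on the goal bearing.
    desired = np.clip(1 - ang_diff(headings, goal_bearing) / goal_width, 0, 1)
    # Allowed: start fully allowed, carve out a notch around each obstacle bearing.
    allowed = np.ones_like(headings)
    for ob in obstacle_bearings:
        allowed = np.minimum(allowed,
                             np.clip(ang_diff(headings, ob) / obstacle_width, 0, 1))
    combined = np.minimum(desired, allowed)   # fuzzy AND of the two sets
    return headings[np.argmax(combined)]
```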
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry at any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from the measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
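The core mapping into straight-line parameter space can be sketched as a standard Hough accumulator over a range scan; the discretization parameters below are assumptions.

```python
import numpy as np

def hough_from_scan(ranges, angles, n_theta=180, n_rho=200, rho_max=10.0):
    """Map a 2-D range scan into a (theta, rho) straight-line parameter space.

    ranges, angles : arrays of polar range-scan samples taken by the robot
    Returns the Hough accumulator plus its axes; peaks correspond to walls/edges.
    """
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)      # rho for every theta
        idx = np.clip(np.digitize(rho, rhos) - 1, 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rhos
```

Because a rotation of the robot shifts line parameters along the theta axis while isotropic scaling only rescales rho, such an accumulator lets rotation, scaling and translation be estimated separately, as the abstract describes.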
Positional estimation techniques for an autonomous mobile robot
NASA Technical Reports Server (NTRS)
Nandhakumar, N.; Aggarwal, J. K.
1990-01-01
Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
High-Performance 3D Articulated Robot Display
NASA Technical Reports Server (NTRS)
Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy
2011-01-01
In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
Mapping of unknown industrial plant using ROS-based navigation mobile robot
NASA Astrophysics Data System (ADS)
Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.
2017-10-01
This research examines how humans work with a teleoperated unmanned mobile robot to inspect an industrial plant area, producing a 2D/3D map for further critical evaluation. The experiment focuses on two parts: the way the human and robot carry out remote interactions using a robust method, and the way the robot perceives the surrounding environment as a 2D/3D map. ROS (Robot Operating System) was utilized as a tool in the development and implementation during the research, providing a robust data communication method in the form of messages and topics. RGBD SLAM performs the visual mapping function to construct the 2D/3D map using a Kinect sensor. The results showed that the teleoperated mobile robot system successfully extends the human perspective for remote surveillance in a large industrial plant area. It was concluded that the proposed work is a robust solution for large-scale mapping within an unknown building.
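Since the paper's remote interaction is built on ROS messages and topics, a minimal rospy sketch of a velocity-command publisher is shown below; the /cmd_vel topic, publishing rate and velocity values are common ROS conventions assumed for illustration, not the authors' exact interface.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

def teleop_publisher():
    """Publish velocity commands to a teleoperated mobile base (minimal sketch)."""
    rospy.init_node('teleop_sketch')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)                # 10 Hz command stream
    cmd = Twist()
    cmd.linear.x = 0.2                   # slow forward crawl for inspection
    cmd.angular.z = 0.0
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    try:
        teleop_publisher()
    except rospy.ROSInterruptException:
        pass
```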
Simulation tools for robotics research and assessment
NASA Astrophysics Data System (ADS)
Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.
2016-05-01
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide the necessary simulation fidelity for accuracy. However, the Perception domain remains the most problematic for adequate simulation performance due to the often cartoon nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real-time.
Real-time obstacle avoidance using harmonic potential functions
NASA Technical Reports Server (NTRS)
Kim, Jin-Oh; Khosla, Pradeep K.
1992-01-01
This paper presents a new formulation of the artificial potential approach to the obstacle avoidance problem for a mobile robot or a manipulator in a known environment. Previous formulations of artificial potentials for obstacle avoidance have exhibited local minima in a cluttered environment. To build an artificial potential field, harmonic functions that completely eliminate local minima even for a cluttered environment are used. The panel method is employed to represent arbitrarily shaped obstacles and to derive the potential over the whole space. Based on this potential function, an elegant control strategy is proposed for the real-time control of a robot. The harmonic potential, the panel method, and the control strategy are tested with a bar-shaped mobile robot and a three-degree-of-freedom planar redundant manipulator.
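A minimal sketch of a harmonic potential is given below, using point sources on obstacle boundaries and a sink at the goal as a simplification of the panel method described in the paper; the gains and the finite-difference gradient are assumptions.

```python
import numpy as np

def harmonic_potential(q, goal, obstacle_points, k_goal=1.0, k_obs=0.2):
    """Harmonic potential from a sink at the goal and point sources on obstacle
    boundaries (a point-source simplification of the panel method)."""
    q = np.asarray(q, dtype=float)
    phi = k_goal * np.log(np.linalg.norm(q - goal) + 1e-9)          # sink at goal
    for p in obstacle_points:
        phi += -k_obs * np.log(np.linalg.norm(q - p) + 1e-9)        # sources repel
    return phi

def descent_direction(q, goal, obstacle_points, h=1e-4):
    """Numerical negative gradient of the harmonic potential (steering direction)."""
    g = np.zeros(2)
    for i in range(2):
        dq = np.zeros(2); dq[i] = h
        g[i] = (harmonic_potential(q + dq, goal, obstacle_points)
                - harmonic_potential(q - dq, goal, obstacle_points)) / (2 * h)
    return -g / (np.linalg.norm(g) + 1e-12)
```

Each logarithmic term is harmonic in 2-D away from its singularity, which is what removes the spurious local minima that plague additive quadratic potentials in cluttered environments.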
Mi, Jian; Takahashi, Yasutake
2016-01-01
Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for the accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in the RFID system configuration requiring much fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performances of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor based on MCL in simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot. PMID:27483279
Mi, Jian; Takahashi, Yasutake
2016-07-29
Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for the accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in the RFID system configuration requiring much fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performances of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor based on MCL in simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot.
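The particle-weighting step of RFID-based MCL can be sketched as below, assuming a disc-shaped read range and fixed hit/miss probabilities for the tag-detection likelihood model; these values and data structures are illustrative, not the authors' calibrated model.

```python
import numpy as np

def mcl_update(particles, weights, detected_tags, tag_positions,
               read_range=0.35, p_hit=0.9, p_miss=0.1):
    """Weight particles by the likelihood of the observed RFID tag detections.

    particles     : list of (x, y, theta) particle poses
    weights       : (N,) array of particle weights (updated in place)
    detected_tags : set of tag IDs read in the current step
    tag_positions : dict mapping tag ID -> (x, y) floor position
    """
    for i, (x, y, _theta) in enumerate(particles):
        lik = 1.0
        for tag_id, tag_xy in tag_positions.items():
            in_range = np.hypot(x - tag_xy[0], y - tag_xy[1]) <= read_range
            observed = tag_id in detected_tags
            if in_range:
                lik *= p_hit if observed else (1.0 - p_hit)
            else:
                lik *= p_miss if observed else (1.0 - p_miss)
        weights[i] *= lik
    weights /= max(weights.sum(), 1e-300)   # renormalise, guarding against underflow
    return weights
```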
Optimal motion planning for collision avoidance of mobile robots in non-stationary environments
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An optimal control formulation of the problem of collision avoidance of mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the minimum distance between the robot and the object is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. A perturbation control type of approach is used to update the optimal plan. Simulation results verify the value of the proposed strategy.
NASA Astrophysics Data System (ADS)
Dağlarli, Evren; Temeltaş, Hakan
2008-04-01
In this study, behavior generation and self-learning paradigms are investigated for real-time applications of multi-goal mobile robot tasks. The method is capable of generating new behaviors and combines them in order to achieve multi-goal tasks. The proposed method is composed of three layers: a Behavior Generating Module, a Coordination Level and an Emotion-Motivation Level. The last two levels use hidden Markov models to manage the dynamical structure of behaviors. The kinematic and dynamic models of the mobile robot with non-holonomic constraints are considered in the behavior-based control architecture. The proposed method is tested on a four-wheel-driven and four-wheel-steered mobile robot with constraints in a simulation environment, and results are obtained successfully.
Ando, Noriyasu; Kanzaki, Ryohei
2017-09-01
The use of mobile robots is an effective method of validating sensory-motor models of animals in a real environment. The well-identified insect sensory-motor systems have been the major targets for modeling. Furthermore, mobile robots implemented with such insect models attract engineers who aim to draw advantages from organisms. However, directly comparing the robots with real insects is still difficult, even if we successfully model the biological systems, because of the physical differences between them. We developed a hybrid robot to bridge the gap. This hybrid robot is an insect-controlled robot, in which a tethered male silkmoth (Bombyx mori) drives the robot in order to localize an odor source. This robot has the following three advantages: 1) from a biomimetic perspective, the robot enables us to evaluate the potential performance of future insect-mimetic robots; 2) from a biological perspective, the robot enables us to manipulate the closed loop of an onboard insect for further understanding of its sensory-motor system; and 3) the robot enables comparison with insect models as a reference biological system. In this paper, we review recent work regarding insect-controlled robots and discuss the significance for both engineering and biology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Development of Robotics Applications in a Solid Propellant Mixing Laboratory
1988-06-01
Implementation of robotic hardware and software into a laboratory environment requires a carefully structured series of phases which examines, in ... strategy. The general methodology utilized in this project is discussed in Appendix A. The proposed laboratory robotics development program was structured ... accessibility, potential modifications, safety precautions; e) Robot Transport: slider mechanisms, linear tracks, gantry configuration, mobility; f) ...
NASA Astrophysics Data System (ADS)
Watanabe, Tatsuhito; Katsura, Seiichiro
A person operating a mobile robot in a remote environment receives realistic visual feedback about the condition of the road on which the robot is moving. Categorization of the road condition is necessary to evaluate the conditions for safe and comfortable driving. For this purpose, the mobile robot should be capable of recognizing and classifying the condition of the road surfaces. This paper proposes a method for recognizing the type of road surface on the basis of the friction between the mobile robot and the road surface. This friction is estimated by a disturbance observer, and a support vector machine is used to classify the surfaces. The support vector machine identifies the type of road surface using a feature vector, which is determined from the arithmetic average and variance of the torque values. Further, these feature vectors are mapped onto a higher-dimensional space by using a kernel function. The validity of the proposed method is confirmed by experimental results.
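A minimal sketch of the classification stage is shown below: the mean and variance of windowed torque estimates form the feature vector, and an RBF-kernel SVM separates surface classes. The training data here are random placeholders purely to make the sketch runnable; the real features would come from the disturbance observer's torque estimates.

```python
import numpy as np
from sklearn.svm import SVC

def torque_features(torque_windows):
    """Mean and variance of estimated friction torque per window (feature vector)."""
    return np.column_stack([torque_windows.mean(axis=1),
                            torque_windows.var(axis=1)])

# Hypothetical training data: windows of torque estimates labelled by surface type.
X_train = torque_features(np.random.randn(60, 200))        # placeholder values
y_train = np.repeat([0, 1, 2], 20)                          # e.g. tile, carpet, gravel

clf = SVC(kernel="rbf")            # kernel maps features to a higher-dimensional space
clf.fit(X_train, y_train)
surface = clf.predict(torque_features(np.random.randn(1, 200)))
```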
Path optimisation of a mobile robot using an artificial neural network controller
NASA Astrophysics Data System (ADS)
Singh, M. K.; Parhi, D. R.
2011-01-01
This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances with respect to the robot's position, and the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results, which are satisfactory and show very good agreement. The training of the neural nets and the control performance analysis have been carried out in a real experimental setup.
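As a rough illustration, the sketch below trains a small feed-forward network with the four described inputs (left, right and front obstacle distances plus target angle) to output a steering angle; the toy training rule, layer sizes and solver settings are assumptions, not the authors' trained network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training pairs would normally come from recorded navigation examples; random
# placeholders and a toy steering rule are used here just to make the sketch run.
X = np.random.uniform([0, 0, 0, -np.pi], [5, 5, 5, np.pi], size=(500, 4))
y = 0.5 * X[:, 3] + 0.1 * (X[:, 0] - X[:, 1])      # stand-in target steering angles

controller = MLPRegressor(hidden_layer_sizes=(20, 20),   # two hidden layers
                          activation="tanh",
                          solver="sgd",                   # gradient/back-propagation style
                          learning_rate_init=0.01,
                          max_iter=2000)
controller.fit(X, y)

steering_angle = controller.predict([[1.2, 3.0, 2.5, 0.4]])[0]
```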
Counter tunnel exploration, mapping, and localization with an unmanned ground vehicle
NASA Astrophysics Data System (ADS)
Larson, Jacoby; Okorn, Brian; Pastore, Tracy; Hooper, David; Edwards, Jim
2014-06-01
Covert, cross-border tunnels are a security vulnerability that enables people and contraband to illegally enter the United States. All of these tunnels to-date have been constructed for the purpose of drug smuggling, but they may also be used to support terrorist activity. Past robotic tunnel exploration efforts have had limited success in aiding law enforcement to explore and map the suspect cross-border tunnels. These efforts have made use of adapted explosive ordnance disposal (EOD) or pipe inspection robotic systems that are not ideally suited to the cross-border tunnel environment. The Counter Tunnel project was sponsored by the Office of Secretary of Defense (OSD) Joint Ground Robotics Enterprise (JGRE) to develop a prototype robotic system for counter-tunnel operations, focusing on exploration, mapping, and characterization of tunnels. The purpose of this system is to provide a safe and effective solution for three-dimensional (3D) localization, mapping, and characterization of a tunnel environment. The system is composed of the robotic mobility platform, the mapping sensor payload, and the delivery apparatus. The system is able to deploy and retrieve the robotic mobility platform through a 20-cm-diameter borehole into the tunnel. This requirement posed many challenges in order to design and package the sensor and robotic system to fit through this narrow opening and be able to perform the mission. This paper provides a short description of a few aspects of the Counter Tunnel system such as mobility, perception, and localization, which were developed to meet the unique challenges required to access, explore, and map tunnel environments.
Application requirements for Robotic Nursing Assistants in hospital environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Doelling, Kris; Lundberg, Cody L.; McNair, Mike; Shin, Jeongsik; Popa, Dan
2016-05-01
In this paper we report on analysis toward identifying design requirements for an Adaptive Robotic Nursing Assistant (ARNA). Specifically, the paper focuses on application requirements for ARNA, envisioned as a mobile assistive robot that can navigate hospital environments to perform chores in roles such as patient sitter and patient walker. The role of a sitter is primarily related to patient observation from a distance, and fetching objects at the patient's request, while a walker provides physical assistance for ambulation and rehabilitation. The robot will be expected to not only understand nurse and patient intent but also close the decision loop by automating several routine tasks. As a result, the robot will be equipped with sensors such as distributed pressure sensitive skins, 3D range sensors, and so on. Modular sensor and actuator hardware configured in the form of several multi-degree-of-freedom manipulators, and a mobile base are expected to be deployed in reconfigurable platforms for physical assistance tasks. Furthermore, adaptive human-machine interfaces are expected to play a key role, as they directly impact the ability of robots to assist nurses in a dynamic and unstructured environment. This paper discusses required tasks for the ARNA robot, as well as sensors and software infrastructure to carry out those tasks in the aspects of technical resource availability, gaps, and needed experimental studies.
A global approach to kinematic path planning to robots with holonomic and nonholonomic constraints
NASA Technical Reports Server (NTRS)
Divelbiss, Adam; Seereeram, Sanjeev; Wen, John T.
1993-01-01
Robots in applications may be subject to holonomic or nonholonomic constraints. Examples of holonomic constraints include a manipulator constrained through the contact with the environment, e.g., inserting a part, turning a crank, etc., and multiple manipulators constrained through a common payload. Examples of nonholonomic constraints include no-slip constraints on mobile robot wheels, local normal rotation constraints for soft finger and rolling contacts in grasping, and conservation of angular momentum of in-orbit space robots. The above examples all involve equality constraints; in applications, there are usually additional inequality constraints such as robot joint limits, self collision and environment collision avoidance constraints, steering angle constraints in mobile robots, etc. The problem of finding a kinematically feasible path that satisfies a given set of holonomic and nonholonomic constraints, of both equality and inequality types is addressed. The path planning problem is first posed as a finite time nonlinear control problem. This problem is subsequently transformed to a static root finding problem in an augmented space which can then be iteratively solved. The algorithm has shown promising results in planning feasible paths for redundant arms satisfying Cartesian path following and goal endpoint specifications, and mobile vehicles with multiple trailers. In contrast to local approaches, this algorithm is less prone to problems such as singularities and local minima.
Controlling multiple security robots in a warehouse environment
NASA Technical Reports Server (NTRS)
Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.
1994-01-01
The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.
Mobile Robot Designed with Autonomous Navigation System
NASA Astrophysics Data System (ADS)
An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin
2017-10-01
With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place ever higher demands on them; one such requirement is that a robot be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the floor and automatically find its charging station; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a new type of robot navigation scheme, SLAM, which can build a map of a completely unfamiliar environment while simultaneously locating the robot's own position, so as to achieve autonomous navigation.
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
A Remote Lab for Experiments with a Team of Mobile Robots
Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio
2014-01-01
In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board of the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab. PMID:25192316
A remote lab for experiments with a team of mobile robots.
Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio
2014-09-04
In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board of the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab.
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach in environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path between multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
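A minimal sketch of the potential-field steering step is given below with fixed attractive/repulsive gains; in the paper those parameters are derived by a pseudo-bacterial (evolutionary) algorithm, which is not reproduced here.

```python
import numpy as np

def potential_field_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, d0=1.0):
    """One steering step of an artificial potential field: attraction to the goal
    plus repulsion from detected obstacles within influence distance d0.
    Gains are fixed assumed values standing in for the evolutionary-tuned ones."""
    q, goal = np.asarray(q, float), np.asarray(goal, float)
    force = k_att * (goal - q)                          # attraction to the goal
    for obs in obstacles:
        diff = q - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if d < d0:                                      # repulsion only when close
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return force / (np.linalg.norm(force) + 1e-12)      # unit step direction
```

In the paper, the obstacle positions feeding such a step would come from the template-matching stage applied to the single camera image.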
A Tree Based Self-routing Scheme for Mobility Support in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Kim, Young-Duk; Yang, Yeon-Mo; Kang, Won-Seok; Kim, Jin-Wook; An, Jinung
Recently, WSNs (Wireless Sensor Networks) with mobile robots have become a growing technology that offers efficient communication services for anytime, anywhere applications. However, a tiny sensor node has very limited network resources due to its low battery power, low data rate, node mobility, and the channel interference constraint between neighbors. Thus, in this paper, we propose a tree-based self-routing protocol for autonomous mobile robots based on beacon mode, implemented in real test-bed environments. The proposed scheme offers beacon-based real-time scheduling for a reliable association process between parent and child nodes. In addition, it supports a smooth handover procedure by reducing the flooding overhead of control packets. Through performance evaluation using a real test-bed system and simulation, we illustrate that our proposed scheme demonstrates promising performance for wireless sensor networks with mobile robots.
Novel graphical environment for virtual and real-world operations of tracked mobile manipulators
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.
1993-08-01
A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
NASA Astrophysics Data System (ADS)
Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki
2011-12-01
This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.
A salient region detection model combining background distribution measure for indoor robots.
Li, Na; Xu, Hui; Wang, Zhenhua; Sun, Lining; Chen, Guodong
2017-01-01
Vision systems play an important role in the field of indoor robots. Saliency detection methods, which capture regions perceived as important, are used to improve the performance of the visual perception system. Most state-of-the-art methods for saliency detection, while performing outstandingly on natural images, cannot work in complicated indoor environments. Therefore, we propose a new method comprised of graph-based RGB-D segmentation, a primary saliency measure, a background distribution measure, and their combination. In addition, region roundness is proposed to describe the compactness of a region in order to measure background distribution more robustly. To validate the proposed approach, eleven influential methods are compared on the DSD and ECSSD datasets. Moreover, we build a mobile robot platform for application in an actual environment, and design three different kinds of experimental conditions: different viewpoints, illumination variations and partial occlusions. Experimental results demonstrate that our model outperforms existing methods and is useful for indoor mobile robots.
Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots
Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro
2013-01-01
This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability is observed during a long walk test in both indoors and outdoors environments. PMID:24152933
Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.
Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro
2013-10-21
This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability is observed during a long walk test in both indoors and outdoors environments.
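The event-based fusion idea reduces to a Kalman cycle in which the global correction is applied only when the estimation error covariance exceeds a predefined limit; the sketch below assumes generic linear models and a trace-based test, which are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def event_based_fusion(x, P, F, Q, H, R, z_global, cov_limit):
    """One cycle of event-based sensor fusion: always predict with the IMU/odometry
    model, request and apply the global measurement only when uncertainty grows
    too large. All matrices and the covariance limit are assumed inputs."""
    # Prediction step (IMU / odometry model).
    x = F @ x
    P = F @ P @ F.T + Q

    # Event test: is the estimation error covariance above the predefined limit?
    if np.trace(P) > cov_limit and z_global is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (z_global - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Skipping the correction when the covariance is small is what saves execution time and bandwidth on the camera or GPS link, as the abstract describes.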
Robonaut Mobile Autonomy: Initial Experiments
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.
2006-01-01
A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Mobile Robotic Telepresence Solutions for the Education of Hospitalized Children.
Soares, Neelkamal; Kay, Jeffrey C; Craven, Geoff
2017-01-01
Hospitalization affects children's school attendance, resulting in poor academic and sociodevelopmental outcomes. The increasing ubiquity of mobile and tablet technology in educational and healthcare environments, and the growth of the mobile robotic telepresence (MRT) industry, offer opportunities for the use of MRT to connect hospitalized children to their school environments. This article describes an approach at one rural healthcare center in collaboration with local school districts with the aim of describing strategies and limitations of MRT use. Future research is needed on MRT implementation, from user experiences to operational strategies, and outcome metrics need to be developed to measure academic and socioemotional outcomes. By partnering with educational systems and using this technology, hospital information technology personnel can help hospitalized children engage with their school environments to maintain connections with peers and access academic instruction.
Scene analysis for a breadboard Mars robot functioning in an indoor environment
NASA Technical Reports Server (NTRS)
Levine, M. D.
1973-01-01
The problem of computer perception in an indoor laboratory environment containing rocks of various sizes is dealt with. The sensory data processing is required for the NASA/JPL breadboard mobile robot, a test system for an adaptive, variably-autonomous vehicle that will conduct scientific explorations on the surface of Mars. Scene analysis is discussed in terms of object segmentation followed by feature extraction, which results in a representation of the scene in the robot's world model.
How do walkers avoid a mobile robot crossing their way?
Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien
2017-01-01
Robots and humans have to share the same environment more and more often. To steer robots safely and conveniently among humans, it is necessary to understand how humans interact with them. This work focuses on collision avoidance between a human and a robot during locomotion. Bearing in mind previous results on human obstacle avoidance, as well as the main principles that guide collision avoidance strategies, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a mobile robot. Our results show differences in the strategy humans adopt to avoid a robot compared with avoiding another human. Humans prefer to give way to the robot even when they are likely to pass first at the beginning of the interaction. Copyright © 2016 Elsevier B.V. All rights reserved.
Adaptive Gait Control for a Quadruped Robot on 3D Path Planning
NASA Astrophysics Data System (ADS)
Igarashi, Hiroshi; Kakikura, Masayoshi
A legged walking robot is able not only to move on irregular terrain but also to change its posture. For example, the robot can pass under overhead obstacles by crouching. The purpose of our research is to realize efficient path planning with a quadruped robot; this mobility allows the path planning to be extended to three dimensions. However, several issues of the quadruped robot, namely instability, workspace limitation, deadlock, and slippage, complicate realizing such an application. In order to mitigate these issues and reinforce the mobility, a new static gait pattern for a quadruped robot, called TFG (Trajectory Following Gait), is proposed. The TFG aims to obtain high controllability like that of a wheeled robot. Additionally, the TFG allows the robot to change its posture during the walk. In this paper, experimental results show that the TFG mitigates these issues and is suitable for efficient locomotion in three-dimensional environments.
Google glass-based remote control of a mobile robot
NASA Astrophysics Data System (ADS)
Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe
2016-05-01
In this paper, we present an approach to remote control of a mobile robot via a Google Glass, a multi-function, compact wearable device. It provides a new human-machine interface (HMI) for controlling a robot without the need for a regular computer monitor, because the Google Glass micro projector is able to display live video of the robot's environment. To do so, we first develop a protocol to establish a Wi-Fi connection between Google Glass and a robot and then implement five types of robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are controlled by sliding and tapping the touchpad located on the right side of the temple. In order to demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
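The abstract does not give the actual Wi-Fi protocol, so the following is a hypothetical sketch of the Glass-side logic: touchpad gestures are mapped to the five behaviors listed above and sent as plain-text commands over a TCP socket. The gesture names, command strings, host, and port are all assumptions for illustration, not the authors' implementation.

import socket

# Hypothetical Glass-to-robot link: map touchpad gestures to the five behaviors
# named in the abstract and send them as plain-text commands over TCP.
GESTURE_TO_COMMAND = {
    "SWIPE_FORWARD": "MOVE_FORWARD",
    "SWIPE_LEFT": "TURN_LEFT",
    "SWIPE_RIGHT": "TURN_RIGHT",
    "TAP": "PAUSE",
    "SWIPE_DOWN": "MOVE_BACKWARD",
}

def send_command(gesture, host="192.168.1.10", port=5000):
    """Translate a gesture name into a command string and send it to the robot.
    Host and port are placeholder values."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is None:
        return None
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall((command + "\n").encode("ascii"))
    return command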
Global Coverage Measurement Planning Strategies for Mobile Robots Equipped with a Remote Gas Sensor
Arain, Muhammad Asif; Trincavelli, Marco; Cirillo, Marcello; Schaffernicht, Erik; Lilienthal, Achim J.
2015-01-01
The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions. PMID:25803707
Istepanian, R S H; Philip, N
2005-01-01
In this paper we describe some of the optimisation issues relevant to the requirements of high-throughput medical data and video streaming traffic in 3G wireless environments. In particular, we present a challenging 3G mobile healthcare application that demands high 3G medical data throughput. We also describe the 3G QoS requirements of the mObile Tele-Echography ultra-Light rObot system (OTELO), which is designed to provide seamless 3G connectivity for real-time ultrasound medical video streams and diagnosis between a remote site (the robotic and patient station) and an expert site (the specialists) that controls the robotic scanning operation and provides real-time diagnostic feedback over 3G wireless communication links.
Pyro: A Python-Based Versatile Programming Environment for Teaching Robotics
ERIC Educational Resources Information Center
Blank, Douglas; Kumar, Deepak; Meeden, Lisa; Yanco, Holly
2004-01-01
In this article we describe a programming framework called Pyro, which provides a set of abstractions that allows students to write platform-independent robot programs. This project is unique because of its focus on the pedagogical implications of teaching mobile robotics via a top-down approach. We describe the background of the project, its…
Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-10-29
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, which provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures.
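Covariance intersection is a standard rule for fusing two estimates whose cross-correlation is unknown, which is the situation when robots exchange tracks of the same pedestrian. A minimal sketch, with the weight chosen by a simple grid search (the authors' exact weighting scheme is not specified in the abstract):

import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=50):
    """Fuse two estimates with unknown cross-correlation via covariance
    intersection: P_fused^-1 = w*P1^-1 + (1-w)*P2^-1, with w chosen here by a
    grid search that minimizes the trace of the fused covariance."""
    best = None
    for w in np.linspace(0.01, 0.99, n_grid):
        info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (x, P)
    return best

# Example: two robots' position estimates of the same pedestrian.
x_fused, P_fused = covariance_intersection(
    np.array([1.0, 2.0]), np.diag([0.3, 0.1]),
    np.array([1.2, 1.9]), np.diag([0.1, 0.4]))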
An overview on real-time control schemes for wheeled mobile robot
NASA Astrophysics Data System (ADS)
Radzak, M. S. A.; Ali, M. A. H.; Sha’amri, S.; Azwan, A. R.
2018-04-01
The purpose of this paper is to review real-time motion control algorithms for wheeled mobile robots (WMRs) navigating in environments such as roads. A WMR needs a good controller to avoid collisions with any disturbance and to keep the tracking error at zero. The controllers are used with aiding sensors that measure the WMR's velocities, posture, and disturbances in order to estimate the torque to be applied to the wheels of the mobile robot. Four main categories of wheeled mobile robot control systems are found in the literature: kinematic-based controllers, dynamic-based controllers, artificial-intelligence-based control systems, and active force control. MATLAB/Simulink is the main software used to simulate and implement these control systems; its real-time toolbox is used to receive data from sensors and send data to actuators in the presence of disturbances, whereas other software such as C, C++, and Visual Basic is rarely used.
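As an illustration of the first category (kinematic-based controllers), the sketch below implements a widely used Kanayama-style kinematic tracking law for a differential-drive robot; it is a generic textbook example, not a controller taken from the reviewed paper, and the gains are arbitrary.

import numpy as np

def kinematic_tracking_control(pose, ref_pose, v_ref, w_ref,
                               kx=1.0, ky=5.0, ktheta=2.0):
    """Kanayama-style kinematic tracking law for a differential-drive robot.
    pose and ref_pose are (x, y, theta); returns (v, w) velocity commands.
    Gains kx, ky, ktheta are illustrative."""
    x, y, theta = pose
    xr, yr, thr = ref_pose
    # Tracking error expressed in the robot frame.
    dx, dy = xr - x, yr - y
    ex = np.cos(theta) * dx + np.sin(theta) * dy
    ey = -np.sin(theta) * dx + np.cos(theta) * dy
    etheta = np.arctan2(np.sin(thr - theta), np.cos(thr - theta))
    # Control law: feedforward reference velocities plus error feedback.
    v = v_ref * np.cos(etheta) + kx * ex
    w = w_ref + v_ref * (ky * ey + ktheta * np.sin(etheta))
    return v, w

# Example: robot slightly behind and to the right of the reference.
print(kinematic_tracking_control((0.0, -0.1, 0.0), (0.2, 0.0, 0.0), 0.5, 0.0))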
Robot path planning algorithm based on symbolic tags in dynamic environment
NASA Astrophysics Data System (ADS)
Vokhmintsev, A.; Timchenko, M.; Melnikov, A.; Kozko, A.; Makovetskii, A.
2017-09-01
The present work proposes a new heuristic algorithm for path planning of a mobile robot in an unknown dynamic space. The algorithm has theoretically proven estimates of computational complexity and has been validated on specific applied problems.
Vision robot with rotational camera for searching ID tags
NASA Astrophysics Data System (ADS)
Kimura, Nobutaka; Moriya, Toshio
2008-02-01
We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
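The abstract does not quote the derived formula, but the underlying geometry is standard: for a robot translating at speed v past a box surface at perpendicular distance d, the bearing to a point on that surface changes at rate v·d/(u² + d²), where u is the point's longitudinal offset. The sketch below illustrates that relation; it is an assumption-based reconstruction, not the authors' exact derivation.

import math

def pan_rate_to_cancel_motion(v, d, u=0.0):
    """Pan angular velocity (rad/s) needed to keep a point on the shelf face
    centred in the image while the robot translates.
    v: robot translational speed (m/s)
    d: perpendicular distance to the barcoded box surface (m)
    u: longitudinal offset of the point along the direction of travel (m)
    The bearing to the point changes at rate v*d/(u^2 + d^2); panning the
    camera at the same rate (in the opposite sense) cancels the image motion."""
    return v * d / (u * u + d * d)

# Example: 0.5 m/s past boxes 1.2 m away, point directly abeam.
omega = pan_rate_to_cancel_motion(0.5, 1.2)      # about 0.42 rad/s
print(math.degrees(omega), "deg/s")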
Intelligent lead: a novel HRI sensor for guide robots.
Cho, Keum-Bae; Lee, Beom-Hee
2012-01-01
This paper introduces a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a desired distance that allows the user to move freely. Therefore, it is necessary to acquire control commands and the user's position in real time. We suggest a new sensor fusion system to achieve this objective and call this sensor the "intelligent lead". The objective of the intelligent lead is to acquire a stable distance from the user to the robot, a speed-control volume, and a turn-control volume, even when the robot platform carrying the intelligent lead is shaken on uneven ground. In this paper we explain a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit), and their measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the mobile robot with the intelligent lead is capable of reaching its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can be used as a new HRI sensor that combines a joystick and a distance measure in mobile environments where the robot and the user move at the same time.
Visual terrain mapping for traversable path planning of mobile robots
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Amrani, Rachida; Tunstel, Edward W.
2004-10-01
In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
Functionalization of Tactile Sensation for Robot Based on Haptograph and Modal Decomposition
NASA Astrophysics Data System (ADS)
Yokokura, Yuki; Katsura, Seiichiro; Ohishi, Kiyoshi
In the real world, robots should be able to recognize the environment in order to be of help to humans. A video camera and a laser range finder are devices that can help robots recognize the environment. However, these devices cannot obtain tactile information from environments. Future human-assisting-robots should have the ability to recognize haptic signals, and a disturbance observer can possibly be used to provide the robot with this ability. In this study, a disturbance observer is employed in a mobile robot to functionalize the tactile sensation. This paper proposes a method that involves the use of haptograph and modal decomposition for the haptic recognition of road environments. The haptograph presents a graphic view of the tactile information. It is possible to classify road conditions intuitively. The robot controller is designed by considering the decoupled modal coordinate system, which consists of translational and rotational modes. Modal decomposition is performed by using a quarry matrix. Once the robot is provided with the ability to recognize tactile sensations, its usefulness to humans will increase.
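A minimal sketch of the modal decomposition idea for a two-wheel mobile robot, assuming the standard transformation into a translational (common) mode and a rotational (differential) mode; the paper's specific quarry-matrix construction is not reproduced here.

import numpy as np

# Decouple right/left wheel coordinates into a translational (common) mode and
# a rotational (differential) mode, so the controller and the haptic
# (disturbance-observer) signals can be treated per mode.
T = 0.5 * np.array([[1.0,  1.0],    # translational mode = (right + left)/2
                    [1.0, -1.0]])   # rotational mode    = (right - left)/2

def to_modal(wheel_values):
    """wheel_values = [right, left] (positions, velocities or estimated
    disturbance torques); returns [translational, rotational] modes."""
    return T @ np.asarray(wheel_values)

def to_wheel(modal_values):
    """Inverse transform back to wheel space."""
    return np.linalg.inv(T) @ np.asarray(modal_values)

# Example: equal disturbance on both wheels shows up purely in the
# translational mode; an asymmetric one also excites the rotational mode.
print(to_modal([0.8, 0.8]))   # -> [0.8, 0.0]
print(to_modal([0.8, 0.2]))   # -> [0.5, 0.3]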
Application of particle swarm optimization in path planning of mobile robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Cai, Feng; Wang, Ying
2017-08-01
In order to realize optimal path planning for a mobile robot in an unknown environment, a particle swarm optimization algorithm using path length as the fitness function is proposed. The location of the globally optimal particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particles to the target position. The process of moving to the target point is simulated in MATLAB R2014a. Compared with the standard particle swarm optimization algorithm, the simulation results show that this method can effectively avoid all obstacles and obtain the optimal path.
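A compact sketch of the approach as described, assuming particles encode a fixed number of via-points between start and goal and the fitness is path length plus an obstacle penalty; the obstacle model, swarm parameters, and penalty weight are illustrative assumptions rather than the authors' settings.

import numpy as np

rng = np.random.default_rng(0)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [(np.array([5.0, 5.0]), 1.5)]      # (centre, radius), illustrative
N_WAY, N_PART, ITERS = 3, 30, 200

def fitness(particle):
    """Path length through the via-points plus a penalty for entering obstacles."""
    pts = np.vstack([start, particle.reshape(N_WAY, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = 0.0
    for c, r in obstacles:
        d = np.linalg.norm(pts - c, axis=1)
        penalty += np.sum(np.maximum(0.0, r - d)) * 100.0
    return length + penalty

pos = rng.uniform(0, 10, (N_PART, N_WAY * 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best path via-points:", gbest.reshape(N_WAY, 2))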
Design of a high-mobility multi-terrain robot based on eccentric paddle mechanism.
Sun, Yi; Yang, Yang; Ma, Shugen; Pu, Huayan
Gaining high mobility on versatile terrains is a crucial target when designing a mobile robot for tasks such as search and rescue, scientific exploration, and environment monitoring. Inspired by the dexterous limb motion of animals, a novel form of locomotion was established in our previous study by proposing an eccentric paddle mechanism (ePaddle) that integrates paddling motion into a traditional wheeled mechanism. In this paper, prototypes of an ePaddle mechanism and an ePaddle-based quadruped robot are presented. Several locomotion modes, including wheeled rolling, legged crawling, legged race-walking, rotational paddling, oscillating paddling, and paddle-aided rolling, are experimentally verified on testbeds with the fabricated prototypes. Experimental results confirm that the paddle's motion is useful in all the locomotion modes.
A Petri-net coordination model for an intelligent mobile robot
NASA Technical Reports Server (NTRS)
Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.
1990-01-01
The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.
Map generation in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks
NASA Astrophysics Data System (ADS)
Nishihta, Sho; Maeyama, Shoichi; Watanebe, Keigo
2018-02-01
Recently, autonomous mobile robots that collect information at disaster sites have been developed. Since it is difficult to obtain maps of disaster sites in advance, robots capable of autonomous movement in unknown environments are required. To this end, the robots have to build maps while estimating their own location, which is known as the SLAM problem. In particular, an AUKF-SLAM that uses corners in the environment as point-type landmarks has been developed as a solution so far. However, when the robots move in an environment such as a corridor with few point-type features, the accuracy of the self-location estimated from these landmarks decreases, causing distortions in the map. In this research, we propose an AUKF-SLAM that uses walls in the environment as line segment-type landmarks. We demonstrate that the robot can generate maps in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks.
Safe motion planning for mobile agents: A model of reactive planning for multiple mobile agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimura, Kikuo.
1990-01-01
The problem of motion planning for multiple mobile agents is studied. Each planning agent independently plans its own action based on its map, which contains limited information about the environment. In an environment where more than one mobile agent interacts, the motions of the robots are uncertain and dynamic. A model for reactive agents is described and simulation results are presented to show their behavior patterns. 18 refs., 2 figs.
Human guidance of mobile robots in complex 3D environments using smart glasses
NASA Astrophysics Data System (ADS)
Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel
2016-05-01
In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.
Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots
NASA Astrophysics Data System (ADS)
Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs, each comprising a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to the environment. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, that method does not ensure the existence of joint angles that achieve the reference wheel positions calculated by the MPC. In this study, we propose a model predictive control that considers the reachable ranges of the wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. Thus, we formulate the MPC as a quadratic problem with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. Through the MPC optimization, the reference wheel positions are calculated, while each joint angle is determined by inverse kinematics. By explicitly considering the reachable ranges, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
Environment exploration and SLAM experiment research based on ROS
NASA Astrophysics Data System (ADS)
Li, Zhize; Zheng, Wei
2017-11-01
Robots need to obtain information about their surrounding environment by means of map learning. SLAM and navigation for mobile robots are developing rapidly. ROS (Robot Operating System) is widely used in the field of robotics because of its convenient code reuse and open source nature. Numerous excellent SLAM and navigation algorithms have been ported to ROS packages. hector_slam is one of them; it can build occupancy grid maps online quickly while requiring few computational resources. These characteristics make an embedded handheld mapping system possible. Similarly, hector_navigation also does well in the navigation field: it can perform path planning and environment exploration by itself using only an environmental sensor. Combining hector_navigation with hector_slam can realize low-cost environment exploration, path planning, and SLAM at the same time.
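As a small companion example (assuming ROS 1 and hector_mapping running with its default topic names), the node below subscribes to the occupancy grid that hector_slam publishes on the standard map topic and logs how much of the environment has been explored; it is illustrative glue code, not part of the cited experiment.

#!/usr/bin/env python
import rospy
from nav_msgs.msg import OccupancyGrid

# Listen to the occupancy grid published by hector_mapping (default topic
# "map") and report the fraction of cells that are no longer unknown.

def map_callback(msg):
    cells = msg.data                       # -1 unknown, 0..100 occupancy
    known = sum(1 for c in cells if c >= 0)
    ratio = 100.0 * known / len(cells) if cells else 0.0
    rospy.loginfo("map %dx%d @ %.3f m/cell, %.1f%% explored",
                  msg.info.width, msg.info.height,
                  msg.info.resolution, ratio)

if __name__ == "__main__":
    rospy.init_node("exploration_monitor")
    rospy.Subscriber("map", OccupancyGrid, map_callback, queue_size=1)
    rospy.spin()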
ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
1995-02-01
This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
Spletzer, Barry L.; Fischer, Gary J.; Marron, Lisa C.; Martinez, Michael A.; Kuehl, Michael A.; Feddema, John T.
2001-01-01
The present invention provides a hopping robot that includes a misfire tolerant linear actuator suitable for long trips, low energy steering and control, reliable low energy righting, miniature low energy fuel control. The present invention provides a robot with hopping mobility, capable of traversing obstacles significant in size relative to the robot and capable of operation on unpredictable terrain over long range. The present invention further provides a hopping robot with misfire-tolerant combustion actuation, and with combustion actuation suitable for use in oxygen-poor environments.
Mobile robots traversability awareness based on terrain visual sensory data fusion
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2007-04-01
In this paper, we present methods that significantly improve a robot's awareness of its terrain traversability conditions. Terrain traversability awareness is achieved by associating terrain image appearances from different poses and fusing information extracted from multimodal imaging and range sensor data for localization and clustering of environment landmarks. Initially, we describe methods for extracting salient terrain features for landmark registration from two or more images taken from different via points along the robot's trajectory. Image registration is applied as a means of overlaying two or more images of the same terrain scene taken from different viewpoints; the registration geometrically aligns the salient landmarks of the two images (the reference and sensed images). A similarity matching technique is proposed for matching the salient terrain landmarks. Secondly, we present three terrain classifier models, based on rule-based reasoning, a supervised neural network, and fuzzy logic, for classifying terrain conditions under uncertainty and mapping the robot's terrain perception to apt traversability measures. This paper addresses the technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We describe different methods for detecting salient terrain features based on image texture analysis techniques, and present three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain conditions exclusively within each sub-terrain region based on terrain spatial and textural cues.
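The rules and texture features themselves are not given in the abstract, so the following is an illustrative rule-based sub-terrain classifier over simple texture statistics (mean intensity, intensity spread, mean gradient magnitude); all thresholds and labels are assumptions, not the paper's rule base.

import numpy as np

def texture_features(patch):
    """Simple texture cues for one sub-terrain image patch (grayscale array):
    mean intensity, intensity standard deviation, and mean gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    return patch.mean(), patch.std(), np.hypot(gx, gy).mean()

def rule_based_traversability(patch):
    """Illustrative rule base: smooth, uniform patches are rated easy;
    rough, high-contrast ones hard. Thresholds are assumptions."""
    mean_i, std_i, grad = texture_features(patch)
    if std_i < 10 and grad < 5:
        return "traversable"            # e.g. flat sand / packed soil
    if std_i < 30 and grad < 15:
        return "moderately traversable"
    return "non-traversable"            # e.g. rock field, steep rubble

# Example on a synthetic rough patch.
rng = np.random.default_rng(1)
rough = rng.integers(0, 255, (32, 32))
print(rule_based_traversability(rough))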
Hernandez Bennetts, Victor; Lilienthal, Achim J; Neumann, Patrick P; Trincavelli, Marco
2011-01-01
Roboticists often take inspiration from animals for designing sensors, actuators, or algorithms that control the behavior of robots. Bio-inspiration is motivated by the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odor plume. In particular, the task of tracking an odor plume up to its source has nearly exclusively been addressed using biologically inspired algorithms, and robots have been developed, for example, to mimic the behavior of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behavior of animals, can be usefully "translated" into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically inspired algorithms.
ERIC Educational Resources Information Center
Sugimoto, Masanori
2011-01-01
This paper describes a system called GENTORO that uses a robot and a handheld projector for supporting children's storytelling activities. GENTORO differs from many existing systems in that children can make a robot play their own story in a physical space augmented by mixed-reality technologies. Pilot studies have been conducted to clarify the…
Mobile Robot Navigation and Obstacle Avoidance in Unstructured Outdoor Environments
2017-12-01
The approach uses ROS-style publish/subscribe messaging, in which a node subscribes to a specific topic to receive the messages published to that topic, and an artificial potential field characterized as the sum of an attractive potential pulling the robot toward the goal and a repulsive potential associated with obstacles detected within the robot's laser view horizon (laser_max = 20), with a goal-reached distance metric (goaldist = 0.5).
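Based on the surviving description above, here is a minimal sketch of the standard artificial potential field: the total field is the sum of an attractive potential toward the goal and a repulsive potential near obstacles detected by the laser. The gains are assumptions; the 20 m laser horizon and 0.5 m goal tolerance merely echo the parameters mentioned above.

import numpy as np

# Artificial potential field step: follow the negative gradient of the sum of
# an attractive potential (toward the goal) and repulsive potentials (away
# from nearby obstacles). Gains and the influence radius are illustrative.
K_ATT, K_REP = 1.0, 50.0
LASER_MAX, GOAL_DIST, INFLUENCE = 20.0, 0.5, 3.0

def apf_step(pos, goal, obstacles):
    """Return a unit step direction from the negative gradient of the field."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    if np.linalg.norm(goal - pos) < GOAL_DIST:
        return np.zeros(2)                      # goal reached
    force = K_ATT * (goal - pos)                # attractive component
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < min(INFLUENCE, LASER_MAX):
            force += K_REP * (1.0 / d - 1.0 / INFLUENCE) * diff / d**3
    return force / (np.linalg.norm(force) + 1e-9)

print(apf_step([0, 0], [10, 0], obstacles=[[4, 0.5]]))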
Astrobee: Space Station Robotic Free Flyer
NASA Technical Reports Server (NTRS)
Provencher, Chris; Bualat, Maria G.; Barlow, Jonathan; Fong, Terrence W.; Smith, Marion F.; Smith, Ernest E.; Sanchez, Hugo S.
2016-01-01
Astrobee is a free flying robot that will fly inside the International Space Station and primarily serve as a research platform for robotics in zero gravity. Astrobee will also provide mobile camera views to ISS flight and payload controllers, and collect various sensor data within the ISS environment for the ISS Program. Astrobee consists of two free flying robots, a dock, and ground data system. This presentation provides an overview, high level design description, and project status.
Easy robot programming for beginners and kids using augmented reality environments
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Nishiguchi, Masahiro
2010-11-01
The authors have developed a mobile robot that can be programmed with command and instruction cards. All you have to do is arrange the cards on a table and shoot the programming stage with a camera. Our card programming system recognizes the instruction cards and translates the icon commands into the motor driver program. This card programming environment also provides low-level structured programming.
Development of dog-like retrieving capability in a ground robot
NASA Astrophysics Data System (ADS)
MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary
2013-01-01
This paper presents the Mobile Intelligence Team's approach to addressing the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operations, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects will be discussed, including an analysis of its performance, and an overview of the entire system architecture will be presented. A discussion of the robot's performance in the competition will demonstrate the system's successes in real-world testing.
Optimal path planning for a mobile robot using cuckoo search algorithm
NASA Astrophysics Data System (ADS)
Mohanty, Prases K.; Parhi, Dayal R.
2016-03-01
Shortest/optimal path planning is essential for the efficient operation of autonomous vehicles. In this article, a new nature-inspired meta-heuristic algorithm is applied to mobile robot path planning in an unknown or partially known environment populated by a variety of static obstacles. This meta-heuristic algorithm is based on the Levy flight behaviour and brood-parasitic behaviour of cuckoos. A new objective function is formulated over the robot, the target, and the obstacles, which satisfies the conditions of obstacle avoidance and the target-seeking behaviour of the robots present in the terrain. Depending upon the objective function value of each nest (cuckoo) in the swarm, the robot avoids obstacles and proceeds towards the target. A smooth optimal trajectory is produced by this algorithm as the robot reaches its goal. Some simulation and experimental results are presented at the end of the paper to show the effectiveness of the proposed navigational controller.
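A compact sketch of the cuckoo search mechanics named above (Levy flights plus abandonment of a fraction of nests), applied here to choosing a collision-free waypoint toward a target; the objective, obstacle model, and all parameters are illustrative assumptions, and the abstract's full navigational controller is not reproduced.

import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(2)
start, target = np.array([0.0, 0.0]), np.array([9.0, 9.0])
obstacles = [(np.array([4.5, 4.5]), 1.2)]          # (centre, radius), illustrative
N_NESTS, ITERS, PA, BETA = 15, 300, 0.25, 1.5

def objective(p):
    """Distance to the target plus a penalty for being inside an obstacle."""
    cost = np.linalg.norm(p - target)
    for c, r in obstacles:
        cost += 100.0 * max(0.0, r - np.linalg.norm(p - c))
    return cost

def levy_step(size):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + BETA) * sin(pi * BETA / 2) /
             (gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / BETA)

nests = rng.uniform(0, 10, (N_NESTS, 2))
costs = np.array([objective(n) for n in nests])
for _ in range(ITERS):
    best = nests[np.argmin(costs)]
    # Global exploration: Levy flights biased toward the current best nest.
    new = nests + 0.01 * levy_step((N_NESTS, 2)) * (nests - best)
    # Abandon a fraction PA of nests (brood parasitism) and resample them.
    abandon = rng.random(N_NESTS) < PA
    new[abandon] = rng.uniform(0, 10, (abandon.sum(), 2))
    new_costs = np.array([objective(n) for n in new])
    better = new_costs < costs
    nests[better], costs[better] = new[better], new_costs[better]

print("best waypoint found:", nests[np.argmin(costs)])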
Learning classifier systems for single and multiple mobile robots in unstructured environments
NASA Astrophysics Data System (ADS)
Bay, John S.
1995-12-01
The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture, i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, to robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communication mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.
Control of an automated mobile manipulator using artificial immune system
NASA Astrophysics Data System (ADS)
Deepak, B. B. V. L.; Parhi, Dayal R.
2016-03-01
This paper addresses the coordination and control of a wheeled mobile manipulator (WMM) using an artificial immune system. The aim of the developed methodology is to navigate the system autonomously and transport jobs and tools in manufacturing environments. This study integrates the kinematic structures of a four-axis manipulator and a differential-drive wheeled mobile platform. The motion of the developed WMM is controlled by a complete system of parametric equations in terms of joint velocities, making the manipulator and platform follow desired trajectories within the workspace. The developed robot system performs its actions intelligently according to the environmental criteria sensed within its search space. To verify the effectiveness of the proposed immune-based motion planner for the WMM, simulation as well as experimental results are presented for various unknown environments.
Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.; Byrne, Raymond H.
2004-02-03
A miniature mobile robot provides a relatively inexpensive mobile robot. A mobile robot for searching an area provides a way for multiple mobile robots to operate in cooperating teams. A robotic system with a team of mobile robots communicating information among each other provides a way to locate a source cooperatively. A mobile robot with a sensor, a communication system, and a processor provides a way to execute a strategy for searching an area.
Searching Dynamic Agents with a Team of Mobile Robots
Juliá, Miguel; Gil, Arturo; Reinoso, Oscar
2012-01-01
This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to contain target agents. The algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments, showing the validity of the approach. PMID:23012519
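A minimal sketch of the grid-based Bayesian filter described above, assuming a square grid, a known maximum target speed, and negative (target-not-seen) observations over the robots' sensed cells; the prediction step uses wrap-around shifts for brevity, which a real implementation would replace with proper boundary handling.

import numpy as np

# Each cell holds the probability that a moving target is there. Prediction
# spreads probability to cells reachable within the target's maximum speed;
# the update zeroes the cells a robot has just observed as empty.
GRID = (40, 40)
CELL = 1.0                      # metres per cell, illustrative
TARGET_MAX_SPEED = 2.0          # metres per time step, illustrative

belief = np.full(GRID, 1.0 / (GRID[0] * GRID[1]))

def predict(belief):
    """Spread each cell's probability over cells the target could reach in
    one step, given its maximum speed (wrap-around shifts for brevity)."""
    reach = int(np.ceil(TARGET_MAX_SPEED / CELL))
    new = np.zeros_like(belief)
    for di in range(-reach, reach + 1):
        for dj in range(-reach, reach + 1):
            new += np.roll(np.roll(belief, di, axis=0), dj, axis=1)
    return new / new.sum()

def update_with_empty_observation(belief, observed_cells):
    """Robots sensed these cells and saw no target: set them to ~0, renormalise."""
    for (i, j) in observed_cells:
        belief[i, j] = 1e-9
    return belief / belief.sum()

belief = predict(belief)
belief = update_with_empty_observation(belief, [(5, j) for j in range(5, 15)])
print("most likely target cell:", np.unravel_index(np.argmax(belief), GRID))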
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
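A minimal sketch of a conservative occupancy-grid update of the kind described, using the common log-odds formulation: cells along a stereo ray become more likely free and the endpoint cell more likely occupied. The increments and clamping limits are illustrative assumptions, not the authors' values.

import numpy as np

# Log-odds occupancy-grid update: endpoints of range readings accumulate
# occupied evidence, cells traversed by the ray accumulate free evidence.
L_OCC, L_FREE, L_MIN, L_MAX = 0.85, -0.4, -4.0, 4.0

def update_ray(logodds, cells_along_ray, end_cell):
    """cells_along_ray: list of (i, j) the ray passes through before end_cell."""
    for (i, j) in cells_along_ray:
        logodds[i, j] = np.clip(logodds[i, j] + L_FREE, L_MIN, L_MAX)
    i, j = end_cell
    logodds[i, j] = np.clip(logodds[i, j] + L_OCC, L_MIN, L_MAX)
    return logodds

def occupancy_probability(logodds):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))

grid = np.zeros((50, 50))
grid = update_ray(grid, [(25, j) for j in range(25, 40)], (25, 40))
print(occupancy_probability(grid)[25, 38:42])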
Event detection and localization for small mobile robots using reservoir computing.
Antonelo, E A; Schrauwen, B; Stroobandt, D
2008-08-01
Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system, which operates at the edge of stability, where only a linear static readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks which are solely based on a few low-range, high-noise sensory data. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated in both a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
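A minimal echo-state-network sketch of the RC recipe described above: a fixed random reservoir driven by a distance-sensor stream, with only a linear readout trained by ridge regression. Reservoir size, spectral-radius scaling, the ridge term, and the toy event label are all assumptions.

import numpy as np

rng = np.random.default_rng(3)
N_IN, N_RES = 8, 200                       # e.g. 8 low-range distance sensors

# Fixed random reservoir: only the readout below is trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius < 1

def run_reservoir(inputs):
    """inputs: (T, N_IN) sensor stream -> (T, N_RES) reservoir states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-4):
    """Ridge-regression readout mapping reservoir states to event labels."""
    X, Y = states, targets
    return np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ Y)

# Toy usage: learn to flag time steps where the front sensor reads very close.
U = rng.random((500, N_IN))
y = (U[:, 0] < 0.1).astype(float).reshape(-1, 1)
S = run_reservoir(U)
W_out = train_readout(S, y)
print("training error:", np.mean((S @ W_out > 0.5).astype(float) != y))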
Airborne Chemical Sensing with Mobile Robots
Lilienthal, Achim J.; Loutfi, Amy; Duckett, Tom
2006-01-01
Airborne chemical sensing with mobile robots has been an active research area since the beginning of the 1990s. This article presents a review of research work in this field, including gas distribution mapping, trail guidance, and the different subtasks of gas source localisation. Due to the difficulty of modelling gas distribution in a real world environment with currently available simulation techniques, we focus largely on experimental work and do not consider publications that are purely based on simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harber, K.S.; Pin, F.G.
1990-03-01
The US DOE Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) and the Commissariat a l'Energie Atomique's (CEA) Office de Robotique et Productique within the Directorat a la Valorization are working toward a long-term cooperative agreement and relationship in the area of Intelligent Systems Research (ISR). This report presents the proceedings of the first CESAR/CEA Workshop on Autonomous Mobile Robots which took place at ORNL on May 30, 31 and June 1, 1989. The purpose of the workshop was to present and discuss methodologies and algorithms under development at the two facilities in the area of perception and navigation for autonomous mobile robots in unstructured environments. Experimental demonstration of the algorithms and comparison of some of their features were proposed to take place within the framework of a previously mutually agreed-upon demonstration scenario or "base-case." The base-case scenario, described in detail in Appendix A, involved autonomous navigation by the robot in an a priori unknown environment with dynamic obstacles, in order to reach a predetermined goal. From the intermediate goal location, the robot had to search for and locate a control panel, move toward it, and dock in front of the panel face. The CESAR demonstration was successfully accomplished using the HERMIES-IIB robot while subsets of the CEA demonstration performed using the ARES robot simulation and animation system were presented. The first session of the workshop focused on these experimental demonstrations and on the needs and considerations for establishing "benchmarks" for testing autonomous robot control algorithms.
ERIC Educational Resources Information Center
Kim, Yanghee; Smith, Diantha
2017-01-01
The ubiquity and educational potential of mobile applications are well acknowledged. This paper proposes six theory-based, pedagogical strategies to guide interaction design of mobile apps for young children. Also, to augment the capabilities of mobile devices, we used a humanoid robot integrated with a smartphone and developed an English-learning…
Usability testing of a mobile robotic system for in-home telerehabilitation.
Boissy, Patrick; Brière, Simon; Corriveau, Hélène; Grant, Andrew; Lauria, Michel; Michaud, François
2011-01-01
Mobile robots designed to enhance telepresence in support of telehealth services are being considered for numerous applications. TELEROBOT is a teleoperated mobile robotic platform equipped with videoconferencing capabilities and designed to be used in a home environment. In this study, the learnability of the system's teleoperation interface and controls was evaluated with ten rehabilitation professionals during four training sessions in a laboratory environment and in an unknown home environment, while executing a standardized evaluation protocol typically used in home care. Results show that the novice teleoperators' performance on two of the four metrics used (number of commands and total time) improved significantly across training sessions (ANOVAs, p<0.05) and that performance on these metrics in the last training session reflected the teleoperation abilities seen in the unknown home environment during navigation tasks (r=0.77 and 0.60). With only 4 hours of training, rehabilitation professionals were able to learn to teleoperate TELEROBOT successfully. However, their teleoperation performance remained significantly less efficient than that of an expert: under the home task condition (navigating the home environment from one point to another as fast as possible), this translated into completion times between 350 seconds (best performance) and 850 seconds (worst performance). Improvements in other usability aspects of the system will be needed to meet the requirements of in-home telerehabilitation.
The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration
NASA Astrophysics Data System (ADS)
Zhao, Ming; Han, Baoling
2016-11-01
The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance that integrates a binocular stereo vision sensor and a self-built 3D Lidar with modified ant colony optimization path planning to reconstruct a map of the environment. Because the working conditions of a mobile robot are complex, the result of 3D reconstruction with a single binocular sensor is undesirable when feature points are few and the lighting is poor. Therefore, this system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles. This paper proposes sensor information fusion to rebuild the environment map: the Lidar data and the visual data are first used separately to detect the distribution of obstacles, and the two results are then fused to obtain a more complete and more accurate obstacle distribution for the scene. The thesis then introduces the ant colony algorithm, analyzes in depth the advantages and disadvantages of ant colony optimization and their causes, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed, and poor search results. The experiment processes images and drives the motors in a MATLAB and Visual Studio development environment and builds a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a simulation platform under Linux.
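For reference, the sketch below is the standard ant-colony-optimization skeleton for grid path planning (probabilistic moves biased by pheromone and a distance heuristic, evaporation, and deposit proportional to path quality); the paper's specific modifications to the algorithm and its 2.5D grid are not reproduced, and all parameters are assumptions.

import numpy as np

rng = np.random.default_rng(4)
grid = np.zeros((15, 15)); grid[5:10, 7] = 1          # 1 = obstacle, illustrative map
start, goal = (0, 0), (14, 14)
ALPHA, BETA, RHO, Q, N_ANTS, ITERS = 1.0, 2.0, 0.3, 100.0, 20, 60
pher = np.ones(grid.shape)                            # pheromone per cell

def neighbours(c):
    for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        n = (c[0] + d[0], c[1] + d[1])
        if 0 <= n[0] < grid.shape[0] and 0 <= n[1] < grid.shape[1] and grid[n] == 0:
            yield n

def walk():
    """One ant: move cell to cell with probability ~ pheromone^ALPHA * heuristic^BETA."""
    path, cur, visited = [start], start, {start}
    while cur != goal and len(path) < 200:
        opts = [n for n in neighbours(cur) if n not in visited]
        if not opts:
            return None                                # dead end, discard this ant
        heur = [1.0 / (1e-3 + abs(goal[0] - n[0]) + abs(goal[1] - n[1])) for n in opts]
        w = np.array([pher[n] ** ALPHA * h ** BETA for n, h in zip(opts, heur)])
        cur = opts[rng.choice(len(opts), p=w / w.sum())]
        visited.add(cur); path.append(cur)
    return path if cur == goal else None

best = None
for _ in range(ITERS):
    paths = [p for p in (walk() for _ in range(N_ANTS)) if p]
    pher *= (1 - RHO)                                  # evaporation
    for p in paths:
        for c in p:
            pher[c] += Q / len(p)                      # shorter paths deposit more
        if best is None or len(p) < len(best):
            best = p
print("best path length:", len(best))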
Flexible Virtual Structure Consideration in Dynamic Modeling of Mobile Robots Formation
NASA Astrophysics Data System (ADS)
El Kamel, A. Essghaier; Beji, L.; Lerbet, J.; Abichou, A.
2009-03-01
In cooperative mobile robotics, we seek formation keeping and maintenance of a geometric configuration during movement. As a solution to these problems, the concept of a virtual structure is considered. Based on this idea, we have developed an efficient flexible virtual structure describing the dynamic model of n vehicles in formation, in which the whole formation is kept interdependent. Note that, for 2D and 3D space navigation, only a rigid virtual structure had been proposed in the literature, and the problem was limited to the kinematic behavior of the structure. Hence, the flexible virtual structure in the dynamic modeling of mobile robot formations presented in this paper gives the formation more capability to avoid obstacles in hostile environments while keeping formation and avoiding inter-agent collisions.
Mobile robotic sensors for perimeter detection and tracking.
Clark, Justin; Fierro, Rafael
2007-02-01
Mobile robot/sensor networks have emerged as tools for environmental monitoring, search and rescue, exploration and mapping, evaluation of civil infrastructure, and military operations. These networks consist of many sensors each equipped with embedded processors, wireless communication, and motion capabilities. This paper describes a cooperative mobile robot network capable of detecting and tracking a perimeter defined by a certain substance (e.g., a chemical spill) in the environment. Specifically, the contributions of this paper are twofold: (i) a library of simple reactive motion control algorithms and (ii) a coordination mechanism for effectively carrying out perimeter-sensing missions. The decentralized nature of the methodology implemented could potentially allow the network to scale to many sensors and to reconfigure when adding/deleting sensors. Extensive simulation results and experiments verify the validity of the proposed cooperative control scheme.
Toward controlling perturbations in robotic sensor networks
NASA Astrophysics Data System (ADS)
Banerjee, Ashis G.; Majumder, Saikat R.
2014-06-01
Robotic sensor networks (RSNs), which consist of networks of sensors placed on mobile robots, are being increasingly used for environment monitoring applications. In particular, a lot of work has been done on simultaneous localization and mapping of the robots, and optimal sensor placement for environment state estimation [1]. The deployment of RSNs, however, remains challenging in harsh environments where the RSNs have to deal with significant perturbations in the forms of wind gusts, turbulent water flows, sand storms, or blizzards that disrupt inter-robot communication and individual robot stability. Hence, there is a need to be able to control such perturbations and bring the networks to desirable states with stable nodes (robots) and minimal operational performance (environment sensing). Recent work has demonstrated the feasibility of controlling the non-linear dynamics in other communication networks like emergency management systems and power grids by introducing compensatory perturbations to restore network stability and operation [2]. In this paper, we develop a computational framework to investigate the usefulness of this approach for RSNs in marine environments. Preliminary analysis shows promising performance and identifies bounds on the original perturbations within which it is possible to control the networks.
NASA Astrophysics Data System (ADS)
Hanford, Scott D.
Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. 
The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.; Hamel, W.R.; Weisbin, C.R.
1988-01-01
The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform, along with a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 Vax 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.
Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-01-01
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own pose using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can recognize pedestrians that are invisible to them but visible to other robots. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
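As an aside on the covariance intersection (CI) step mentioned above, the following is a minimal numpy sketch of CI fusion of two Gaussian track estimates; the grid search over the weighting factor, the function name, and the example values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_omega=50):
    """Fuse two consistent estimates (x1, P1) and (x2, P2) with CI.

    The fused covariance satisfies P^-1 = w*P1^-1 + (1-w)*P2^-1; here w is
    chosen by a simple grid search that minimises the trace of P.
    """
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_omega):
        P_inv = w * P1_inv + (1.0 - w) * P2_inv
        P = np.linalg.inv(P_inv)
        x = P @ (w * P1_inv @ x1 + (1.0 - w) * P2_inv @ x2)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

# Example: two pedestrian position estimates from different robots.
x_a, P_a = np.array([2.0, 1.0]), np.diag([0.20, 0.50])
x_b, P_b = np.array([2.3, 0.8]), np.diag([0.60, 0.15])
x_fused, P_fused = covariance_intersection(x_a, P_a, x_b, P_b)
```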
Mobile Robot for Exploring Cold Liquid/Solid Environments
NASA Technical Reports Server (NTRS)
Bergh, Charles; Zimmerman, Wayne
2006-01-01
The Planetary Autonomous Amphibious Robotic Vehicle (PAARV), now at the prototype stage of development, was originally intended for use in acquiring and analyzing samples of solid, liquid, and gaseous materials in cold environments on the shores and surfaces, and at shallow depths below the surfaces, of lakes and oceans on remote planets. The PAARV also could be adapted for use on Earth in similar exploration of cold environments in and near Arctic and Antarctic oceans and glacial and sub-glacial lakes.
AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program
NASA Astrophysics Data System (ADS)
Gothard, Benny M.
2002-02-01
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (see 'A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation'). The Mobile Autonomous Robot Software Self-Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec-G4 PowerPC (PPC), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against the previous Demo II Semi-autonomous Surrogate Vehicle processing architectures, along with desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
Designing a Self-Stabilizing Robot for Dynamic Mobile Manipulation
2006-01-01
Deegan, Patrick; Thibodeau, Bryan J.; Grupen, Roderic
Biomorphic Explorers Leading Towards a Robotic Ecology
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Miralles, Carlos; Chao, Tien-Hsin
1999-01-01
This paper presents viewgraphs on biomorphic explorers, which provide extended survival and useful life for robots in a robotic ecology. The topics include: 1) Biomorphic Explorers; 2) Advanced Mobility for Biomorphic Explorers; 3) Biomorphic Explorers: Size-Based Classification; 4) Biomorphic Explorers: Classification (Based on Mobility and Ambient Environment); 5) Biomorphic Flight Systems: Vision; 6) Biomorphic Glider Deployment Concept: Larger Glider Deploy/Local Relay; 7) Biomorphic Glider Deployment Concept: Balloon Deploy/Dual Relay; 8) Biomorphic Explorer: Conceptual Design; 9) Biomorphic Gliders; and 10) Applications.
Quantum robots plus environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-07-23
A quantum robot is a mobile quantum system, including an on-board quantum computer and needed ancillary systems, that interacts with an environment of quantum systems. Quantum robots carry out tasks whose goals include making specified changes in the state of the environment or carrying out measurements on the environment. The environments considered so far, oracles, data bases, and quantum registers, are seen to be special cases of the environments considered here. It is also seen that a quantum robot should include a quantum computer and cannot be simply a multistate head. A model of quantum robots and their interactions is discussed in which each task, as a sequence of alternating computation and action phases, is described by a unitary single-time-step operator T ≈ T_a + T_c (discrete space and time are assumed). The overall system dynamics is described as a sum over paths of completed computation (T_c) and action (T_a) phases. A simple example of a task, measuring the distance between the quantum robot and a particle on a 1D lattice with quantum phase path dispersion present, is analyzed. A decision diagram for the task is presented and analyzed.
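For readers unfamiliar with the notation, here is a minimal LaTeX sketch of how a sum-over-paths description follows from the single-step decomposition, assuming the operator splits as stated in the abstract; the expansion below is generic algebra, not the paper's detailed model.

```latex
\[
  T \approx T_a + T_c, \qquad
  T^{\,n} = (T_a + T_c)^{\,n}
          = \sum_{x_1,\dots,x_n \in \{a,c\}} T_{x_n} \cdots T_{x_1},
\]
% each term in the sum corresponds to one path of alternating action (a)
% and computation (c) phases carried out over n discrete time steps.
```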
Experiments in autonomous robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, W.R.
1987-01-01
The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.
A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment
Gao, Junyao; Zhao, Fangzhou; Liu, Yi
2017-01-01
This paper introduces a search-and-rescue robot system used for remote sensing of the underground coal mine environment, which is composed of an operating control unit and two mobile robots with explosion-proof and waterproof functions. This robot system is designed to observe and collect information about the coal mine environment through remote control. Thus, this system can be regarded as a multifunction sensor, which realizes remote sensing. When the robot system detects danger, it sends out signals to warn rescuers to keep away. The robot consists of two gas sensors, two cameras, a two-way audio system, a 1 km-long fiber-optic cable for communication, and a mechanical explosion-proof manipulator. In particular, the manipulator is a novel explosion-proof manipulator for clearing obstacles, which has 3 degrees of freedom but is driven by only two motors. Furthermore, the two robots can communicate in series over 2 km with the operating control unit. The development of the robot system may provide a reference for developing future search-and-rescue systems. PMID:29065560
Some foundational aspects of quantum computers and quantum robots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.; Physics
1998-01-01
This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.
Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen
2014-01-01
A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel-lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282
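The basic pinhole-geometry idea behind a single-camera distance estimate from a pattern of known dimensions (such as a ground tile) can be sketched as follows; the focal length, function name, and numbers are illustrative assumptions, and the paper's IBDMS/PLDMS formulas may differ.

```python
def distance_from_known_width(focal_px, real_width_m, pixel_width):
    """Pinhole-camera range estimate: Z = f * W / w.

    focal_px     -- camera focal length in pixels (from calibration)
    real_width_m -- true width of the reference rectangle (e.g. a tile)
    pixel_width  -- measured width of the rectangle in the image, in pixels
    """
    return focal_px * real_width_m / pixel_width

# Example: a 0.30 m tile appears 60 px wide through a 600 px focal length.
z = distance_from_known_width(600.0, 0.30, 60.0)   # -> 3.0 m
```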
A natural-language interface to a mobile robot
NASA Technical Reports Server (NTRS)
Michalowski, S.; Crangle, C.; Liang, L.
1987-01-01
The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.
Tracked robot controllers for climbing obstacles autonomously
NASA Astrophysics Data System (ADS)
Vincent, Isabelle
2009-05-01
Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has seen very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box- and stair-climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.
Robots and Humans in Planetary Exploration: Working Together?
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.; Lyons, Valerie (Technical Monitor)
2002-01-01
Today's approach to human-robotic cooperation in planetary exploration focuses on using robotic probes as precursors to human exploration. A large portion of current NASA planetary surface exploration is focused on Mars, and robotic probes are seen as precursors to human exploration in: learning about operation and mobility on Mars; learning about the environment of Mars; mapping the planet and selecting landing sites for human missions; demonstrating critical technology; manufacturing fuel before human presence; and emplacing elements of human-support infrastructure.
A simple highly efficient non invasive EMG-based HMI.
Vitiello, N; Olcese, U; Oddo, C M; Carpaneto, J; Micera, S; Carrozza, M C; Dario, P
2006-01-01
Muscle activity recorded non-invasively is sufficient to control a mobile robot if it is used in combination with an algorithm for its asynchronous analysis. In this paper, we show that several subjects can successfully control the movements of a robot in a structured environment made up of six rooms by contracting two different muscles, using a simple algorithm. After a short training period, subjects were able to control the robot with performances comparable to those achieved when controlling the robot manually.
Navigation system for autonomous mapper robots
NASA Astrophysics Data System (ADS)
Halbach, Marc; Baudoin, Yvan
1993-05-01
This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is taken into account. The US sensors are controlled by the path planning module. The third part concerns path planning; a simulation of a wheeled robot is also presented.
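As an illustration of an obstacle/free-space probabilistic classification grid, here is a minimal log-odds occupancy-grid update in Python; the sensor-model increments, grid size, and cell indices are illustrative assumptions rather than the system described above.

```python
import numpy as np

# Log-odds occupancy grid: 0 = unknown, >0 leans occupied, <0 leans free.
grid = np.zeros((200, 200))
L_OCC, L_FREE = 0.85, -0.4          # illustrative sensor-model increments

def update_cell(grid, ij, occupied):
    """Bayes update of one cell in log-odds form."""
    i, j = ij
    grid[i, j] += L_OCC if occupied else L_FREE

def probability(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

# One ultrasonic return: cells along the beam are free, the cell at the
# range reading is (probably) occupied.
for cell in [(100, 100), (100, 101), (100, 102)]:
    update_cell(grid, cell, occupied=False)
update_cell(grid, (100, 103), occupied=True)
```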
Mobile robot dynamic path planning based on improved genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, Yong; Zhou, Heng; Wang, Ying
2017-08-01
In a dynamic, unknown environment, the dynamic path planning of mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed: a reward value model is designed to estimate the probability of dynamic obstacles on the path, and the reward value function is applied within the genetic algorithm. Unique coding techniques reduce the computational complexity of the algorithm. The fitness function of the genetic algorithm fully considers three factors: the safety of the path, the length of the path, and the reward value of the path. The simulation results show that the proposed genetic algorithm is efficient in all kinds of complex dynamic environments.
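A minimal sketch of a fitness function combining the three factors named above (path safety, path length, and reward value); the weights, helper functions, and example paths are illustrative assumptions, not the paper's formulation.

```python
import math

def path_length(path):
    """Total Euclidean length of a path given as (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def fitness(path, clearance, reward, w_safe=1.0, w_len=1.0, w_rew=1.0):
    """Higher is better: favour safe, short paths with a high reward value.

    clearance -- minimum distance from the path to any known obstacle
    reward    -- estimated reward value (low probability of dynamic obstacles)
    The weights are illustrative and would normally be tuned.
    """
    return w_safe * clearance - w_len * path_length(path) + w_rew * reward

# Example: compare two candidate chromosomes (paths).
p1 = [(0, 0), (1, 1), (2, 2)]
p2 = [(0, 0), (2, 0), (2, 2)]
best = max((p1, p2), key=lambda p: fitness(p, clearance=0.5, reward=0.8))
```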
A Mobile Robot for Remote Response to Incidents Involving Hazardous Materials
NASA Technical Reports Server (NTRS)
Welch, Richard V.
1994-01-01
This paper will describe a teleoperated mobile robot system being developed at JPL for use by the JPL Fire Department/HAZMAT Team. The project, which began in October 1990, is focused on prototyping a robotic vehicle which can be quickly deployed and easily operated by HAZMAT Team personnel allowing remote entry and exploration of a hazardous material incident site. The close involvement of JPL Fire Department personnel has been critical in establishing system requirements as well as evaluating the system. The current robot, called HAZBOT III, has been especially designed for operation in environments that may contain combustible gases. Testing of the system with the Fire Department has shown that teleoperated robots can successfully gain access to incident sites allowing hazardous material spills to be remotely located and identified. Work is continuing to enable more complex missions through enhancement of the operator interface and by allowing tetherless operation.
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
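The flavour of such an adaptive, error-driven estimator can be sketched with a gradient-type update under an assumed linear parameterization y ≈ W p of the measurements in the unknown position; this is a simplified stand-in, not the paper's Slotine-Li-style derivation or its convergence proof.

```python
import numpy as np

def adaptive_step(p_hat, W, y, gamma=0.05):
    """One gradient-type update of the estimated position p_hat.

    Assumes a (locally) linear parameterization y ~= W @ p, where y stacks
    the omnidirectional image measurements and W is built from tracked
    feature points, odometry velocity, and orientation angles. The estimate
    is driven by the prediction error.
    """
    error = W @ p_hat - y
    return p_hat - gamma * W.T @ error

# Illustrative use with synthetic data (true position [1.0, 2.0]).
rng = np.random.default_rng(0)
p_true, p_hat = np.array([1.0, 2.0]), np.zeros(2)
for _ in range(500):
    W = rng.normal(size=(4, 2))   # stand-in regressor for one time step
    y = W @ p_true                # noiseless synthetic measurements
    p_hat = adaptive_step(p_hat, W, y)
```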
Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation
Masmoudi, Mohamed Slim; Masmoudi, Mohamed
2016-01-01
This paper describes the design and implementation of a trajectory tracking controller using fuzzy logic for a mobile robot navigating in indoor environments. Most previous works used two independent controllers for navigation and obstacle avoidance. The main contribution of this paper is that we use only one fuzzy controller for both navigation and obstacle avoidance. The mobile robot used is equipped with DC motors, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories are used and simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation times and travelled path. PMID:27688748
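A minimal sketch of a single Mamdani-style fuzzy controller that blends goal seeking and obstacle avoidance in one rule base; the membership functions, rules, and defuzzification below are illustrative assumptions, not the controller described above.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(heading_err, obstacle_dist):
    """One rule base for both goal seeking and obstacle avoidance.

    heading_err   -- angle to the goal in degrees (negative = goal on the left)
    obstacle_dist -- nearest IR range reading in metres
    Returns a steering command in degrees (weighted-average defuzzification).
    """
    near = max(0.0, 1.0 - obstacle_dist / 0.6)   # 1 at contact, 0 beyond 0.6 m
    far = 1.0 - near
    goal_left = tri(heading_err, -90.0, -45.0, 0.0)
    goal_right = tri(heading_err, 0.0, 45.0, 90.0)

    # rule strength -> consequent turn angle (degrees)
    rules = [
        (far * goal_left, -30.0),   # clear ahead, goal on the left  -> turn left
        (far * goal_right, 30.0),   # clear ahead, goal on the right -> turn right
        (near, 45.0),               # obstacle close                 -> evade right
    ]
    total = sum(w for w, _ in rules)
    return sum(w * u for w, u in rules) / total if total > 0.0 else 0.0

print(fuzzy_steer(heading_err=-30.0, obstacle_dist=1.2))   # about -30 degrees
```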
A two-class self-paced BCI to control a robot in four directions.
Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro
2011-01-01
In this work, an electroencephalographic analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward and move back. In order to reduce the probability of misclassification, the BCI is to be controlled with only two mental tasks (relaxed state versus imagination of right hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The obtained results show that the proposed interface enables control over the robot, even for subjects with low BCI performance.
A review of physical security robotics at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roerig, S.C.
1990-01-01
As an outgrowth of research into physical security technologies, Sandia is investigating the role of robotics in security systems. Robotics may allow more effective utilization of guard forces, especially in scenarios where personnel would be exposed to harmful environments. Robots can provide intrusion detection and assessment functions for failed sensors or transient assets, can test existing fixed site sensors, and can gather additional intelligence and dispense delaying elements. The Robotic Security Vehicle (RSV) program for DOE/OSS is developing a fieldable prototype for an exterior physical security robot based upon a commercial four wheel drive vehicle. The RSV will be capable of driving itself, being driven remotely, or being driven by an onboard operator around a site and will utilize its sensors to alert an operator to unusual conditions. The Remote Security Station (RSS) program for the Defense Nuclear Agency is developing a proof-of-principle robotic system which will be used to evaluate the role, and associated cost, of robotic technologies in exterior security systems. The RSS consists of an independent sensor pod, a mobile sensor platform and a control and display console. Sensor data fusion is used to optimize the system's intrusion detection performance. These programs are complementary; the RSV concentrates on developing autonomous mobility, while the RSS thrust is on mobile sensor employment. 3 figs.
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots
Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im
2017-01-01
Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built by a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height information of an obstacle, obtained from the depth information of the RGB-D sensor, onto a grid map generated using the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed. PMID:29186843
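The core of a 2.5D elevation map, projecting 3D points onto a 2D grid while keeping a height value per cell, can be sketched as follows; the frame convention, resolution, and use of the maximum height are illustrative assumptions.

```python
import numpy as np

def elevation_map(points_xyz, resolution=0.05, size=(200, 200)):
    """Project 3D points onto a 2.5D elevation grid.

    Each cell stores the maximum height seen in it; NaN means no data.
    points_xyz is an (N, 3) array of x, y, z coordinates in metres,
    assumed to have non-negative x, y in the map frame.
    """
    grid = np.full(size, np.nan)
    for x, y, z in points_xyz:
        i, j = int(x / resolution), int(y / resolution)
        if 0 <= i < size[0] and 0 <= j < size[1]:
            if np.isnan(grid[i, j]) or z > grid[i, j]:
                grid[i, j] = z
    return grid

# Example: a small point cloud with a 0.4 m obstacle near (1.0 m, 1.0 m).
pts = np.array([[1.0, 1.0, 0.4], [1.0, 1.05, 0.35], [2.0, 2.0, 0.0]])
emap = elevation_map(pts)
```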
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The resulting information is important for mobile robots that must solve tasks in the area of household robotics. In our work, a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and, finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through knowledge about the 3D articulated objects and 3D scene analysis.
ERIC Educational Resources Information Center
Mioduser, David; Levy, Sharona T.
2010-01-01
This study explores young children's ability to construct and explain adaptive behaviors of a behaving artifact, an autonomous mobile robot with sensors. A central component of the behavior construction environment is the RoboGan software that supports children's construction of spatiotemporal events with an a-temporal rule structure. Six…
Two modular neuro-fuzzy system for mobile robot navigation
NASA Astrophysics Data System (ADS)
Bobyr, M. V.; Titov, V. S.; Kulabukhov, S. A.; Syryamkin, V. I.
2018-05-01
This article considers a fuzzy model for the navigation of a mobile robot operating in two modes. In the first mode the mobile robot moves along a line. In the second mode, the mobile robot searches for a target in unknown space. The structure and schematic circuit of the four-wheeled mobile robot are presented. The article describes the movement of the mobile robot based on a two-modular neuro-fuzzy system. The neuro-fuzzy inference algorithm used in the two-modular control system for the movement of the mobile robot is given. The experimental model of the mobile robot and the simulation of the neuro-fuzzy algorithm used for its control are presented.
Beyond adaptive-critic creative learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Liao, Xiaoqun; Cao, Ming; Hall, Ernest L.
2001-10-01
Intelligent industrial and mobile robots may be considered proven technology in structured environments. Teach programming and supervised learning methods permit solutions to a variety of applications. However, we believe that extending the operation of these machines to more unstructured environments requires a new learning method. Both unsupervised learning and reinforcement learning are potential candidates for these new tasks. The adaptive critic method has been shown to provide useful approximations or even optimal control policies to non-linear systems. The purpose of this paper is to explore the use of new learning methods that go beyond the adaptive critic method for unstructured environments. The adaptive critic is a form of reinforcement learning. A critic element provides only high-level grading corrections to a cognition module that controls the action module. In the proposed system the critic's grades are modeled and forecasted, so that an anticipated set of sub-grades is available to the cognition module. The forecasted grades are interpolated and are available on the time scale needed by the action module. The success of the system is highly dependent on the accuracy of the forecasted grades and the adaptability of the action module. Examples from the guidance of a mobile robot are provided to illustrate the method for simple line following and for the more complex navigation and control in an unstructured environment. The theory presented here, which goes beyond the adaptive critic, may be called creative theory. Creative theory is a form of learning that models the highest level of human learning - imagination. The application of creative theory appears to extend not only to mobile robots but also to many other forms of human endeavor, such as educational learning and business forecasting. Reinforcement learning such as the adaptive critic may be applied to known problems to aid in the discovery of their solutions. The significance of creative theory is that it permits the discovery of unknown problems, ones that are not yet recognized but may be critical to survival or success.
Modeling, validation and analysis of a Whegs robot in the USARSim environment
NASA Astrophysics Data System (ADS)
Taylor, Brian K.; Balakirsky, Stephen; Messina, Elena; Quinn, Roger D.
2008-04-01
Simulation of robots in a virtual domain has multiple benefits. End users can use the simulation as a training tool to increase their skill with the vehicle without risking damage to the robot or surrounding environment. Simulation allows researchers and developers to benchmark robot performance in a range of scenarios without having the physical robot or environment present. The simulation can also help guide and generate new design concepts. USARSim (Unified System for Automation and Robot Simulation) is a tool that is being used to accomplish these goals, particularly within the realm of search and rescue. It is based on the Unreal Tournament 2004 gaming engine, which approximates the physics of how a robot interacts with its environment. A family of vehicles that can benefit from simulation in USARSim are Whegs(TM) robots. Developed in the Biorobotics Laboratory at Case Western Reserve University, Whegs(TM) robots are highly mobile ground vehicles that use abstracted biological principles to achieve a robust level of locomotion, including passive gait adaptation and enhanced climbing abilities. This paper describes a Whegs(TM) robot model that was constructed in USARSim. The model was configured with the same kinds of behavioral characteristics found in real Whegs(TM) vehicles. Once these traits were implemented, a validation study was performed using identical performance metrics measured on both the virtual and real vehicles to quantify vehicle performance and to ensure that the virtual robot's performance matched that of the real robot.
Mixed-Initiative Human-Robot Interaction: Definition, Taxonomy, and Survey
2015-01-01
This survey addresses mixed-initiative human-robot interaction, which is motivated by emergency response situations (i.e., situations harmful for human lives) ranging from natural disasters (e.g., the Fukushima nuclear plant meltdown [1]) to terrorist attacks [2], where the operating environment can be uncertain, unstructured, and hostile; at the damaged Fukushima Daiichi Nuclear Power Plants, for example, high radiation levels posed danger to human responders and motivated the use of mobile rescue robots.
Multi-Sensor Person Following in Low-Visibility Scenarios
Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier
2010-01-01
Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506
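The sensor-switching idea, where a vision-based smoke estimate decides which sensor combination drives person following, can be sketched as below; the threshold, sensor names, and two-way split are illustrative assumptions, not the paper's exact policy.

```python
def select_sensors(smoke_level, threshold=0.5):
    """Choose the sensor set for person following from a smoke estimate.

    smoke_level -- value in [0, 1] produced by a vision-based smoke detector
    Returns the list of sensors to trust in the current conditions.
    """
    if smoke_level < threshold:
        return ["laser_rangefinder", "camera"]   # good visibility
    return ["sonar", "laser_rangefinder"]        # degraded visibility

print(select_sensors(0.8))   # -> ['sonar', 'laser_rangefinder']
```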
Small-scale soft-bodied robot with multimodal locomotion.
Hu, Wenqi; Lum, Guo Zhan; Mastrangeli, Massimo; Sitti, Metin
2018-02-01
Untethered small-scale (from several millimetres down to a few micrometres in all dimensions) robots that can non-invasively access confined, enclosed spaces may enable applications in microfactories such as the construction of tissue scaffolds by robotic assembly, in bioengineering such as single-cell manipulation and biosensing, and in healthcare such as targeted drug delivery and minimally invasive surgery. Existing small-scale robots, however, have very limited mobility because they are unable to negotiate obstacles and changes in texture or material in unstructured environments. Of these small-scale robots, soft robots have greater potential to realize high mobility via multimodal locomotion, because such machines have higher degrees of freedom than their rigid counterparts. Here we demonstrate magneto-elastic soft millimetre-scale robots that can swim inside and on the surface of liquids, climb liquid menisci, roll and walk on solid surfaces, jump over obstacles, and crawl within narrow tunnels. These robots can transit reversibly between different liquid and solid terrains, as well as switch between locomotive modes. They can additionally execute pick-and-place and cargo-release tasks. We also present theoretical models to explain how the robots move. Like the large-scale robots that can be used to study locomotion, these soft small-scale robots could be used to study soft-bodied locomotion produced by small organisms.
Real-time multiple human perception with color-depth cameras on a mobile robot.
Zhang, Hao; Reardon, Christopher; Parker, Lynne E
2013-10-01
The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
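A much-simplified, height-threshold stand-in for the ground and ceiling removal step is sketched below; the thresholds and the assumption of an upward z axis are illustrative, and the paper's actual plane estimation may differ.

```python
import numpy as np

def remove_ground_and_ceiling(points, z_ground=0.05, z_ceiling=2.3):
    """Keep only points between the ground and ceiling planes.

    points is an (N, 3) array in a frame whose z axis points up; the two
    thresholds (in metres) stand in for fitted plane heights.
    """
    z = points[:, 2]
    return points[(z > z_ground) & (z < z_ceiling)]

cloud = np.array([[1.0, 0.0, 0.01],    # ground point  -> removed
                  [1.2, 0.3, 1.10],    # candidate human point -> kept
                  [0.8, 0.1, 2.50]])   # ceiling point -> removed
candidates = remove_ground_and_ceiling(cloud)
```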
On the Use of a Low-Cost Thermal Sensor to Improve Kinect People Detection in a Mobile Robot
Susperregi, Loreto; Sierra, Basilio; Castrillón, Modesto; Lorenzo, Javier; Martínez-Otzeta, Jose María; Lazkano, Elena
2013-01-01
Detecting people is a key capability for robots that operate in populated environments. In this paper, we have adopted a hierarchical approach that combines classifiers created using supervised learning in order to identify whether a person is in the view-scope of the robot or not. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform. The set of sensors is set up by combining the rich data source offered by a Kinect sensor, which provides vision and depth at low cost, with a thermopile array sensor. Experimental results carried out with a mobile platform on a manufacturing shop floor and in a science museum have shown that the false positive rate is drastically reduced compared to using any single cue. The performance of our algorithm improves on other well-known approaches, such as C4 and histogram of oriented gradients (HOG). PMID:24172285
Object Detection Applied to Indoor Environments for Mobile Robot Navigation
Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón
2016-01-01
To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects in typical human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is the Support Vector Machine (SVM), and RGB and depth images are used as input. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of the two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained on the proposed problem without altering the environment to perform the tests. PMID:27483264
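A minimal scikit-learn sketch of the classification stage, training an SVM on precomputed per-object feature vectors (e.g., shape descriptors or bag-of-words histograms); the synthetic data, labels, and kernel settings are illustrative assumptions, and feature extraction itself is omitted.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for per-object feature vectors (e.g. bag-of-words
# histograms or geometric shape descriptors) and their class labels.
rng = np.random.default_rng(1)
X_train = rng.random((40, 16))
y_train = rng.integers(0, 3, size=40)      # e.g. 0=chair, 1=door, 2=screen

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

X_query = rng.random((1, 16))              # features of a detected region
print(clf.predict(X_query))
```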
Robot Trajectories Comparison: A Statistical Approach
Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph, which helps to better understand the obtained results, is provided. As an example, the proposed method has been applied to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
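One simple way to compare two planners on a single trajectory feature with a non-parametric test is sketched below; the path-length feature, the synthetic numbers, and the choice of the Mann-Whitney U test are illustrative assumptions and do not reproduce the paper's automatic feature selection or polygraph visualization.

```python
import math
from scipy.stats import mannwhitneyu

def path_length(traj):
    """Length of a trajectory given as a list of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

# Synthetic path lengths (one value per run) for two motion planners
# executed on the same set of tasks.
lengths_fm2 = [12.1, 11.8, 12.4, 11.9, 12.0]
lengths_wavefront = [13.0, 12.7, 13.4, 12.9, 13.1]

stat, p_value = mannwhitneyu(lengths_fm2, lengths_wavefront)
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```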
Tandem robot control system and method for controlling mobile robots in tandem
Hayward, David R.; Buttz, James H.; Shirey, David L.
2002-01-01
A control system for controlling mobile robots provides a way to control mobile robots, connected in tandem with coupling devices, to navigate across difficult terrain or in closed spaces. The mobile robots can be controlled cooperatively as a coupled system in linked mode or controlled individually as separate robots.
Human-Centered Design and Evaluation of Haptic Cueing for Teleoperation of Multiple Mobile Robots.
Son, Hyoung Il; Franchi, Antonio; Chuang, Lewis L; Kim, Junsuk; Bulthoff, Heinrich H; Giordano, Paolo Robuffo
2013-04-01
In this paper, we investigate the effect of haptic cueing on a human operator's performance in the field of bilateral teleoperation of multiple mobile robots, particularly multiple unmanned aerial vehicles (UAVs). Two aspects of human performance are deemed important in this area, namely, the maneuverability of mobile robots and the perceptual sensitivity of the remote environment. We introduce metrics that allow us to address these aspects in two psychophysical studies, which are reported here. Three fundamental haptic cue types were evaluated. The Force cue conveys information on the proximity of the commanded trajectory to obstacles in the remote environment. The Velocity cue represents the mismatch between the commanded and actual velocities of the UAVs and can implicitly provide a rich amount of information regarding the actual behavior of the UAVs. Finally, the Velocity+Force cue is a linear combination of the two. Our experimental results show that, while maneuverability is best supported by the Force cue feedback, perceptual sensitivity is best served by the Velocity cue feedback. In addition, we show that large gains in the haptic feedback do not always guarantee an enhancement in the teleoperator's performance.
A soft robot capable of 2D mobility and self-sensing for obstacle detection and avoidance
NASA Astrophysics Data System (ADS)
Qin, Lei; Tang, Yucheng; Gupta, Ujjaval; Zhu, Jian
2018-04-01
Soft robots have shown great potential for surveillance applications due to their interesting attributes including inherent flexibility, extreme adaptability, and excellent ability to move in confined spaces. High mobility combined with the sensing systems that can detect obstacles plays a significant role in performing surveillance tasks. Extensive studies have been conducted on movement mechanisms of traditional hard-bodied robots to increase their mobility. However, there are limited efforts in the literature to explore the mobility of soft robots. In addition, little attempt has been made to study the obstacle-detection capability of a soft mobile robot. In this paper, we develop a soft mobile robot capable of high mobility and self-sensing for obstacle detection and avoidance. This robot, consisting of a dielectric elastomer actuator as the robot body and four electroadhesion actuators as the robot feet, can generate 2D mobility, i.e. translations and turning in a 2D plane, by programming the actuation sequence of the robot body and feet. Furthermore, we develop a self-sensing method which models the robot body as a deformable capacitor. By measuring the real-time capacitance of the robot body, the robot can detect an obstacle when the peak capacitance drops suddenly. This sensing method utilizes the robot body itself instead of external sensors to achieve detection of obstacles, which greatly reduces the weight and complexity of the robot system. The 2D mobility and self-sensing capability ensure the success of obstacle detection and avoidance, which paves the way for the development of lightweight and intelligent soft mobile robots.
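The capacitance-drop detection idea can be sketched as a comparison against a short moving baseline; the window length, drop ratio, and example readings are illustrative assumptions, not the paper's calibration.

```python
from collections import deque

class ObstacleDetector:
    """Flag an obstacle when the capacitance drops sharply below its
    recent baseline (moving average over a short window)."""

    def __init__(self, window=10, drop_ratio=0.15):
        self.history = deque(maxlen=window)
        self.drop_ratio = drop_ratio

    def update(self, capacitance_pf):
        hit = False
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            hit = capacitance_pf < (1.0 - self.drop_ratio) * baseline
        self.history.append(capacitance_pf)
        return hit

det = ObstacleDetector()
readings = [120.0] * 12 + [95.0]           # sudden drop on approach
flags = [det.update(c) for c in readings]  # last entry -> True
```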
Coordinated Control Of Mobile Robotic Manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1995-01-01
Computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Applicable to variety of mobile robotic manipulators, including robots that move along tracks (typically, painting and welding robots), robots mounted on gantries and capable of moving in all three dimensions, wheeled robots, and compound robots (consisting of robots mounted on other robots). Theoretical basis discussed in several prior articles in NASA Tech Briefs, including "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes With Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).
Soft computing-based terrain visual sensing and data fusion for unmanned ground robotic systems
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2006-05-01
In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
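A one-dimensional Kalman-style aggregation of per-sub-region traversability scores, as a simplified stand-in for the fusion step mentioned above; the scores and variances are illustrative assumptions.

```python
def kalman_fuse(estimates):
    """Fuse scalar traversability estimates (value, variance) sequentially.

    Each sub-terrain classifier contributes a score in [0, 1] with its own
    uncertainty; the standard scalar Kalman update weights them by variance.
    """
    x, p = estimates[0]
    for z, r in estimates[1:]:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return x, p

# Rule-based, neural, and fuzzy classifiers scoring the same sub-region:
score, var = kalman_fuse([(0.7, 0.04), (0.6, 0.02), (0.8, 0.05)])
```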
NASA Astrophysics Data System (ADS)
Patkin, M. L.; Rogachev, G. N.
2018-02-01
A method for constructing a multi-agent control system for mobile robots based on reinforcement learning with deep neural networks is considered. The control system is synthesized using reinforcement learning and a modified Actor-Critic method, in which the Actor module is divided into an Action Actor and a Communication Actor in order to simultaneously control the mobile robots and communicate with partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The functions of the Actors and the Critic are approximated by deep neural networks. The Critic's value function is trained using the TD-error method and the Actor's function using DDPG. The Communication Actor's neural network is trained through gradients received from partner agents. An environment featuring cooperative multi-agent interaction was developed, and a computer simulation of the method applied to the control problem of two robots pursuing two goals was carried out.
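To illustrate the Action Actor / Communication Actor split at the interface level only, the sketch below uses simple linear maps as stand-ins for the deep networks: each agent maps its observation plus the received message to an action and to a message for its partner. All shapes, names and the random weights are assumptions for illustration, not the architecture or training procedure of the paper.

    import numpy as np

    # Illustrative stand-in for the Action/Communication Actor split: linear
    # layers replace the paper's deep networks; shapes and weights are invented.
    rng = np.random.default_rng(0)
    OBS, MSG, ACT = 6, 2, 2

    class Agent:
        def __init__(self):
            self.w_act = rng.normal(size=(ACT, OBS + MSG))   # "Action Actor"
            self.w_msg = rng.normal(size=(MSG, OBS + MSG))   # "Communication Actor"

        def step(self, obs, incoming_msg):
            x = np.concatenate([obs, incoming_msg])          # observation + message
            return np.tanh(self.w_act @ x), np.tanh(self.w_msg @ x)

    a, b = Agent(), Agent()
    msg_ab = msg_ba = np.zeros(MSG)
    for t in range(3):                                       # toy rollout of two robots
        act_a, msg_ab = a.step(rng.normal(size=OBS), msg_ba)
        act_b, msg_ba = b.step(rng.normal(size=OBS), msg_ab)
        print(t, act_a.round(2), act_b.round(2))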
Vision based object pose estimation for mobile robots
NASA Technical Reports Server (NTRS)
Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry
1994-01-01
Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints are derived from the typical pose of man-made signs, such as the sign standing vertical and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments, and under certain marker orientations the orientation estimate has proven accurate to within 2 degrees and the distance estimate to within 0.3 meters.
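The known-size constraint can be illustrated with the standard pinhole-camera relation: for a marker of known physical height, range scales as focal length times real height over apparent image height. The focal length, marker size and pixel measurements below are illustrative assumptions, not the paper's calibration.

    import math

    # Pinhole-style range and bearing from a detected marker of known size.
    # Focal length, marker height and pixel measurements are illustrative.

    def marker_range(focal_px, marker_height_m, pixel_height):
        return focal_px * marker_height_m / pixel_height

    def marker_bearing(focal_px, cx, marker_center_x):
        return math.atan2(marker_center_x - cx, focal_px)

    d = marker_range(focal_px=800.0, marker_height_m=0.30, pixel_height=60.0)
    b = marker_bearing(focal_px=800.0, cx=320.0, marker_center_x=380.0)
    print(round(d, 2), "m,", round(math.degrees(b), 1), "deg")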
Adaptive Control Parameters for Dispersal of Multi-Agent Mobile Ad Hoc Network (MANET) Swarms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
A mobile ad hoc network is a collection of independent nodes that communicate wirelessly with one another. This paper investigates nodes that are swarm robots with communications and sensing capabilities. Each robot in the swarm may operate in a distributed and decentralized manner to achieve some goal. This paper presents a novel approach to dynamically adapting control parameters to achieve mesh configuration stability. The presented approach to robot interaction is based on spring force laws (attraction and repulsion laws) to create near-optimal mesh like configurations. In prior work, we presented the extended virtual spring mesh (EVSM) algorithm for the dispersion of robot swarms. This paper extends the EVSM framework by providing the first known study on the effects of adaptive versus static control parameters on robot swarm stability. The EVSM algorithm provides the following novelties: 1) improved performance with adaptive control parameters and 2) accelerated convergence with high formation effectiveness. Simulation results show that 120 robots reach convergence using adaptive control parameters more than twice as fast as with static control parameters in a multiple obstacle environment.
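The attraction/repulsion spring law can be sketched as follows: robots repel inside a natural spring length and attract beyond it. The stiffness and natural length here are illustrative assumptions, not the EVSM control parameters studied in the paper.

    import numpy as np

    # Virtual-spring interaction between neighbouring swarm robots: repulsive
    # inside the natural length, attractive beyond it. Stiffness and natural
    # length are illustrative assumptions.

    def spring_force(p_i, p_j, natural_len=2.0, stiffness=1.0):
        diff = p_j - p_i
        dist = np.linalg.norm(diff)
        if dist < 1e-9:
            return np.zeros_like(diff)
        return stiffness * (dist - natural_len) * (diff / dist)

    p = np.array([0.0, 0.0])
    for q in [np.array([1.0, 0.0]), np.array([3.0, 0.0])]:
        print(q, spring_force(p, q))   # repulsive then attractive along x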
Probabilistic self-localisation on a qualitative map based on occlusions
NASA Astrophysics Data System (ADS)
Santos, Paulo E.; Martins, Murilo F.; Fenelon, Valquiria; Cozman, Fabio G.; Dee, Hannah M.
2016-09-01
Spatial knowledge plays an essential role in human reasoning, permitting tasks such as locating objects in the world (including oneself), reasoning about everyday actions and describing perceptual information. This is also the case in the field of mobile robotics, where one of the most basic (and essential) tasks is the autonomous determination of the pose of a robot with respect to a map, given its perception of the environment. This is the problem of robot self-localisation (or simply the localisation problem). This paper presents a probabilistic algorithm for robot self-localisation that is based on a topological map constructed from the observation of spatial occlusion. Distinct locations on the map are defined by means of a classical formalism for qualitative spatial reasoning, whose base definitions are closer to the human categorisation of space than traditional, numerical, localisation procedures. The approach herein proposed was systematically evaluated through experiments using a mobile robot equipped with an RGB-D sensor. The results obtained show that the localisation algorithm is successful in locating the robot in qualitatively distinct regions.
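A probabilistic localiser over a topological map can be sketched as a discrete Bayes filter over qualitative regions. The regions, transition model and occlusion-observation likelihoods below are invented for illustration and are not taken from the paper.

    # Discrete Bayes filter over qualitative map regions (illustrative only).

    regions = ["A", "B", "C"]
    transition = {"A": {"A": 0.7, "B": 0.3, "C": 0.0},
                  "B": {"A": 0.2, "B": 0.6, "C": 0.2},
                  "C": {"A": 0.0, "B": 0.3, "C": 0.7}}
    likelihood = {"occlusion_seen": {"A": 0.1, "B": 0.7, "C": 0.4}}

    def bayes_update(belief, observation):
        predicted = {r: sum(belief[s] * transition[s][r] for s in regions)
                     for r in regions}
        unnorm = {r: likelihood[observation][r] * predicted[r] for r in regions}
        z = sum(unnorm.values())
        return {r: v / z for r, v in unnorm.items()}

    belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
    belief = bayes_update(belief, "occlusion_seen")
    print({r: round(p, 3) for r, p in belief.items()})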
Mobile robot exploration and navigation of indoor spaces using sonar and vision
NASA Technical Reports Server (NTRS)
Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth
1994-01-01
Integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching regions found with the a priori map, the robot became lost very quickly. If the low level sensing of the world is not working, then high level reasoning or map making will be unsuccessful.
Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context
2016-12-01
Case Study: Our approach to coordination was initially motivated and developed in RoboCup soccer games. In fact, it was first deployed on a team of ... features a rather accurate model of the behavior and capabilities of the humanoid robot in the field. In the soccer case study, our goal is to ... on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment.
Filtering Data Based on Human-Inspired Forgetting.
Freedman, S T; Adams, J A
2011-12-01
Robots are frequently presented with vast arrays of diverse data. Unfortunately, perfect memory and recall provide a mixed blessing. While flawless recollection of episodic data allows increased reasoning, photographic memory can hinder a robot's ability to operate in real-time dynamic environments. Human-inspired forgetting methods may enable robotic systems to rid themselves of outdated, irrelevant, and erroneous data. This paper presents the use of human-inspired forgetting to act as a filter, removing unnecessary, erroneous, and out-of-date information. The novel ActSimple forgetting algorithm has been developed specifically to provide effective forgetting capabilities to robotic systems. This paper presents the ActSimple algorithm and how it was optimized and tested in a WiFi signal strength estimation task. The results generated by real-world testing suggest that human-inspired forgetting is an effective means of improving the ability of mobile robots to move and operate within complex and dynamic environments.
A design strategy for autonomous systems
NASA Technical Reports Server (NTRS)
Forster, Pete
1989-01-01
Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified; namely, handling multiple and variable data sources containing overlapping information and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is drawn from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.
NASA Astrophysics Data System (ADS)
Laird, John E.
2009-05-01
Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should significantly improve Soar's abilities for control of robots. These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling of prior events and situations as well as facts about the world. Reinforcement learning gives the system the ability to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic and visual representations that are critical to support spatial reasoning. We speculate on the future of unmanned systems and the need for cognitive robotics to support dynamic instruction and taskability.
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
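The skeleton below sketches one Rao-Blackwellized particle filter step for landmark-based SLAM: each particle carries a robot pose plus independent Gaussian landmark estimates updated by small EKFs, and particles are weighted and resampled by measurement likelihood. The motion and measurement models, noise values and data layout are illustrative assumptions, not the multi-robot system described in the paper.

    import numpy as np

    # Skeleton of a single-robot RBPF step for landmark SLAM (illustrative).
    rng = np.random.default_rng(1)

    class Particle:
        def __init__(self, pose):
            self.pose = np.array(pose, float)      # (x, y)
            self.landmarks = {}                    # id -> (mean, cov)
            self.weight = 1.0

    def rbpf_step(particles, odom, meas, meas_cov=np.eye(2) * 0.05):
        for p in particles:
            p.pose += odom + rng.normal(0, 0.02, 2)          # sample motion
            for lid, z in meas.items():                      # z: relative (dx, dy)
                if lid not in p.landmarks:
                    p.landmarks[lid] = (p.pose + z, meas_cov.copy())
                    continue
                mean, cov = p.landmarks[lid]
                innov = (p.pose + z) - mean                  # EKF update (H = I)
                s = cov + meas_cov
                k = cov @ np.linalg.inv(s)
                p.landmarks[lid] = (mean + k @ innov, (np.eye(2) - k) @ cov)
                p.weight *= np.exp(-0.5 * innov @ np.linalg.inv(s) @ innov)
        w = np.array([p.weight for p in particles]); w /= w.sum()
        idx = rng.choice(len(particles), len(particles), p=w)  # resample
        return [particles[i] for i in idx]

    parts = [Particle([0, 0]) for _ in range(50)]
    parts = rbpf_step(parts, odom=np.array([1.0, 0.0]),
                      meas={"L1": np.array([0.5, 0.2])})
    print(np.mean([p.pose for p in parts], axis=0).round(2))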
The Performance Analysis of AN Indoor Mobile Mapping System with Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.
2015-08-01
Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber city modelling, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry, and multi-sensor integrated mapping technology has been clearly established to achieve this integration. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, especially to reduce costs and enhance the efficiency of data collection and the validation of direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor and a light detection and ranging (LIDAR) sensor mounted on a robot. In summary, this paper designs the payload of an indoor MMS to generate floor plans. The first session concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by the two sensors, the Kinect RGB-D sensor and the LIDAR on the robot. Finally, the generated floor plan is compared with the known plan for both validation and verification.
NASA Astrophysics Data System (ADS)
Lei, Jingtao; Yu, Huangying; Wang, Tianmiao
2016-01-01
The body of a quadruped robot is generally developed as a rigid structure, and the mobility of the robot depends on the mechanical properties of the body mechanism. It is difficult for a quadruped robot with a rigid body to achieve good mobility when walking or running in unstructured environments. A bionic flexible body mechanism for a quadruped robot is proposed, composed of one bionic spine and four pneumatic artificial muscles (PAMs). This body imitates the kinematical structure and physical properties of four-legged creatures and has the characteristics of changeable stiffness, light weight, flexibility and improved bionics. The kinematics of body bending is derived, the coordinated movement between the flexible body and the legs is analyzed, and the relationship between the body bending angle and the PAM length is obtained. The dynamics of body bending is derived by the floating coordinate method and the Lagrangian method, and the driving force of the PAMs is determined. A body-bending experiment is conducted, and the dynamic bending characteristic of the bionic flexible body is evaluated. Experimental results show that the bending angle of the bionic flexible body can reach 18°. The proposed body mechanism is flexible, achieves bending by changing the gas pressure of the PAMs, and, through coordinated movement of the body and legs, can produce a spinning gait that improves the mobility of the quadruped robot.
Task automation in a successful industrial telerobot
NASA Technical Reports Server (NTRS)
Spelt, Philip F.; Jones, Sammy L.
1994-01-01
In this paper, we discuss cooperative work by Oak Ridge National Laboratory and Remotec, Inc., to automate components of the operator's workload using Remotec's Andros telerobot, thereby providing an enhanced user interface which can be retrofit to existing fielded units as well as being incorporated into new production units. Remotec's Andros robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as by the armed forces and numerous law enforcement agencies. The automation of task components, as well as the video graphics display of the robot's position in the environment, will enhance all tasks performed by these users, as well as enabling performance in terrain where the robots cannot presently perform due to lack of knowledge about, for instance, the degree of tilt of the robot. Enhanced performance of a successful industrial mobile robot leads to increased safety and efficiency of performance in hazardous environments. The addition of these capabilities will greatly enhance the utility of the robot, as well as its marketability.
NASA Technical Reports Server (NTRS)
Chen, Alexander Y.
1990-01-01
Scientific Research Associates advanced robotic system (SRAARS) is an intelligent robotic system which has autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It controls mainly sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. By consecutive verifying and planning procedures, hypothesis-and-test routines and a learning-by-analogy paradigm, the system autonomously builds up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis and object manipulation. The intelligence of SRAARS presents a valuable technical advantage to implement robotic systems for space exploration and space station operations.
Towards Assessing the Human Trajectory Planning Horizon
Carton, Daniel; Nitsch, Verena; Meinzer, Dominik; Wollherr, Dirk
2016-01-01
Mobile robots are envisioned to cooperate closely with humans and to integrate seamlessly into a shared environment. For locomotion, these environments resemble traversable areas that are shared between multiple agents such as humans and robots. The seamless integration of mobile robots into these environments requires accurate predictions of human locomotion. This work considers optimal control and model predictive control approaches for accurate trajectory prediction and proposes to integrate aspects of human behavior to improve their performance. Recently developed models are not able to accurately reproduce trajectories that result from sudden avoidance maneuvers. In particular, human locomotion behavior when handling disturbances from other agents poses a problem. The goal of this work is to investigate whether humans alter their trajectory planning horizon in order to resolve abruptly emerging collision situations. By modeling humans as model predictive controllers, the influence of the planning horizon is investigated in simulations. Based on these results, an experiment is designed to identify whether humans initiate a change in their locomotion planning behavior while moving in a complex environment. The results support the hypothesis that humans employ a shorter planning horizon to avoid collisions that are triggered by unexpected disturbances. Observations presented in this work are expected to further improve the generalizability and accuracy of prediction methods based on dynamic models. PMID:27936015
Biomedical Applications of Untethered Mobile Milli/Microrobots
Sitti, Metin; Ceylan, Hakan; Hu, Wenqi; Giltinan, Joshua; Turan, Mehmet; Yim, Sehyuk; Diller, Eric
2016-01-01
Untethered robots miniaturized to the length scale of millimeter and below attract growing attention for the prospect of transforming many aspects of health care and bioengineering. As the robot size goes down to the order of a single cell, previously inaccessible body sites would become available for high-resolution in situ and in vivo manipulations. This unprecedented direct access would enable an extensive range of minimally invasive medical operations. Here, we provide a comprehensive review of the current advances in biomedical untethered mobile milli/microrobots. We put a special emphasis on the potential impacts of biomedical microrobots in the near future. Finally, we discuss the existing challenges and emerging concepts associated with designing such a miniaturized robot for operation inside a biological environment for biomedical applications. PMID:27746484
Cooperative Autonomous Robots for Reconnaissance
2009-03-06
Collaborating mobile robots equipped with WiFi transceivers are configured as a mobile ad-hoc network. Algorithms are developed to take advantage of the distributed processing
Evolving mobile robots able to display collective behaviors.
Baldassarre, Gianluca; Nolfi, Stefano; Parisi, Domenico
2003-01-01
We present a set of experiments in which simulated robots are evolved for the ability to aggregate and move together toward a light target. By developing and using quantitative indexes that capture the structural properties of the emerged formations, we show that evolved individuals display interesting behavioral patterns in which groups of robots act as a single unit. Moreover, evolved groups of robots with identical controllers display primitive forms of situated specialization and play different behavioral functions within the group according to the circumstances. Overall, the results presented in the article demonstrate that evolutionary techniques, by exploiting the self-organizing behavioral properties that emerge from the interactions between the robots and between the robots and the environment, are a powerful method for synthesizing collective behavior.
NASA Technical Reports Server (NTRS)
Welch, Richard V.; Edmonds, Gary O.
1994-01-01
The use of robotics in situations involving hazardous materials can significantly reduce the risk of human injuries. The Emergency Response Robotics Project, which began in October 1990 at the Jet Propulsion Laboratory, is developing a teleoperated mobile robot allowing HAZMAT (hazardous materials) teams to remotely respond to incidents involving hazardous materials. The current robot, called HAZBOT III, can assist in locating, characterizing, identifying, and mitigating hazardous material incidents without risking entry team personnel. The active involvement of the JPL Fire Department HAZMAT team has been vital in developing a robotic system which enables them to perform remote reconnaissance of a HAZMAT incident site. This paper provides a brief review of the history of the project, discusses the current system in detail, and presents other areas in which robotics can be applied to remove people from hazardous environments and operations.
The magic glove: a gesture-based remote controller for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark
2012-01-01
This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful in moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system, worn by the operator as a glove, consists of a microcontroller and sensors and is capable of recognizing hand signals, which are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was successfully demonstrated first in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
Behavior Selection of Mobile Robot Based on Integration of Multimodal Information
NASA Astrophysics Data System (ADS)
Chen, Bin; Kaneko, Masahide
Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated from the audiovisual environment. In order to realize this behavior, a general method is to calculate saliency maps that represent how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities, namely audio information, visual information and the robot's motor status, are considered, whereas previous studies have not considered all of them. First, we introduce a 2-D density map, on which the value denotes how much attention the robot pays to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Second, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot can direct its attention to the locations where the integrate-and-fire neurons are fired. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that it is possible for robots to acquire the visual information related to their behaviors by using the attention model that considers motion statuses. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
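One way to picture the integrate-and-fire fusion step is the sketch below, where audio and visual saliency on a coarse spatial grid are weighted by an attention density map and accumulated until a neuron crosses a firing threshold. The grid size, weights, leak and threshold are assumptions for illustration, not the paper's parameter values.

    import numpy as np

    # Illustrative integrate-and-fire fusion of audio and visual saliency.

    def attend(audio_sal, visual_sal, attention_density,
               threshold=1.0, leak=0.9, steps=10):
        potential = np.zeros_like(audio_sal)
        for _ in range(steps):
            drive = attention_density * (0.5 * audio_sal + 0.5 * visual_sal)
            potential = leak * potential + drive
            fired = potential >= threshold           # neurons that fire
            if fired.any():
                return np.argwhere(fired)            # attended locations
        return np.empty((0, 2), dtype=int)

    audio = np.array([[0.1, 0.8], [0.2, 0.1]])
    visual = np.array([[0.2, 0.9], [0.1, 0.3]])
    density = np.array([[0.5, 1.0], [0.5, 0.5]])     # motion-dependent density map
    print(attend(audio, visual, density))            # -> location (0, 1)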
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.
Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning
2018-03-16
A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in its application environment without external orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method achieves autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot performing autonomous navigation and positioning in traditional applications and can be extended to other similar multi-link robots.
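A motion-constraint pseudo-measurement can be folded into an EKF as in the sketch below, which pulls one velocity component toward zero during a constrained gait phase. The state layout, measurement Jacobian H and noise values are assumptions for illustration, not the snake robot's actual filter design.

    import numpy as np

    # Illustrative EKF pseudo-measurement update enforcing a motion constraint.

    def constraint_update(x, P, H, R):
        """Apply pseudo-measurement z = 0 with Jacobian H and noise R."""
        z_pred = H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K @ (np.zeros(H.shape[0]) - z_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    # State: [px, py, vx, vy]; constrain lateral velocity vy to zero.
    x = np.array([1.0, 2.0, 0.4, 0.15])
    P = np.diag([0.1, 0.1, 0.05, 0.05])
    H = np.array([[0.0, 0.0, 0.0, 1.0]])
    x, P = constraint_update(x, P, H, R=np.array([[1e-4]]))
    print(x.round(3))   # vy pulled toward zero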
NASA Astrophysics Data System (ADS)
Kortenkamp, David; Huber, Marcus J.; Congdon, Clare B.; Huffman, Scott B.; Bidlack, Clint R.; Cohen, Charles J.; Koss, Frank V.; Raschke, Ulrich; Weymouth, Terry E.
1993-05-01
This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m X 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.
Adapting an Ant Colony Metaphor for Multi-Robot Chemical Plume Tracing
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Li, Fei; Zeng, Ming
2012-01-01
We consider chemical plume tracing (CPT) in time-varying airflow environments using multiple mobile robots. The purpose of CPT is to approach a gas source with a previously unknown location in a given area. Therefore, CPT can be considered a dynamic optimization problem in continuous domains. The traditional ant colony optimization (ACO) algorithm has been successfully used for combinatorial optimization problems in discrete domains. To adapt the ant colony metaphor to the multi-robot CPT problem, the two-dimensional continuous search area is discretized into grids and the virtual pheromone is updated according to both the gas concentration and wind information. To prevent the adapted ACO algorithm from being prematurely trapped in a local optimum, the upwind surge behavior is adopted by the robots with relatively higher gas concentration in order to explore more areas. The spiral surge (SS) algorithm is also examined for comparison. Experimental results using multiple real robots in two indoor naturally ventilated airflow environments show that the proposed CPT method performs better than the SS algorithm. The simulation results for large-scale advection-diffusion plume environments show that the proposed method could also work in outdoor meandering plume environments. PMID:22666056
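A pheromone update that combines gas concentration with an upwind bias on the discretized grid could look like the sketch below; the evaporation rate, deposit gain and wind weighting are assumptions for illustration, not the paper's tuned values.

    import numpy as np

    # Illustrative pheromone update for the grid-discretized CPT search area:
    # cells are reinforced by measured gas concentration and an upwind bias,
    # then evaporate.

    def update_pheromone(pher, conc, wind_dir, rho=0.1, q=1.0, wind_gain=0.5):
        pher *= (1.0 - rho)                       # evaporation
        pher += q * conc                          # concentration deposit
        upwind = np.roll(conc, shift=tuple(-np.asarray(wind_dir)), axis=(0, 1))
        pher += wind_gain * upwind                # bias toward upwind cells
        return pher

    pher = np.zeros((4, 4))
    conc = np.zeros((4, 4)); conc[2, 2] = 1.0     # gas detected in one cell
    pher = update_pheromone(pher, conc, wind_dir=(0, 1))  # wind blowing +y
    print(pher.round(2))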
Buttz, James H.; Shirey, David L.; Hayward, David R.
2003-01-01
A robotic vehicle system for terrain navigation mobility provides a way to climb stairs, cross crevices, and navigate across difficult terrain by coupling two or more mobile robots with a coupling device and controlling the robots cooperatively in tandem.
Polar Seismic TETwalker: Integrating Engineering Teaching and Research
NASA Astrophysics Data System (ADS)
Gifford, C. M.; Ruiz, I.; Carmichael, B. L.; Wade, U. B.; Agah, A.
2007-12-01
Based on the TETwalker robot platform at NASA/Goddard Space Flight Center, the Center for Remote Sensing of Ice Sheets (CReSIS) has begun work on designing and modeling the integration of seismic surveying equipment into the TETwalker robot architecture for use in polar environments. Employing multiple Seismic TETwalker robots will allow gathering of polar seismic data in previously inaccessible or unexplored terrains, as well as help significantly reduce human involvement in such harsh environments. NASA's TETwalker mobile robot uses a unique form of mobility to topple across the surface and over obstacles. This robot therefore does not suffer the fate of other wheeled and tracked robots if tipped over. It is composed of extending struts and nodes, forming a tetrahedral shape which can be strategically adjusted to change the robot's center of gravity for toppling. Of the many platforms the TETwalker architecture can form, the 4-TETwalker robot (consisting of four ground nodes, a center payload node, and interconnecting struts) has been the focus of current research. The center node has been chosen as the geophone deployment medium, designed in such a way to allow geophone insertion using any face of the robot's structure. As the robot comes to rest at the deployment location, one of its faces will rest on the surface. No matter which side it is resting on, a geophone spike will be perpendicular to its face and an extending strut will be vertical for pushing the geophone into the ground. Lengthening and shortening struts allow the deployment node to precisely place the geophone into the ground, as well as vertically orient the geophones for proper data acquisition on non-flat surfaces. Power source integration has been investigated, incorporating possible combinations of solar, wind, and vibration power devices onboard the robot models for long-term survival in a polar environment. Designs have also been modeled for an alternate center node sensor package (e.g., broadband seismometer) and other structures of the node-and-strut TETwalker robot architecture. It is planned to take the design models and construct a physical prototype for future testing in Greenland and Antarctica. This work involved three undergraduate students from underrepresented groups as part of the CReSIS Summer REU program, aimed at involving these groups in science and engineering research.
Ahmad, Faisul Arif; Ramli, Abd Rahman; Samsudin, Khairulmizam; Hashim, Shaiful Jahari
2014-01-01
Deploying large numbers of mobile robots which can interact with each other produces swarm intelligent behavior. However, mobile robots normally run on a finite energy resource supplied by a finite battery. This limitation requires human intervention to recharge the batteries. Sharing information among the mobile robots is one potential way to overcome the limitations of previous recharging systems. A new approach is proposed based on an integrated intelligent system inspired by the foraging of honeybees and applied to a multi-mobile-robot scenario. This integrated approach caters for both working and foraging stages for known/unknown power station locations. A swarm of mobile robots inspired by honeybees is simulated to explore and identify power stations for battery recharging. The mobile robots share the location information of the power stations with each other. The results show that mobile robots consume less energy and less time when they cooperate with each other during the foraging process. Optimizing the foraging behavior allows the mobile robots to spend more time doing real work.
Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.
Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G
2011-10-01
In order to incorporate the uncertainty and impreciseness present in real-world event-driven asynchronous systems, fuzzy discrete event systems (DESs) (FDESs) have been proposed as an extension to crisp DESs. In this paper, first, we propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and the navigations have been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent the sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real time. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.
A Unified Approach to Motion Control of Mobile Robots
NASA Technical Reports Server (NTRS)
Seraji, H.
1994-01-01
This paper presents a simple on-line approach for motion control of mobile robots made up of a manipulator arm mounted on a mobile base. The proposed approach is equally applicable to nonholonomic mobile robots, such as rover-mounted manipulators and to holonomic mobile robots such as tracked robots or compound manipulators. The computational efficiency of the proposed control scheme makes it particularly suitable for real-time implementation.
Embodying a cognitive model in a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Lyons, Damian; Lonsdale, Deryle
2006-10-01
The ADAPT project is a collaboration of researchers in robotics, linguistics and artificial intelligence at three universities to create a cognitive architecture specifically designed to be embodied in a mobile robot. There are major respects in which existing cognitive architectures are inadequate for robot cognition. In particular, they lack support for true concurrency and for active perception. ADAPT addresses these deficiencies by modeling the world as a network of concurrent schemas, and modeling perception as problem solving. Schemas are represented using the RS (Robot Schemas) language, and are activated by spreading activation. RS provides a powerful language for distributed control of concurrent processes. Also, the formal semantics of RS provides the basis for the semantics of ADAPT's use of natural language. We have implemented the RS language in Soar, a mature cognitive architecture originally developed at CMU and used at a number of universities and companies. Soar's subgoaling and learning capabilities enable ADAPT to manage the complexity of its environment and to learn new schemas from experience. We describe the issues faced in developing an embodied cognitive architecture, and our implementation choices.
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
Welding torch trajectory generation for hull joining using autonomous welding mobile robot
NASA Astrophysics Data System (ADS)
Hascoet, J. Y.; Hamilton, K.; Carabin, G.; Rauch, M.; Alonso, M.; Ares, E.
2012-04-01
Shipbuilding processes involve highly dangerous manual welding operations, and welding of ship hulls presents a hazardous environment for workers. This paper describes a new robotic system, developed by the SHIPWELD consortium, that moves autonomously on the hull and automatically executes the required welding processes. Specific focus is placed on the trajectory control of such a system, which forms the basis for the discussion in this paper. The paper includes a description of the robotic hardware design as well as the methodology used to establish the torch trajectory control.
Stereo vision tracking of multiple objects in complex indoor environments.
Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro
2010-01-01
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; it then distinguishes building elements (ceiling, walls, columns and so on) from the remaining items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of the detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.
(abstract) A Mobile Robot for Remote Response to Incidents Involving Hazardous Materials
NASA Technical Reports Server (NTRS)
Welch, Richard V.
1994-01-01
This paper will report the status of the Emergency Response Robotics project, a teleoperated mobile robot system being developed at JPL for use by the JPL Fire Department/HAZMAT Team. The project, which began in 1991, has been focused on developing a robotic vehicle which can be quickly deployed by HAZMAT Team personnel for first entry into an incident site. The primary goals of the system are to gain access to the site, locate and identify the hazard, and aid in its mitigation. The involvement of JPL Fire Department/HAZMAT Team personnel has been critical in guiding the design and evaluation of the system. A unique feature of the current robot, called HAZBOT III, is its special design for operation in combustible environments. This includes the use of all solid state electronics, brushless motors, and internal pressurization. Demonstration and testing of the system with HAZMAT Team personnel has shown that teleoperated robots, such as HAZBOT III, can successfully gain access to incident sites, locating and identifying hazardous material spills. Work is continuing to enable more complex missions through the addition of appropriate sensor technology and enhancement of the operator interface.
Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.
Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta
2010-01-01
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.
Perspectives on mobile robots as tools for child development and pediatric rehabilitation.
Michaud, François; Salter, Tamie; Duquette, Audrey; Laplante, Jean-François
2007-01-01
Mobile robots (i.e., robots capable of translational movements) can be designed to become interesting tools for child development studies and pediatric rehabilitation. In this article, the authors present two of their projects that involve mobile robots interacting with children: One is a spherical robot deployed in a variety of contexts, and the other is mobile robots used as pedagogical tools for children with pervasive developmental disorders. Locomotion capability appears to be key in creating meaningful and sustained interactions with children: Intentional and purposeful motion is an implicit appealing factor in obtaining children's attention and engaging them in interaction and learning. Both of these projects started with robotic objectives but are revealed to be rich sources of interdisciplinary collaborations in the field of assistive technology. This article presents perspectives on how mobile robots can be designed to address the requirements of child-robot interactions and studies. The authors also argue that mobile robot technology can be a useful tool in rehabilitation engineering, reaching its full potential through strong collaborations between roboticists and pediatric specialists.
Learning models of Human-Robot Interaction from small data
Zehfroosh, Ashkan; Kokkoni, Elena; Tanner, Herbert G.; Heinz, Jeffrey
2018-01-01
This paper offers a new approach to learning discrete models for human-robot interaction (HRI) from small data. In the motivating application, HRI is an integral part of a pediatric rehabilitation paradigm that involves a play-based, social environment aiming at improving mobility for infants with mobility impairments. Designing interfaces in this setting is challenging, because in order to harness, and eventually automate, the social interaction between children and robots, a behavioral model capturing the causality between robot actions and child reactions is needed. The paper adopts a Markov decision process (MDP) as such a model, and selects the transition probabilities through an empirical approximation procedure called smoothing. Smoothing has been successfully applied in natural language processing (NLP) and identification where, similarly to the current paradigm, learning from small data sets is crucial. The goal of this paper is two-fold: (i) to describe our application of HRI, and (ii) to provide evidence that supports the application of smoothing for small data sets. PMID:29492408
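Additive (Laplace) smoothing is one common smoothing scheme for estimating transition probabilities from a handful of observed transitions; the sketch below shows it for illustration only and is not necessarily the exact estimator used in the paper. The example states and counts are invented.

    from collections import Counter

    # Additive smoothing of MDP transition probabilities estimated from a
    # small set of observed (state, action, next_state) triples.

    def smoothed_transitions(triples, states, alpha=1.0):
        counts = Counter(triples)
        sa_totals = Counter((s, a) for s, a, _ in triples)
        table = {}
        for (s, a) in sa_totals:
            denom = sa_totals[(s, a)] + alpha * len(states)
            table[(s, a)] = {s2: (counts[(s, a, s2)] + alpha) / denom
                             for s2 in states}
        return table

    data = [("idle", "wave", "engaged"), ("idle", "wave", "engaged"),
            ("idle", "wave", "idle")]
    probs = smoothed_transitions(data, states=["idle", "engaged", "crying"])
    print({k: round(v, 2) for k, v in probs[("idle", "wave")].items()})

With add-one smoothing, unseen next states still receive a small nonzero probability, which is what makes the estimate usable when the data set is small.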
Nonuniform Deployment of Autonomous Agents in Harbor-Like Environments
2014-11-12
Evaluating the Dynamics of Agent-Environment Interaction
2001-05-01
a color sensor in the gripper, a radio transmitter/receiver for communication and data gathering, and an ultrasound/radio triangulation system for ...
Juang, Chia-Feng; Lai, Min-Ge; Zeng, Wan-Ting
2015-09-01
This paper presents a method that allows two wheeled, mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FC) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot to perform OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms for the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.
Autonomous mobile robot research using the HERMIES-III robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Beckerman, M.; Spelt, P.F.
1989-01-01
This paper reports on the status and future directions of the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III is comprised of a battery-powered, omni-directional wheeled platform with a seven degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner and a dual computer system containing up to 128 NCUBE nodes in hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations that necessarily arise from the set-up of an experimental program involving human-scale, multiple autonomous mobile robots. 10 refs., 3 figs.
Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu
2016-01-01
In this paper, the problem of object caging and transporting is considered for multiple mobile robots. With the consideration of minimizing the number of robots and decreasing the rotation of the object, the proper points are calculated and assigned to the multiple mobile robots to allow them to form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object so that the object cannot escape. In order to avoid collision among robots, the parameter of the robots radius is utilized to design the caging formation, and the A⁎ algorithm is used so that mobile robots can move to the proper points. In order to avoid obstacles, the robots and the object are regarded as a rigid body to apply artificial potential field method. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, the simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position, avoiding obstacles. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Maneuverability and mobility in palm-sized legged robots
NASA Astrophysics Data System (ADS)
Kohut, Nicholas J.; Birkmeyer, Paul M.; Peterson, Kevin C.; Fearing, Ronald S.
2012-06-01
Palm sized legged robots show promise for military and civilian applications, including exploration of hazardous or difficult to reach places, search and rescue, espionage, and battlefield reconnaissance. However, they also face many technical obstacles, including- but not limited to- actuator performance, weight constraints, processing power, and power density. This paper presents an overview of several robots from the Biomimetic Millisystems Laboratory at UC Berkeley, including the OctoRoACH, a steerable, running legged robot capable of basic navigation and equipped with a camera and active tail; CLASH, a dynamic climbing robot; and BOLT, a hybrid crawling and flying robot. The paper also discusses, and presents some preliminary solutions to, the technical obstacles listed above plus issues such as robustness to unstructured environments, limited sensing and communication bandwidths, and system integration.
Using mixed-initiative human-robot interaction to bound performance in a search task
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; Douglas A. Few; Devin S. Athey
2008-12-01
Mobile robots are increasingly used in dangerous domains, because they can keep humans out of harm's way. Despite their advantages in hazardous environments, their general acceptance in other less dangerous domains has not been apparent and, even in dangerous environments, robots are often viewed as a "last-possible choice." In order to increase the utility and acceptance of robots in hazardous domains, researchers at the Idaho National Laboratory have both developed and tested novel mixed-initiative solutions that support human-robot interaction. In a recent "dirty-bomb" experiment, participants exhibited different search strategies, making it difficult to determine any performance benefits. This paper presents a method for categorizing the search patterns and shows that the mixed-initiative solution decreased the time to complete the task and decreased the performance spread between participants, independent of prior training and of individual strategies used to accomplish the task.
Development and validation of a low-cost mobile robotics testbed
NASA Astrophysics Data System (ADS)
Johnson, Michael; Hayes, Martin J.
2012-03-01
This paper considers the design, construction and validation of a low-cost experimental robotic testbed, which allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of different mobile robotic applications, for validating theoretical as well as practical research work in the fields of digital control, mobile robotics, graphical programming and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real time using the overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and associated tracking system, such as radial lens distortion and the selection of robot identifier templates, are clearly addressed. The testbed performance is quantified and several experiments involving LEGO Mindstorms NXT and Merlin Systems MiaBot robots are discussed.
Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae
2012-12-12
Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
Extending the Capability of Mars Umbilical Technology Demonstrator
NASA Technical Reports Server (NTRS)
Houshangi, Nasser
2001-01-01
The objective of this project is to expand the capabilities of the Mars Umbilical Technology Demonstrator (MUTD). The MUTD shall provide electrical power and fiber optic data cable connections between two simulated Mars vehicles, 1000 in apart. The wheeled mobile robot Omnibot is used to provide the mobile base for the system. The mate-to umbilical plate is mounted on a Cartesian robot, which is installed on the Omnibot mobile base. It is desirable to provide the operator controlling the Omnibot with the distance and direction to the target. In this report, an approach for finding the position and orientation of the mobile robot using inertial sensors and beacons is investigated. The first phase of the project considered the Omnibot being on a flat surface. To deal with the uneven Mars environment, the orientation as well as the position needs to be controlled. During local positioning, the information received from four ultrasonic sensors installed at the four corners of the mate-to plate is used to identify the position of the mate-to plate and mate the umbilical plates autonomously. The work proposed is the continuation of the principal investigator's research effort as a participant in the 1999 NASA/ASEE Summer Faculty Fellowship Program.
Agarwal, Rahul; Levinson, Adam W; Allaf, Mohamad; Makarov, Danil; Nason, Alex; Su, Li-Ming
2007-11-01
Remote presence is the ability of an individual to project himself from one location to another to see, hear, roam, talk, and interact just as if that individual were actually there. The objective of this study was to evaluate the efficacy and functionality of a novel mobile robotic telementoring system controlled by a portable laptop control station linked via broadband Internet connection. RoboConsultant (RemotePresence-7; InTouch Health, Sunnyvale, CA) was employed for the purpose of intraoperative telementoring and consultation during five laparoscopic and endoscopic urologic procedures. Robot functionality including navigation, zoom capability, examination of external and internal endoscopic camera views, and telestration were evaluated. The robot was controlled by a senior surgeon from various locations ranging from an adjacent operating room to an affiliated hospital 5 miles away. The RoboConsultant performed without connection failure or interruption in each case, allowing the consulting surgeon to immerse himself and navigate within the operating room environment and provide effective communication, mentoring, telestration, and consultation. RoboConsultant provided clear, real-time, and effective telementoring and telestration and allowed the operator to experience remote presence in the operating room environment as a surgical consultant. The portable laptop control station and wireless connectivity allowed the consultant to be mobile and interact with the operating room team from virtually any location. In the future, the remote presence provided by the RoboConsultant may provide useful and effective intraoperative consultation by expert surgeons located in remote sites.
NASA Astrophysics Data System (ADS)
Haq, R.; Prayitno, H.; Dzulkiflih; Sucahyo, I.; Rahmawati, E.
2018-03-01
In this article, the development of a low-cost mobile robot based on a PID controller and odometry for education is presented. The PID controller and odometry are applied for controlling the mobile robot position. Two-dimensional position vectors in a Cartesian coordinate system are given to the robot controller as the initial and final positions. The mobile robot is based on a differential drive and magnetic rotary encoder sensors, which measure the robot position from the number of wheel rotations. The odometry method uses data from the actuator movements to predict the change of position over time. The mobile robot is tested to reach the final position with three different heading angles, 30°, 45° and 60°, by applying various values of the KP, KD and KI constants.
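As a rough illustration of the two ingredients the abstract combines, the hedged Python sketch below pairs differential-drive odometry (pose integration from wheel-encoder counts) with a PID controller driving the heading error to zero. The wheel geometry, encoder resolution, and gains are made-up example values, not the parameters used in the paper.

```python
# Illustrative sketch (not the authors' code): differential-drive odometry plus
# a PID heading controller steering toward a goal point.
import math

class DiffDriveOdometry:
    def __init__(self, wheel_radius, wheel_base, ticks_per_rev):
        self.r, self.L, self.tpr = wheel_radius, wheel_base, ticks_per_rev
        self.x = self.y = self.theta = 0.0

    def update(self, d_ticks_left, d_ticks_right):
        dl = 2 * math.pi * self.r * d_ticks_left / self.tpr    # left wheel travel
        dr = 2 * math.pi * self.r * d_ticks_right / self.tpr   # right wheel travel
        ds, dth = (dl + dr) / 2.0, (dr - dl) / self.L
        self.x += ds * math.cos(self.theta + dth / 2.0)
        self.y += ds * math.sin(self.theta + dth / 2.0)
        self.theta += dth

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Example heading-control step toward a goal point (assumed example values):
odo, pid = DiffDriveOdometry(0.03, 0.15, 360), PID(kp=2.0, ki=0.0, kd=0.1)
goal = (1.0, 1.0)
heading_err = math.atan2(goal[1] - odo.y, goal[0] - odo.x) - odo.theta
heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))  # wrap to [-pi, pi]
turn_rate = pid.step(heading_err, dt=0.02)
```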
Research state-of-the-art of mobile robots in China
NASA Astrophysics Data System (ADS)
Wu, Lin; Zhao, Jinglun; Zhang, Peng; Li, Shiqing
1991-03-01
Several newly developed mobile robots in China are described in this paper, including a master-slave telerobot, a six-legged robot, a biped walking robot, a remote inspection robot, a crawler-type moving robot and an autonomous mobile vehicle. Some relevant technologies are also described.
Prediction of dry ice mass for firefighting robot actuation
NASA Astrophysics Data System (ADS)
Ajala, M. T.; Khan, Md R.; Shafie, A. A.; Salami, MJE; Mohamad Nor, M. I.
2017-11-01
The limitation in the performance of electrically actuated firefighting robots in high-temperature fire environments has led to research on alternative propulsion systems for the mobility of firefighting robots in such environments. In view of the limitations of these electric actuators, we suggested a gas-actuated propulsion system in our earlier study. The propulsion system is made up of a pneumatic motor as the actuator (for the robot) and carbon dioxide gas (self-generated from dry ice) as the power source. To satisfy the consumption requirement (9 cfm) of the motor for efficient actuation of the robot in the fire environment, the volume of carbon dioxide gas, as well as the corresponding mass of the dry ice that will produce the required volume for powering and actuation of the robot, must be determined. This article, therefore, presents the computational analysis to predict the volumetric requirement and the dry ice mass sufficient to power a carbon dioxide gas propelled autonomous firefighting robot in a high-temperature environment. The governing equation of the sublimation of dry ice to carbon dioxide is established. An operating time of 2105.53 s and operating pressures ranging from 137.9 kPa to 482.65 kPa were achieved following the consumption rate of the motor. Thus, 8.85 m³ is computed as the volume requirement of the CAFFR, while the corresponding dry ice mass for the CAFFR actuation ranges from 21.67 kg to 75.83 kg depending on the operating pressure.
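Without reproducing the paper's governing sublimation equation, a simple ideal-gas estimate already relates the three quoted quantities (gas volume, operating pressure, dry-ice mass). The Python sketch below is offered only as a plausibility check under an assumed gas temperature of about 300 K; it is not the authors' model.

```python
# Back-of-the-envelope ideal-gas estimate (an assumption, not the paper's
# sublimation model) of the CO2 mass needed to supply a given gas volume at a
# given operating pressure and temperature.
R = 8.314        # J/(mol K)
M_CO2 = 0.04401  # kg/mol

def co2_mass_kg(volume_m3, pressure_pa, temperature_k):
    n_mol = pressure_pa * volume_m3 / (R * temperature_k)
    return n_mol * M_CO2

# 8.85 m^3 delivered at 137.9 kPa and at 482.65 kPa, assuming roughly 300 K gas:
low  = co2_mass_kg(8.85, 137.9e3, 300.0)
high = co2_mass_kg(8.85, 482.65e3, 300.0)
```

At the assumed temperature this gives roughly 21.5 kg and 75.4 kg for the two quoted pressures, which is close to the 21.67 kg to 75.83 kg range reported above.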
Promoting Diversity in Undergraduate Research in Robotics-Based Seismic
NASA Astrophysics Data System (ADS)
Gifford, C. M.; Arthur, C. L.; Carmichael, B. L.; Webber, G. K.; Agah, A.
2006-12-01
The motivation for this research was to investigate forming evenly-spaced grid patterns with a team of mobile robots for future use in seismic imaging in polar environments. A team of robots was incrementally designed and simulated by incorporating sensors and altering each robot's controller. Challenges, design issues, and efficiency were also addressed. This research project incorporated the efforts of two undergraduate REU students from Elizabeth City State University (ECSU) in North Carolina, and the research staff at the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas. ECSU is a historically black university. Mentoring these two minority students in scientific research, seismic, robotics, and simulation will hopefully encourage them to pursue graduate degrees in science-related or engineering fields. The goals for this 10-week internship during summer 2006 were to educate the students in the fields of seismology, robotics, and virtual prototyping and simulation. Incrementally designing a robot platform for future enhancement and evaluation was central to this research, and involved simulation of several robots working together to change seismic grid shape and spacing. This process gave these undergraduate students experience and knowledge in an actual research project for a real-world application. The two undergraduate students gained valuable research experience and advanced their knowledge of seismic imaging, robotics, sensors, and simulation. They learned that seismic sensors can be used in an array to gather 2D and 3D images of the subsurface. They also learned that robotics can support dangerous or difficult human activities, such as those in a harsh polar environment, by increasing automation, robustness, and precision. Simulating robot designs also gave them experience in programming behaviors for mobile robots. Thus far, one academic paper has resulted from their research. This paper received third place at the 2006 National Technical Association's (NTA) National Conference in Chicago. CReSIS, in conjunction with ECSU, provided these minority students with a well-rounded educational experience in a real-world research project. Their contributions will be used for future projects.
Hybrid Exploration Agent Platform and Sensor Web System
NASA Technical Reports Server (NTRS)
Stoffel, A. William; VanSteenberg, Michael E.
2004-01-01
A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. The Hybrid Exploration Agent Platform and Sensor Web (HEAP-SW), like the ANTS concept, is a Sensor Web concept. The HEAP-SW is conceptually and practically a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low cost, high return solution to a complex problem. All of the technology for initial development and implementation is currently available. The HEAP Sensor Web consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW) and the immobile Data collection and Uplink units (DU). The HEAP-SW as a whole will refer to any group of mobile agents or robots where each robot is a mobile data collection unit that spends most of its time acting in concert with all other robots, DUs in the web, and the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.
Embodied Computation: An Active-Learning Approach to Mobile Robotics Education
ERIC Educational Resources Information Center
Riek, L. D.
2013-01-01
This paper describes a newly designed upper-level undergraduate and graduate course, Autonomous Mobile Robots. The course employs active, cooperative, problem-based learning and is grounded in the fundamental computational problems in mobile robotics defined by Dudek and Jenkin. Students receive a broad survey of robotics through lectures, weekly…
NASA Astrophysics Data System (ADS)
Madokoro, H.; Tsukada, M.; Sato, K.
2013-07-01
This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic scenes, using time-series images obtained from two robots of different sizes and with different movements, demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation of appearance changes of objects.
Enhanced control & sensing for the REMOTEC ANDROS Mk VI robot. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1997-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver
2017-01-01
Combined efforts in the fields of neuroscience, computer science, and biology have allowed the design of biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition to that, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow the applicability of the Neurorobotics Platform to be assessed for robotic tasks as well as for neuroscientific experiments. PMID:28179882
Improving semantic scene understanding using prior information
NASA Astrophysics Data System (ADS)
Laddha, Ankit; Hebert, Martial
2016-05-01
Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
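One common-sense way to realize the combination described above, given as a hedged sketch rather than the authors' method, is to treat the approximate map as a per-class prior and fuse it with the bottom-up classifier output in log-odds space. The array names in the trailing comment are hypothetical.

```python
# Illustrative fusion of per-pixel classifier probabilities with a prior
# probability map in log-odds space (a generic naive-Bayes-style combination,
# not the paper's specific algorithm).
import numpy as np

def fuse_with_prior(p_classifier, p_prior, eps=1e-6):
    """Both inputs: arrays of P(class | .) in (0, 1) with the same shape."""
    logit = lambda p: np.log(np.clip(p, eps, 1 - eps) / np.clip(1 - p, eps, 1 - eps))
    fused_logit = logit(p_classifier) + logit(p_prior)
    return 1.0 / (1.0 + np.exp(-fused_logit))

# e.g. a "road" probability image from the classifier and a coarse prior
# rasterized from an approximate urban map (hypothetical array names):
# fused = fuse_with_prior(road_probs, prior_road_mask * 0.8 + 0.1)
```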
Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-02-21
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows the user to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory in the inter-University Master of Systems Engineering and Automatic Control are presented.
NASA Astrophysics Data System (ADS)
Cameron, Jonathan M.; Arkin, Ronald C.
1992-02-01
As mobile robots are used in more uncertain and dangerous environments, it will become important to design them so that they can survive falls. In this paper, we examine a number of mechanisms and strategies that animals use to withstand these potentially catastrophic events and extend them to the design of robots. A brief survey of several aspects of how common cats survive falls provides an understanding of the issues involved in preventing traumatic injury during a falling event. After outlining situations in which robots might fall, a number of factors affecting their survival are described. From this background, several robot design guidelines are derived. These include recommendations for the physical structure of the robot as well as requirements for the robot control architecture. A control architecture is proposed based on reactive control techniques and action-oriented perception that is geared to support this form of survival behavior.
[Service robots in elderly care. Possible application areas and current state of developments].
Graf, B; Heyer, T; Klein, B; Wallhoff, F
2013-08-01
The term "Service robotics" describes semi- or fully autonomous technical systems able to perform services useful to the well-being of humans. Service robots have the potential to support and disburden both persons in need of care as well as nursing care staff. In addition, they can be used in prevention and rehabilitation in order to reduce or avoid the need for help. Products currently available to support people in domestic environments are mainly cleaning or remote-controlled communication robots. Examples of current research activities are the (further) development of mobile robots as advanced communication assistants or the development of (semi) autonomous manipulation aids and multifunctional household assistants. Transport robots are commonly used in many hospitals. In nursing care facilities, the first evaluations have already been made. So-called emotional robots are now sold as products and can be used for therapeutic, occupational, or entertainment activities.
NASA Technical Reports Server (NTRS)
Cameron, Jonathan M.; Arkin, Ronald C.
1992-01-01
As mobile robots are used in more uncertain and dangerous environments, it will become important to design them so that they can survive falls. In this paper, we examine a number of mechanisms and strategies that animals use to withstand these potentially catastrophic events and extend them to the design of robots. A brief survey of several aspects of how common cats survive falls provides an understanding of the issues involved in preventing traumatic injury during a falling event. After outlining situations in which robots might fall, a number of factors affecting their survival are described. From this background, several robot design guidelines are derived. These include recommendations for the physical structure of the robot as well as requirements for the robot control architecture. A control architecture is proposed based on reactive control techniques and action-oriented perception that is geared to support this form of survival behavior.
Exception handling for sensor fusion
NASA Astrophysics Data System (ADS)
Chavez, G. T.; Murphy, Robin R.
1993-08-01
This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
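The description above maps naturally onto a small classification-plus-recovery skeleton. The Python sketch below is schematic only: the failure classes follow the abstract, but the class name, the observation/expectation attributes, and the planner interface are invented placeholders, not the paper's implementation.

```python
# Schematic two-module exception handler: classify the sensing failure, then
# look up a repair/replace scheme or alert the planner. All attribute and
# method names are illustrative assumptions.
class SensingExceptionHandler:
    def __init__(self, recovery_schemes):
        # recovery_schemes: dict mapping a failure class to a repair/replace action
        self.recovery_schemes = recovery_schemes

    def classify(self, observation, expectation):
        """Generate-and-test style classification of a sensing failure."""
        for failure_class, test in (("sensor_malfunction", observation.self_test_failed),
                                    ("environment_change", observation.out_of_model_range),
                                    ("errant_expectation", expectation.inconsistent)):
            if test:
                return failure_class
        return None  # source could not be isolated

    def handle(self, observation, expectation, planner):
        failure = self.classify(observation, expectation)
        if failure in self.recovery_schemes:
            return self.recovery_schemes[failure]()   # repair or replace sensing config
        planner.alert(failure)                        # unidentified or expectation error
        return None
```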
Localization and Mapping Using Only a Rotating FMCW Radar Sensor
Vivet, Damien; Checchin, Paul; Chapuis, Roland
2013-01-01
Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surrounding. The consequence of using such a sensor in high speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed. PMID:23567523
Mobile tele-echography: user interface design.
Cañero, Cristina; Thomos, Nikolaos; Triantafyllidis, George A; Litos, George C; Strintzis, Michael Gerassimos
2005-03-01
Ultrasound imaging allows the evaluation of the degree of emergency of a patient. However, in some instances, a well-trained sonographer is unavailable to perform such echography. To cope with this issue, the Mobile Tele-Echography Using an Ultralight Robot (OTELO) project aims to develop a fully integrated end-to-end mobile tele-echography system using an ultralight remote-controlled robot for population groups that are not served locally by medical experts. This paper focuses on the user interface of the OTELO system, consisting of the following parts: an ultrasound video transmission system providing real-time images of the scanned area, an audio/video conference to communicate with the paramedical assistant and with the patient, and a virtual-reality environment, providing visual and haptic feedback to the expert, while capturing the expert's hand movements. These movements are reproduced by the robot at the patient site while holding the ultrasound probe against the patient skin. In addition, the user interface includes an image processing facility for enhancing the received images and the possibility to include them into a database.
Localization and mapping using only a rotating FMCW radar sensor.
Vivet, Damien; Checchin, Paul; Chapuis, Roland
2013-04-08
Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surrounding. The consequence of using such a sensor in high speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed.
Determining navigability of terrain using point cloud data.
Cockrell, Stephanie; Lee, Gregory; Newman, Wyatt
2013-06-01
This paper presents an algorithm to identify features of the navigation surface in front of a wheeled robot. Recent advances in mobile robotics have brought about the development of smart wheelchairs to assist disabled people, allowing them to be more independent. These robots have a human occupant and operate in real environments where they must be able to detect hazards like holes, stairs, or obstacles. Furthermore, to ensure safe navigation, wheelchairs often need to locate and navigate on ramps. The algorithm is implemented on data from a Kinect and can effectively identify these features, increasing occupant safety and allowing for a smoother ride.
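A minimal version of this kind of surface check can be written as a height test over range cells in the corridor ahead of the wheelchair. The Python sketch below assumes points are already expressed in the robot frame with z up (metres) and is only an illustration of the idea, not the Kinect-specific algorithm of the paper.

```python
# Simplified height-based hazard labelling of the drivable corridor ahead.
# Thresholds and cell sizes are assumed example values.
import numpy as np

def classify_terrain(points, corridor_half_width=0.4, max_range=2.0,
                     step_thresh=0.08, hole_thresh=-0.08, cell=0.1):
    """points: (N, 3) array in the robot frame; returns per-cell labels ahead."""
    ahead = points[(points[:, 0] > 0) & (points[:, 0] < max_range) &
                   (np.abs(points[:, 1]) < corridor_half_width)]
    labels = []
    for i in range(int(max_range / cell)):
        sl = ahead[(ahead[:, 0] >= i * cell) & (ahead[:, 0] < (i + 1) * cell)]
        if sl.shape[0] == 0:
            labels.append("unknown")            # no returns: possible drop-off
        else:
            h = np.median(sl[:, 2])
            if h > step_thresh:
                labels.append("obstacle_or_step")
            elif h < hole_thresh:
                labels.append("hole")
            else:
                labels.append("traversable")
    return labels
```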
The RiSE climbing robot: body and leg design
NASA Astrophysics Data System (ADS)
Saunders, A.; Goldman, D. I.; Full, R. J.; Buehler, M.
2006-05-01
The RiSE robot is a biologically inspired, six legged climbing robot, designed for general mobility in scansorial (vertical walls, horizontal ledges, ground level) environments. It exhibits ground reaction forces that are similar to animal climbers and does not rely on suction, magnets or other surface-dependent specializations to achieve adhesion and shear force. We describe RiSE's body and leg design as well as its electromechanical, communications and computational infrastructure. We review design iterations that enable RiSE to climb 90° carpeted, cork covered and (a growing range of) stucco surfaces in the quasi-static regime.
Unified Approach To Control Of Motions Of Mobile Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1995-01-01
Improved computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Present scheme similar to one described in "Coordinated Control of Mobile Robotic Manipulators" (NPO-19109). Both schemes based on configuration-control formalism. Present one incorporates explicit distinction between holonomic and nonholonomic constraints. Several other prior articles in NASA Tech Briefs discussed aspects of configuration-control formalism. These include "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes with Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).
A Mobile Robot for Locomotion Through a 3D Periodic Lattice Environment
NASA Technical Reports Server (NTRS)
Jenett, Benjamin; Cellucci, Daniel; Cheung, Kenneth
2017-01-01
This paper describes a novel class of robots specifically adapted to climb periodic lattices, which we call 'Relative Robots'. These robots use the regularity of the structure to simplify the path planning, align with minimal feedback, and reduce the number of degrees of freedom (DOF) required to locomote. They can perform vital inspection and repair tasks within the structure that larger truss construction robots could not perform without modifying the structure. We detail a specific type of relative robot designed to traverse a cuboctahedral (CubOct) cellular solids lattice, show how the symmetries of the lattice simplify the design, and test these design methodologies with a CubOct relative robot that traverses a 76.2 mm (3 in.) pitch lattice, MOJO (Multi-Objective JOurneying robot). We perform three locomotion tasks with MOJO: vertical climbing, horizontal climbing, and turning, and find that, due to changes in the orientation of the robot relative to the gravity vector, the success rate of vertical and horizontal climbing is significantly different.
Modular Track System For Positioning Mobile Robots
NASA Technical Reports Server (NTRS)
Miller, Jeff
1995-01-01
Conceptual system for positioning mobile robotic manipulators on large main structure includes modular tracks and ancillary structures assembled easily along with main structure. System, called "tracked robotic location system" (TROLS), originally intended for application to platforms in outer space, but TROLS concept might also prove useful on Earth; for example, to position robots in factories and warehouses. T-cross-section rail keeps mobile robot on track. Bar codes mark locations along track. Each robot equipped with bar-code-recognizing circuitry so it quickly finds way to assigned location.
Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.
2002-06-01
Robotic tasks are typically defined in Task Space (e.g., the 3-D world), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraints, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt in real time to changes in the number and type of constraints and in task objectives, and can adapt to changes in kinematics configurations (change of module, change of tool, joint-failure adaptation, etc.).
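For concreteness, the sketch below shows one standard way to solve the under-specified Task-Space to Joint-Space problem the project describes: a damped least-squares (pseudoinverse) rate solution with a secondary objective projected into the Jacobian null space. It is a generic redundancy-resolution scheme offered as an assumption-laden illustration, not the code developed under this project.

```python
# Generic redundancy resolution: damped least-squares solution of J*dq = dx
# with an optional secondary objective (e.g. joint-limit avoidance) projected
# into the null space of the task Jacobian.
import numpy as np

def resolve_rates(J, dx, q, secondary_gradient=None, damping=0.01):
    """J: (m, n) task Jacobian with m < n; dx: (m,) desired task-space rates."""
    m, n = J.shape
    JJt = J @ J.T + (damping ** 2) * np.eye(m)
    J_pinv = J.T @ np.linalg.solve(JJt, np.eye(m))   # damped pseudoinverse
    dq = J_pinv @ dx
    if secondary_gradient is not None:               # gradient of a secondary criterion
        N = np.eye(n) - J_pinv @ J                   # null-space projector
        dq += N @ secondary_gradient(q)
    return dq
```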
Quantum robots and environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-08-01
Quantum robots and their interactions with environments of quantum systems are described, and their study justified. A quantum robot is a mobile quantum system that includes an on-board quantum computer and needed ancillary systems. Quantum robots carry out tasks whose goals include specified changes in the state of the environment, or carrying out measurements on the environment. Each task is a sequence of alternating computation and action phases. Computation phase activities include determination of the action to be carried out in the next phase, and recording of information on neighborhood environmental system states. Action phase activities include motion of the quantum robot and changes in the neighborhood environment system states. Models of quantum robots and their interactions with environments are described using discrete space and time. A unitary step operator T that gives the single-time-step dynamics is associated with each task. T = T_a + T_c is a sum of action-phase and computation-phase step operators. Conditions that T_a and T_c should satisfy are given, along with a description of the evolution as a sum over paths of completed phase input and output states. A simple example of a task, carrying out a measurement on a very simple environment, is analyzed in detail. A decision tree for the task is presented and discussed in terms of the sums over phase paths. It is seen that no definite times or durations are associated with the phase steps in the tree, and that the tree describes the successive phase steps in each path in the sum over phase paths.
A tesselated probabilistic representation for spatial robot perception and navigation
NASA Technical Reports Server (NTRS)
Elfes, Alberto
1989-01-01
The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation are illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
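The incremental Bayesian update at the heart of the Occupancy Grid is usually implemented in log-odds form, since fusing an additional reading then reduces to an addition per cell. The short Python sketch below illustrates that update; the inverse sensor model values l_occ and l_free are arbitrary example numbers, not those of the cited work.

```python
# Minimal log-odds Occupancy Grid: each range-reading interpretation adds a
# positive increment to cells interpreted as occupied and a negative increment
# to cells interpreted as free.
import numpy as np

class OccupancyGrid:
    def __init__(self, width, height, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros((height, width))   # 0 corresponds to P(occupied) = 0.5
        self.l_occ, self.l_free = l_occ, l_free

    def update_cell(self, row, col, hit):
        """Fuse one reading for one cell (hit=True: beam endpoint, False: traversed)."""
        self.logodds[row, col] += self.l_occ if hit else self.l_free

    def probability(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))

# Readings from several sensors and viewpoints are fused by calling update_cell
# repeatedly for the cells each beam traverses (free) and terminates in (hit).
```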
Endo, Gen; Iemura, Yu; Fukushima, Edwardo F; Hirose, Shigeo; Iribe, Masatsugu; Ikeda, Ryota; Onishi, Kohei; Maeda, Naoto; Takubo, Toshio; Ohira, Mineko
2013-06-01
Home oxygen therapy (HOT) is a medical treatment for patients suffering from severe lung diseases. Although walking outdoors is recommended for the patients to maintain physical strength, the patients always have to carry a portable oxygen supplier, which is not sufficiently lightweight for this purpose. Our ultimate goal is to develop a mobile robot to carry an oxygen tank and follow a patient in an urban outdoor environment. We have proposed a mobile robot with a tether interface to detect the relative position of the foregoing patient. In this paper, we report a questionnaire-based evaluation of the two developed prototypes by HOT patients. We conducted maneuvering experiments and then obtained questionnaire-based evaluations from the 20 patients. The results show that the basic following performance is sufficient and the pulling force of the tether is sufficiently small for the patients. Moreover, the patients prefer the small-sized prototype, for its compactness and light weight, to the middle-sized prototype, which can carry a larger payload. We also obtained detailed requests to improve the robots. Finally, the results show that the general concept of the robot is favorably received by the patients.
Optimal control of 2-wheeled mobile robot at energy performance index
NASA Astrophysics Data System (ADS)
Kaliński, Krzysztof J.; Mazur, Michał
2016-03-01
The paper presents the application of an optimal control method with an energy performance index to motion control of a 2-wheeled mobile robot. With the proposed control method, the 2-wheeled mobile robot can effectively realise the desired trajectory. The problem of motion control of mobile robots is usually neglected, which limits the performance achievable in high-level control tasks.
NASA Astrophysics Data System (ADS)
Shah, Hitesh K.; Bahl, Vikas; Martin, Jason; Flann, Nicholas S.; Moore, Kevin L.
2002-07-01
In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) has been funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One of the several outgrowths of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands that are to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (the A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include sequencing the pending tasks or initiating a re-planning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
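The planning function referred to above is a graph-based heuristic search. The Python sketch below is a generic A* implementation over an abstract graph (callbacks for neighbors, edge cost and heuristic); the graph encoding and cost model of the actual STC grammar are not described here, so everything beyond the bare algorithm is an assumption.

```python
# Generic A* search over an abstract graph. The caller supplies neighbors(n),
# cost(a, b) and an admissible heuristic(n, goal); returns a node path or None.
import heapq, itertools

def a_star(start, goal, neighbors, cost, heuristic):
    counter = itertools.count()              # tie-breaker so the heap never compares nodes
    open_set = [(heuristic(start, goal), next(counter), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in neighbors(node):
            g_next = g + cost(node, nxt)
            if g_next < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_next
                heapq.heappush(open_set,
                               (g_next + heuristic(nxt, goal), next(counter),
                                g_next, nxt, path + [nxt]))
    return None
```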
BatSLAM: Simultaneous localization and mapping using biomimetic sonar.
Steckel, Jan; Peremans, Herbert
2013-01-01
We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building.
BatSLAM: Simultaneous Localization and Mapping Using Biomimetic Sonar
Steckel, Jan; Peremans, Herbert
2013-01-01
We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building. PMID:23365647
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
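One plausible reading of the "simple analytic geometry" mentioned above, offered as an assumption rather than the paper's exact derivation, is to back-project the clicked pixel through the calibrated intrinsics and intersect the resulting ray with the ground plane. A minimal Python sketch:

```python
# Back-project a clicked pixel into a ray and intersect it with the ground
# plane z = ground_z (world->camera convention: x_cam = R @ x_world + t).
import numpy as np

def pixel_to_ground_point(u, v, K, R, t, ground_z=0.0):
    """K: 3x3 intrinsics; R (3x3), t (3,): camera pose; returns world XYZ."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera frame
    ray_world = R.T @ ray_cam                            # rotate ray into world frame
    cam_center = -R.T @ t                                # camera position in world frame
    s = (ground_z - cam_center[2]) / ray_world[2]        # scale to reach the ground plane
    return cam_center + s * ray_world
```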
Controlling the autonomy of a reconnaissance robot
NASA Astrophysics Data System (ADS)
Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David
2004-09-01
In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, waypoint navigation and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU
Dou, Lihua; Su, Zhong; Liu, Ning
2018-01-01
A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in application environments without orientation aids, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electro-Mechanical-Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout and proposes a constraint criterion, a fixed-relationship constraint and zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation and positioning based on the Extended Kalman Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method, and the position error is less than 5% of the total traveled distance (TTD). Over short distances, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications and can be extended to other similar multi-link robots. PMID:29547515
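The zero-state and motion-characteristic constraints mentioned above are typically injected into an EKF as pseudo-measurements. The sketch below shows that generic update step on an invented three-state body-velocity example; it is not the authors' filter, and the state layout and covariances are assumptions.

```python
# Minimal sketch (an assumption, not the published filter): applying a motion
# constraint such as "zero lateral and vertical body velocity" as an EKF
# pseudo-measurement. The state here is simply [vx, vy, vz] in the body frame.
import numpy as np

def ekf_constraint_update(x, P, H, z, R_meas):
    """Standard EKF measurement update with measurement model z = H x + noise."""
    S = H @ P @ H.T + R_meas           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.array([0.50, 0.08, -0.03])          # predicted body velocities [m/s]
P = np.diag([0.04, 0.04, 0.04])
H = np.array([[0.0, 1.0, 0.0],             # constrain vy
              [0.0, 0.0, 1.0]])            # constrain vz
z = np.zeros(2)                            # pseudo-measurement: vy = vz = 0
R_meas = np.diag([1e-3, 1e-3])             # how strictly the constraint is enforced
x, P = ekf_constraint_update(x, P, H, z, R_meas)
print(x)   # vy and vz are pulled toward zero, vx is essentially unchanged
```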
Reasoning and planning in dynamic domains: An experiment with a mobile robot
NASA Technical Reports Server (NTRS)
Georgeff, M. P.; Lansky, A. L.; Schoppers, M. J.
1987-01-01
Progress made toward having an autonomous mobile robot reason and plan complex tasks in real-world environments is described. To cope with the dynamic and uncertain nature of the world, researchers use a highly reactive system to which is attributed attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the system need only be partly elaborated before it decides to act. This allows the system to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of over-commitment common to previous planners. In addition, the system is continuously reactive and has the ability to change its goals and intentions as situations warrant. Thus, while the system architecture allows for reasoning about means and ends in much the same way as traditional planners, it also possesses the reactivity required for survival in complex real-world domains. The system was tested using SRI's autonomous robot (Flakey) in a space-station scenario involving navigation and the performance of an emergency task.
HERMIES-I: a mobile robot for navigation and manipulation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.; Barhen, J.; de Saussure, G.
1985-01-01
The purpose of this paper is to report the current status of investigations ongoing at the Center for Engineering Systems Advanced Research (CESAR) in the areas of navigation and manipulation in unstructured environments. The HERMIES-I mobile robot, a prototype of a series which contains many of the major features needed for remote work in hazardous environments, is discussed. Initial experimental work at CESAR has begun in the area of navigation. The paper briefly reviews some of the ongoing research in autonomous navigation and describes initial research with HERMIES-I and associated graphic simulation. Since the HERMIES robots will generally be composed of a variety of asynchronously controlled hardware components (such as manipulator arms, digital image sensors, sonars, etc.) it seems appropriate to consider future development of the HERMIES brain as a hypercube ensemble machine with concurrent computation and associated message passing. The basic properties of such a hypercube architecture are presented. Decision-making under uncertainty eventually permeates all of our work. Following a survey of existing analytical approaches, it was decided that a stronger theoretical basis is required. As such, this paper presents the framework for a recently developed hybrid uncertainty theory. 21 refs., 2 figs.
Smart Rotorcraft Field Assistants for Terrestrial and Planetary Science
NASA Technical Reports Server (NTRS)
Young, Larry A.; Aiken, Edwin W.; Briggs, Geoffrey A.
2004-01-01
Field science in extreme terrestrial environments is often difficult and sometimes dangerous. Field seasons are also often short in duration. Robotic field assistants, particularly small highly mobile rotary-wing platforms, have the potential to significantly augment a field season's scientific return on investment for geology and astrobiology researchers by providing an entirely new suite of sophisticated field tools. Robotic rotorcraft and other vertical lift planetary aerial vehicles also hold promise for supporting planetary science missions.
Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben
2013-01-01
Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore, before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid texture or pattern sampling, and a four-color photodiode light sensor for fast color determination. We show how data relating to floor texture and color, obtained from typical dynamic human environments using these two sensors, compare favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization, boundary crossing detection, and additionally some degree of surface type identification. PMID:24351647
Souto, Leonardo A V; Castro, André; Gonçalves, Luiz Marcos Garcia; Nascimento, Tiago P
2017-08-08
Natural landmarks are the main features in the next step of the research in localization of mobile robot platforms. The identification and recognition of these landmarks are crucial to better localize a robot. To help solve this problem, this work proposes an approach for the identification and recognition of natural marks included in the environment using images from RGB-D (Red, Green, Blue, Depth) sensors. In the identification step, a structural analysis of the natural landmarks that are present in the environment is performed. The extraction of edge points of these landmarks is done using the 3D point cloud obtained from the RGB-D sensor. These edge points are smoothed through the Sl0 algorithm, which minimizes the standard deviation of the normals at each point. Then, the second step of the proposed algorithm begins, which is the proper recognition of the natural landmarks. This recognition step is done as a real-time algorithm that extracts the points referring to the filtered edges and determines which structure they belong to in the current scenario: stairs or doors. Finally, the geometrical characteristics that are intrinsic to the doors and stairs are identified. The approach proposed here has been validated with real robot experiments. The performed tests verify the efficacy of our proposed approach.
Castro, André; Nascimento, Tiago P.
2017-01-01
Natural landmarks are the main features in the next step of the research in localization of mobile robot platforms. The identification and recognition of these landmarks are crucial to better localize a robot. To help solve this problem, this work proposes an approach for the identification and recognition of natural marks included in the environment using images from RGB-D (Red, Green, Blue, Depth) sensors. In the identification step, a structural analysis of the natural landmarks that are present in the environment is performed. The extraction of edge points of these landmarks is done using the 3D point cloud obtained from the RGB-D sensor. These edge points are smoothed through the Sl0 algorithm, which minimizes the standard deviation of the normals at each point. Then, the second step of the proposed algorithm begins, which is the proper recognition of the natural landmarks. This recognition step is done as a real-time algorithm that extracts the points referring to the filtered edges and determines which structure they belong to in the current scenario: stairs or doors. Finally, the geometrical characteristics that are intrinsic to the doors and stairs are identified. The approach proposed here has been validated with real robot experiments. The performed tests verify the efficacy of our proposed approach. PMID:28786925
Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben
2013-12-17
Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore, before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid texture or pattern sampling, and a four-color photodiode light sensor for fast color determination. We show how data relating to floor texture and color, obtained from typical dynamic human environments using these two sensors, compare favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization, boundary crossing detection, and additionally some degree of surface type identification.
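As an illustration of how such floor features might be classified (the paper does not publish its classifier), a nearest-centroid rule over a small hand-made feature vector could look as follows; all feature names and values are invented.

```python
# Illustrative sketch only: nearest-centroid classification of rooms from
# assumed floor features, e.g. a texture statistic from the optical mouse
# sensor plus RGB ratios from the photodiode sensor.
import numpy as np

def train_centroids(features, labels):
    """Compute one mean feature vector per room label."""
    return {room: features[labels == room].mean(axis=0)
            for room in np.unique(labels)}

def classify(sample, centroids):
    """Return the room whose centroid is closest to the sample."""
    return min(centroids, key=lambda room: np.linalg.norm(sample - centroids[room]))

# Toy training data: [texture_contrast, red_ratio, green_ratio, blue_ratio]
features = np.array([[0.80, 0.5, 0.3, 0.2],
                     [0.78, 0.5, 0.3, 0.2],
                     [0.20, 0.2, 0.3, 0.5],
                     [0.25, 0.2, 0.3, 0.5]])
labels = np.array(["carpet_office", "carpet_office", "tiled_kitchen", "tiled_kitchen"])
centroids = train_centroids(features, labels)
print(classify(np.array([0.22, 0.21, 0.3, 0.49]), centroids))  # -> "tiled_kitchen"
```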
Mobile robotics research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morse, W.D.
Sandia is a National Security Laboratory providing scientific and engineering solutions to meet national needs for both government and industry. As part of this mission, the Intelligent Systems and Robotics Center conducts research and development in robotics and intelligent machine technologies. An overview of Sandia's mobile robotics research is provided. Recent achievements and future directions in the areas of coordinated mobile manipulation, small smart machines, world modeling, and special application robots are presented.
Efficient Symbolic Task Planning for Multiple Mobile Robots
2016-12-13
Jiang, Yuqian
Symbolic task planning enables a robot to make ... high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot ... time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of
NASA Astrophysics Data System (ADS)
Schubert, Oliver J.; Tolle, Charles R.
2004-09-01
Over the last decade the world has seen numerous autonomous vehicle programs. Wheel and track designs are the basis for many of these vehicles. This is primarily due to four main reasons: a vast preexisting knowledge base for these designs, energy efficiency of power sources, scalability of actuators, and the lack of control systems technologies for handling alternate highly complex distributed systems. Though large efforts seek to improve the mobility of these vehicles, many limitations still exist for these systems within unstructured environments, e.g. limited mobility within industrial and nuclear accident sites where existing plant configurations have been extensively changed. These unstructured operational environments include missions for exploration, reconnaissance, and emergency recovery of objects within reconfigured or collapsed structures, e.g. bombed buildings. More importantly, these environments present a clear and present danger for direct human interactions during the initial phases of recovery operations. Clearly, the current classes of autonomous vehicles are incapable of performing in these environments. Thus the next generation of designs must include highly reconfigurable and flexible autonomous robotic platforms. This new breed of autonomous vehicles will be both highly flexible and environmentally adaptable. Presented in this paper is one of the most successful designs from nature, the snake-eel-worm (SEW). This design implements shape memory alloy (SMA) actuators which allow for scaling of the robotic SEW designs from sub-micron scale to heavy industrial implementations without major conceptual redesigns as required in traditional hydraulic, pneumatic, or motor driven systems. Autonomous vehicles based on the SEW design possess the ability to easily move between air based environments and fluid based environments with limited or no reconfiguration. Under a SEW designed vehicle, one not only achieves vastly improved maneuverability within a highly unstructured environment, but also gains robotic manipulation abilities, normally relegated as secondary add-ons within existing vehicles, all within one small condensed package. The prototype design presented includes a Beowulf-style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at the DOE's INEEL.
Time response for sensor sensed to actuator response for mobile robotic system
NASA Astrophysics Data System (ADS)
Amir, N. S.; Shafie, A. A.
2017-11-01
Time and performance of a mobile robot are very important in completing the tasks given to achieve its ultimate goal. Tasks may need to be done within a time constraint to ensure smooth operation of a mobile robot and can result in better performance. The main purpose of this research was to improve the performance of a mobile robot so that it can complete the tasks given within the time constraint. The problem to be solved is to minimize the time interval between sensor detection and actuator response. The research objective is to analyse the real-time operating system performance of sensors and actuators on a one-microcontroller and on a two-microcontroller configuration for a mobile robot. The task for the mobile robot in this research is line following with obstacle avoidance. Three runs were carried out for the task, and the time from sensor detection to actuator response was recorded. Overall, the results show that the two-microcontroller system has a better response time than the one-microcontroller system. For this research, the average difference in response time is very important for improving the internal performance between the occurrence of a task, sensor detection, decision making and actuator response of a mobile robot. This research helped to develop a mobile robot with better performance that can complete tasks within the time constraint.
Training toddlers seated on mobile robots to drive indoors amidst obstacles.
Chen, Xi; Ragonesi, Christina; Galloway, James C; Agrawal, Sunil K
2011-06-01
Mobility is a causal factor in development. Children with mobility impairments may rely upon power mobility for independence and thus require advanced driving skills to function independently. Our previous studies show that while infants can learn to drive directly to a goal using conventional joysticks in several months of training, they are unable in this timeframe to acquire the advanced skill to avoid obstacles while driving. Without adequate driving training, children are unable to explore the environment safely, the consequences of which may in turn increase their risk for developmental delay. The goal of this research therefore is to train children seated on mobile robots to purposefully and safely drive indoors. In this paper, we present results where ten typically developing toddlers are trained to drive a robot within an obstacle course. We also report a case study with a toddler with spina bifida who cannot independently walk. Using algorithms based on artificial potential fields to avoid obstacles, we create a force field on the joystick that trains the children to navigate while avoiding obstacles. In this "assist-as-needed" approach, if the child steers the joystick outside a force tunnel centered on the desired direction, the driver experiences a bias force on the hand. Our results suggest that the use of a force-feedback joystick may yield faster learning than the use of a conventional joystick.
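The "force tunnel" idea can be sketched as a simple potential-field-style bias force that activates only when the commanded joystick direction leaves a cone around the obstacle-free desired direction. The snippet below is an assumption about one possible form, not the published controller; the tunnel angle and gain are invented.

```python
# Minimal sketch of an "assist-as-needed" joystick bias force. If the commanded
# direction drifts outside a cone around the desired direction, a corrective
# force proportional to the angular error pushes the hand back.
import numpy as np

def assist_force(joystick_dir, desired_dir, tunnel_half_angle=np.deg2rad(15), gain=2.0):
    """Return a 2D corrective force for the haptic joystick (zero inside the tunnel)."""
    j = joystick_dir / np.linalg.norm(joystick_dir)
    d = desired_dir / np.linalg.norm(desired_dir)
    angle = np.arccos(np.clip(j @ d, -1.0, 1.0))
    if angle <= tunnel_half_angle:
        return np.zeros(2)                      # inside the tunnel: no assistance
    # Push toward the desired direction, scaled by how far outside the tunnel we are.
    correction = d - (j @ d) * j                # component of d orthogonal to j
    norm = np.linalg.norm(correction)
    if norm < 1e-9:
        return np.zeros(2)
    return gain * (angle - tunnel_half_angle) * correction / norm

print(assist_force(np.array([1.0, 0.6]), np.array([1.0, 0.0])))
```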
NASA Technical Reports Server (NTRS)
Boston, Penelope J.
2016-01-01
The search for life and its study is known as astrobiology. Conducting that search on other planets in our Solar System is a major goal of NASA and other space agencies, and a driving passion of the community of scientists and engineers around the world. We practice for that search in many ways, from exploring and studying extreme environments on Earth, to developing robots to go to other planets and help us look for any possible life that may be there or may have been there in the past. The unique challenges of space exploration make collaborations between robots and humans essential. The products of those collaborations will be novel and driven by the features of wholly new environments. For space and planetary environments that are intolerable for humans or where humans present an unacceptable risk to possible biologically sensitive sites, autonomous robots or telepresence offer excellent choices. The search for life signs on Mars fits within this category, especially in advance of human landed missions there, but also as assistants and tools once humans reach the Red Planet. For planetary destinations where we do not envision humans ever going in person, like bitterly cold icy moons, or ocean worlds with thick ice roofs that essentially make them planetary-sized ice caves, we will rely on robots alone to visit those environments for us and enable us to explore and understand any life that we may find there. Current generation robots are not quite ready for some of the tasks that we need them to do, so there are many opportunities for roboticists of the future to advance novel types of mobility, autonomy, and bio-inspired robotic designs to help us accomplish our astrobiological goals. We see an exciting partnership between robotics and astrobiology continually strengthening as we jointly pursue the quest to find extraterrestrial life.
Mobile wireless network for the urban environment
NASA Astrophysics Data System (ADS)
Budulas, Peter; Luu, Brian; Gopaul, Richard
2005-05-01
As the Army transforms into the Future Force, particular attention must be paid to operations in Complex and Urban Terrain. Our adversaries increasingly draw us into operations in the urban environment and one can presume that this trend will continue in future battles. In order to ensure that the United States Army maintains battlefield dominance, the Army Research Laboratory (ARL) is developing technology to equip our soldiers for the urban operations of the future. Sophisticated soldier borne systems will extend sensing to the individual soldier, and correspondingly, allow the soldier to establish an accurate picture of their surrounding environment utilizing information from local and remote assets. Robotic platforms will be an integral part of the future combat team. These platforms will augment the team with remote sensing modalities, task execution capabilities, and enhanced communication systems. To effectively utilize the products provided by each of these systems, collected data must be exchanged in real time to all affected entities. Therefore, the Army Research Laboratory is also developing the technology that will be required to support high bandwidth mobile communication in urban environments. This technology incorporates robotic systems that will allow connectivity in areas unreachable by traditional systems. This paper will address some of the issues of providing wireless connectivity in complex and urban terrain. It will further discuss approaches developed by the Army Research Laboratory to integrate communications capabilities into soldier and robotic systems and provide seamless connectivity between the elements of a combat team, and higher echelons.
A novel method of robot location using RFID and stereo vision
NASA Astrophysics Data System (ADS)
Chen, Diansheng; Zhang, Guanxin; Li, Zhen
2012-04-01
This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which allows the robot to obtain global coordinates with good accuracy when quickly adapting to an unfamiliar, new environment. The method uses RFID tags as artificial landmarks; the 3D coordinates of the tags under the global coordinate system are written in the IC memory, and the robot can read them through an RFID reader. Meanwhile, using stereo vision, the 3D coordinates of the tags under the robot coordinate system are measured. Combined with the robot's attitude coordinate-system transformation matrix from the pose-measuring system, the translation of the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location under the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby; the result is much more accurate than other localization methods.
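The coordinate bookkeeping described above reduces to one rigid-transform relation, p_tag_global = p_robot_global + R · p_tag_robot, solved for the robot position. The sketch below illustrates this with invented numbers; it is not the authors' code.

```python
# Minimal sketch of the localization bookkeeping: the tag's global position is
# read from its memory, its position in the robot frame is measured by stereo
# vision, and the robot attitude R comes from the pose-measuring system.
import numpy as np

def robot_global_position(p_tag_global, p_tag_robot, R_robot_to_global):
    """Solve p_tag_global = p_robot_global + R @ p_tag_robot for the robot position."""
    return p_tag_global - R_robot_to_global @ p_tag_robot

# Robot rotated 90 degrees about z, tag seen 2 m ahead and 0.5 m to its left.
yaw = np.deg2rad(90)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0],
              [np.sin(yaw),  np.cos(yaw), 0],
              [0,            0,           1]])
p_tag_global = np.array([3.0, 4.0, 1.2])   # written in the tag's IC memory
p_tag_robot  = np.array([2.0, 0.5, 1.2])   # measured by stereo vision
print(robot_global_position(p_tag_global, p_tag_robot, R))   # robot at (3.5, 2.0, 0.0)
```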
Controlling robots in the home: Factors that affect the performance of novice robot operators.
McGinn, Conor; Sena, Aran; Kelly, Kevin
2017-11-01
For robots to successfully integrate into everyday life, it is important that they can be effectively controlled by laypeople. However, the task of manually controlling mobile robots can be challenging due to demanding cognitive and sensorimotor requirements. This research explores the effect that the built environment has on the manual control of domestic service robots. In this study, a virtual reality simulation of a domestic robot control scenario was developed. The performance of fifty novice users was evaluated, and their subjective experiences recorded through questionnaires. Through quantitative and qualitative analysis, it was found that untrained operators frequently perform poorly at navigation-based robot control tasks. The study found that passing through doorways accounted for the largest number of collisions, and was consistently identified as a very difficult operation to perform. These findings suggest that homes and other human-orientated settings present significant challenges to robot control.
Physics and Robotic Sensing -- the good, the bad, and approaches to making it work
NASA Astrophysics Data System (ADS)
Huff, Brian
2011-03-01
All of the technological advances that have benefited consumer electronics have direct application to robotics. Technological advances have resulted in the dramatic reduction in size, cost, and weight of computing systems, while simultaneously doubling computational speed every eighteen months. The same manufacturing advancements that have enabled this rapid increase in computational power are now being leveraged to produce small, powerful and cost-effective sensing technologies applicable for use in mobile robotics applications. Despite the increase in computing and sensing resources available to today's robotic systems developers, there are sensing problems typically found in unstructured environments that continue to frustrate the widespread use of robotics and unmanned systems. This talk presents how physics has contributed to the creation of the technologies that are making modern robotics possible. The talk discusses theoretical approaches to robotic sensing that appear to suffer when they are deployed in the real world. Finally the author presents methods being used to make robotic sensing more robust.
Robust performance of multiple tasks by a mobile robot
NASA Technical Reports Server (NTRS)
Beckerman, Martin; Barnett, Deanna L.; Dickens, Mike; Weisbin, Charles R.
1989-01-01
While there have been many successful mobile robot experiments, only a few papers have addressed issues pertaining to the range of applicability, or robustness, of robotic systems. The purpose of this paper is to report results of a series of benchmark experiments done to determine and quantify the robustness of an integrated hardware and software system of a mobile robot.
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface
Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321
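A voxel "flag map" of the kind described above can be sketched as a hash set over quantized coordinates. The snippet below is an illustrative assumption (voxel size, interface and data are invented), not the published implementation.

```python
# Illustrative sketch of a voxel flag map for dropping redundant points while
# streaming large point clouds: each incoming point is quantized to a voxel
# index and kept only if that voxel has not been seen before.
import numpy as np

class VoxelFlagMap:
    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.occupied = set()                 # flags for already-registered voxels

    def register(self, points):
        """Return only the points that fall into previously empty voxels."""
        kept = []
        keys = np.floor(points / self.voxel_size).astype(int)
        for point, key in zip(points, map(tuple, keys)):
            if key not in self.occupied:
                self.occupied.add(key)
                kept.append(point)
        return np.array(kept)

flag_map = VoxelFlagMap(voxel_size=0.1)
scan1 = np.array([[1.00, 2.00, 0.33], [1.02, 2.01, 0.34], [5.00, 1.00, 0.20]])
scan2 = np.array([[1.01, 2.02, 0.33], [6.00, 1.00, 0.20]])
print(len(flag_map.register(scan1)), len(flag_map.register(scan2)))  # prints: 2 1
```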
Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation
NASA Technical Reports Server (NTRS)
Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas
2014-01-01
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties which make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance and global multi-path load distribution through the tension network, make design and control of bio-inspired tensegrity robots extremely challenging. This work presents the progress in using these two tools in tackling the design and control challenges. The results of this analysis include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.
Enhanced control and sensing for the REMOTEC ANDROS Mk VI robot. CRADA final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1998-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
Mamdani Fuzzy System for Indoor Autonomous Mobile Robot
NASA Astrophysics Data System (ADS)
Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.
2011-06-01
Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
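For readers unfamiliar with the Mamdani scheme, the sketch below shows its standard ingredients on a toy obstacle-distance-to-speed rule base: triangular membership functions, min implication, max aggregation and centroid defuzzification. The rules and parameters are invented and are not the controller developed in the paper.

```python
# Minimal, self-contained Mamdani-style fuzzy inference with one input
# (obstacle distance) and one output (forward speed).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_speed(distance_m):
    d = float(np.clip(distance_m, 0.0, 2.0))
    speed = np.linspace(0.0, 1.0, 201)                    # candidate speeds [m/s]
    # Rule 1: IF distance is NEAR THEN speed is SLOW
    # Rule 2: IF distance is FAR  THEN speed is FAST
    near = tri(d, -1.0, 0.0, 1.0)
    far  = tri(d, 0.5, 2.0, 3.5)
    slow = np.minimum(near, tri(speed, -0.4, 0.0, 0.4))   # clipped consequents (min)
    fast = np.minimum(far,  tri(speed, 0.3, 1.0, 1.7))
    aggregated = np.maximum(slow, fast)                    # max aggregation
    if aggregated.sum() == 0.0:
        return 0.0
    return float((speed * aggregated).sum() / aggregated.sum())   # centroid

for d in (0.2, 1.0, 2.5):
    print(f"distance {d:.1f} m -> speed {mamdani_speed(d):.2f} m/s")
```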
A biologically inspired meta-control navigation system for the Psikharpax rat robot.
Caluwaerts, K; Staffa, M; N'Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M
2012-06-01
A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations. But the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which was the optimal strategy in each situation, by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics.
Design and control of compliant tensegrity robots through simulation and hardware validation
Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas
2014-01-01
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292
Nurse's Aid And Housekeeping Mobile Robot For Use In The Nursing Home Workplace
NASA Astrophysics Data System (ADS)
Sines, John A.
1987-01-01
The large nursing home market has several natural characteristics which make it a good application area for robotics. The environment is already robot accessible and the work functions require large quantities of low-skilled services on a daily basis. In the near future, a commercial opportunity for the practical application of robots is emerging in the delivery of housekeeping services in the nursing home environment. The robot systems will assist in food tray delivery, material handling, and security, and will perform activities such as changing a resident's table side drinking water twice a day and taking out the trash. The housekeeping work functions will generate cost savings of approximately $22,000 per year, at a cost of $6,000 per year. Technical system challenges center around the artificial intelligence required for the robot to map its own location within the facility, to find objects, and to avoid obstacles, and the development of an energy-efficient mechanical lifting system. The long engineering and licensing cycles (7 to 12 years) required to bring this type of product to market make it difficult to raise capital for such a venture.
Robotic Companions for Older People: A Case Study in the Wild.
Doering, Nicola; Richter, Katja; Gross, Horst-Michael; Schroeter, Christof; Mueller, Steffen; Volkhardt, Michael; Scheidig, Andrea; Debes, Klaus
2015-01-01
Older people tend to have difficulties using unknown technical devices and are less willing to accept technical shortcomings. Therefore, a robot that is supposed to support older people in managing daily life has to adapt to the users' needs and capabilities, which are very heterogeneous within the target group. The aim of the presented case study was to provide in-depth insights into individual usage patterns and acceptance of a mobile service robot in real-life environments (i.e. in the users' homes). Results from three cases (users aged 67, 78 and 85 living in their own apartments) are reported. Findings on usability and user experience illustrate that the robot has considerable potential to be accepted to support daily living at home.
Smart mobile robot system for rubbish collection
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; Sien Siang, Tan
2018-03-01
This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective of this paper is to design a mobile robot that can detect and recognize medium-size rubbish such as drinking cans. Besides that, the objective is also to design a mobile robot with the ability to estimate the position of rubbish relative to the robot. In addition, the mobile robot is able to approach the rubbish based on that estimated position. The paper explains the types of image processing, detection and recognition methods, and image filters considered. The project implements an RGB subtraction method as its primary detection scheme. In addition, an algorithm for distance measurement based on the image plane is implemented. The project is limited to using a computer webcam as the sensor. Furthermore, the robot is only able to approach the nearest rubbish within the camera's field of view, and only rubbish whose body contains distinct RGB colour components.
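The RGB-subtraction idea can be illustrated with a small, assumed detector: a pixel is treated as part of a (red) can when its red channel exceeds the other two by a margin, and the blob centroid on the image plane then feeds the distance estimate. This is not the project's code; the margin and test image are invented.

```python
# Illustrative RGB-subtraction detector: flag red-dominant pixels and return
# the blob centroid on the image plane.
import numpy as np

def detect_red_blob(image_rgb, margin=60):
    """Return a boolean mask and the (row, col) centroid of red-dominant pixels."""
    img = image_rgb.astype(int)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (r - g > margin) & (r - b > margin)
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    return mask, (rows.mean(), cols.mean())

# Synthetic 8x8 test image with a small "red can" patch in the lower half.
image = np.zeros((8, 8, 3), dtype=np.uint8)
image[..., :] = (90, 90, 90)          # grey background
image[5:7, 2:4] = (220, 40, 30)       # red patch
mask, centroid = detect_red_blob(image)
print(mask.sum(), centroid)           # 4 pixels, centroid near (5.5, 2.5)
```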
Modeling the Environment of a Mobile Security Robot
1990-06-01
Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał
2016-01-01
Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper. PMID:27649186
Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał
2016-09-14
Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper.
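A much-simplified sketch of a virtual-potential-function mobility model is given below; it keeps only an attraction term toward assigned coverage goals and a repulsion term between nearby nodes, omits the connectivity-maintenance logic of the published algorithms, and uses invented parameters.

```python
# Illustrative virtual-potential-function step: each mobile node is attracted
# toward an assigned coverage goal and repelled by close neighbours, which
# spreads the sensors out over the working area.
import numpy as np

def potential_step(positions, goals, influence_range=10.0, k_att=0.5, k_rep=4.0, dt=0.1):
    """One synchronous update of all node positions (2D)."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        force = k_att * (goals[i] - p)                   # attraction to assigned goal
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = np.linalg.norm(p - q)
            if 1e-6 < d < influence_range:               # repel close neighbours only
                force += k_rep * (p - q) / d**3
        new_positions[i] = p + dt * force
    return new_positions

positions = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])
goals = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0]])
for _ in range(200):
    positions = potential_step(positions, goals)
print(np.round(positions, 2))   # nodes end near their goals, slightly pushed apart
```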
Cacucciolo, Vito; Shigemune, Hiroki; Cianchetti, Matteo; Laschi, Cecilia; Maeda, Shingo
2017-09-01
Electrohydrodynamics (EHD) refers to the direct conversion of electrical energy into mechanical energy of a fluid. Through the use of mobile electrodes, this principle is exploited in a novel fashion for designing and testing a millimeter-scale untethered robot, which is powered by harvesting energy from an external electric field. The robot is designed as an inverted sail-boat, with the thrust generated on the sail submerged in the liquid. The diffusion constant of the robot is experimentally computed, proving that its movement is not driven by thermal fluctuations, and then its kinematic and dynamic responses are characterized for different applied voltages. The results show the feasibility of using EHD with mobile electrodes for powering untethered robots and provide new evidence for the further development of this actuation system for both mobile robots and compliant actuators in soft robotics.
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel-distributed mobile robot simulator with an autonomous learning and growth function. The autonomous learning and growth function which we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation is time-realistic. The parallel-distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
Shakya, Yuniya; Johnson, Michelle J
2008-01-01
Robot-assisted therapy is a new and promising area in stroke rehabilitation and has been shown to be effective in reducing motor impairment, but it is a costly solution for home rehabilitation. High medical costs could be reduced if we could improve rehabilitation exercise in unsupervised environments such as the home. Hence, there is a growing need for a cost-effective rehabilitation system that can be used outside the clinic. This paper presents the design concept for an autonomous robotic assistant that is low-cost and effective in engaging users while assisting them with therapy in any under-supervised area. We investigated how the robot assistant can support TheraDrive, our low-cost therapy system. We present the design methods and a case study demonstrating the arm and video collection system.
Advanced wireless mobile collaborative sensing network for tactical and strategic missions
NASA Astrophysics Data System (ADS)
Xu, Hao
2017-05-01
In this paper, an advanced wireless mobile collaborative sensing network is developed. By properly combining a wireless sensor network, emerging mobile robots and multi-antenna sensing/communication techniques, we demonstrate the superiority of the developed sensing network. Concretely, heterogeneous mobile robots including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) are equipped with multi-modal sensors and wireless transceiver antennas. Through real-time collaborative formation control, multiple mobile robots can form the formation that provides the most accurate sensing results. Moreover, a formation of multiple mobile robots can also constitute a multiple-input multiple-output (MIMO) communication system that provides a reliable, high-performance communication network.
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of including or excluding the robot chassis, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
A positioning system with no line-of-sight restrictions for cluttered environments
NASA Astrophysics Data System (ADS)
Prigge, Eric A.
Accurate sensing of vehicle location and attitude is a fundamental requirement in many mobile-robot applications, but is a very challenging problem in the cluttered and unstructured environment of the real world. Many existing indoor positioning systems are limited in workspace and robustness because they require clear lines of sight or do not provide absolute, drift-free measurements. Examples include overhead vision systems, where an unobstructed view must be maintained between robot and camera, and inertial systems, where the measurements drift over time. The research presented in this dissertation provides a new location- and attitude-sensing system designed specifically to meet the challenges of operation in a realistic, cluttered indoor environment, such as that of an office building or warehouse. The system is not limited by line-of-sight restrictions and produces drift-free measurements throughout a three-dimensional operating volume that can span a large building. Accuracy of several centimeters and a few degrees is delivered at 10 Hz, and any number of the small sensor units can be in operation, all providing estimates in a common reference frame. This positioning system is based on extremely-low-frequency magnetic fields, which have excellent characteristics for penetrating line-of-sight obstructions. Beacons located throughout the workspace create the low-level fields. A sensor unit on the mobile robot samples the local magnetic field and processes the measurements to determine its location and attitude. This research overcomes limitations in existing magnetic-based systems. The design of the signal structure, based on pseudorandom codes, enables the use of multiple, distributed L-beacons and greatly expands coverage volume. The development of real-time identification and correction methods mitigates the impact of distortions caused by materials in the environment. A novel solution algorithm combats both challenges, providing increased coverage volume and reduced sensitivity to materials. This dissertation examines the concept for the system, the challenges encountered during its development, the research solutions that enable the system, the design of a prototype, and results from experimental demonstrations. The positioning system developed through this research provides an effective solution not only for mobile robots navigating cluttered environments, but has application in other areas such as object tracking, augmented reality, and construction.
JOMAR: Joint Operations with Mobile Autonomous Robots
2015-12-21
Olson, Edwin (University of Michigan)
AFRL-AFOSR-JP-TR-2015-0009, Final Report, Contract FA23861114024. Under this grant, we formulated and implemented a variety of novel algorithms that address core problems in multi-robot systems. These
ANYmal - A Highly Mobile and Dynamic Quadrupedal Robot
2016-10-09
Hutter, Marco; Gehring, Christian; Jud, Dominic; Lauber, Andreas; Bellicoso, C. Dario ...
This paper introduces ANYmal, a quadrupedal robot that features outstanding mobility and dynamic motion capability. Thanks to novel ... compliant joint modules with integrated electronics, the 30 kg, 0.5 m tall robotic dog is torque controllable and very robust against impulsive loads during
NASA Astrophysics Data System (ADS)
Hendzel, Z.; Rykała, Ł.
2017-02-01
The work presents the dynamic equations of motion of a wheeled mobile robot with mecanum wheels, derived with the use of Lagrange equations of the second kind. Mecanum wheels are a new type of wheel used in wheeled mobile robots; they consist of freely rotating rollers attached to the circumference of the wheel. In order to derive the dynamic equations of motion of a wheeled mobile robot, the kinetic energy of the system is determined, as well as the generalised forces acting on the system. The resulting mathematical model of a wheeled mobile robot was generated with the use of Maple V software. Results of solving the inverse and forward dynamics problems for the discussed object are also presented.
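As a complement to the dynamics discussion, the sketch below shows the widely used inverse-kinematics relation for a four-mecanum-wheel platform under one common sign convention; the wheel radius and half-spacings are illustrative assumptions, not values from the paper.

```python
# Inverse kinematics of a four-mecanum-wheel platform (45-degree rollers):
# wheel angular velocities from the commanded body twist.
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.15):
    """Wheel angular velocities [rad/s] (FL, FR, RL, RR) for a body twist
    (vx forward [m/s], vy left [m/s], wz yaw rate [rad/s])."""
    L = lx + ly
    J = np.array([[1, -1, -L],      # front-left
                  [1,  1,  L],      # front-right
                  [1,  1, -L],      # rear-left
                  [1, -1,  L]])     # rear-right
    return (J @ np.array([vx, vy, wz])) / r

print(mecanum_wheel_speeds(0.3, 0.0, 0.0))   # pure forward motion: all wheels equal
print(mecanum_wheel_speeds(0.0, 0.2, 0.0))   # pure sideways motion: wheels alternate sign
```

Note that sign conventions differ between roller orientations and wheel numbering schemes, so the row signs of J would need to be checked against the actual platform.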
Developing Autonomy for Unmanned Surface Vehicles by Using Virtual Environments
2010-10-11
successfully evolved for a wide variety of behaviors as obstacle avoidance (Barate and Manzanera 2007; Nehmzow 2002), wall-following (Dain 1998) ... Advances in unmanned marine vehicles, pp 311-328. Dain R (1998) Developing mobile robot wall-following algorithms using genetic programming. Applied
Decentralized reinforcement-learning control and emergence of motion patterns
NASA Astrophysics Data System (ADS)
Svinin, Mikhail; Yamada, Kazuyaki; Okhura, Kazuhiro; Ueda, Kanji
1998-10-01
In this paper we propose a system for studying emergence of motion patterns in autonomous mobile robotic systems. The system implements an instance-based reinforcement learning control. Three spaces are of importance in formulation of the control scheme. They are the work space, the sensor space, and the action space. An important feature of our system is that all these spaces are assumed to be continuous. The core part of the system is a classifier system. Based on the sensory state space analysis, the control is decentralized and is specified at the lowest level of the control system. However, the local controllers are implicitly connected through the perceived environment information. Therefore, they constitute a dynamic environment with respect to each other. The proposed control scheme is tested under simulation for a mobile robot in a navigation task. It is shown that some patterns of global behavior--such as collision avoidance, wall-following, light-seeking--can emerge from the local controllers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
EISLER, G. RICHARD
This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled ''Robust Planning for Autonomous Navigation of Mobile Robots In Unstructured, Dynamic Environments (AutoNav)''. The project goal was to develop an algorithmic-driven, multi-spectral approach to point-to-point navigation characterized by: segmented on-board trajectory planning, self-contained operation without human support for mission duration, and the development of appropriate sensors and algorithms to navigate unattended. The project was partially successful in achieving gains in sensing, path planning, navigation, and guidance. One of three experimental platforms, the Minimalist Autonomous Testbed, used a repetitive sense-and-re-plan combination to demonstrate the majority of elements necessary for autonomous navigation. However, a critical goal for overall success in arbitrary terrain, that of developing a sensor that is able to distinguish true obstacles that need to be avoided as a function of vehicle scale, still needs substantial research to bring to fruition.
On-Line Method and Apparatus for Coordinated Mobility and Manipulation of Mobile Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1996-01-01
A simple and computationally efficient approach is disclosed for on-line coordinated control of mobile robots consisting of a manipulator arm mounted on a mobile base. The effect of base mobility on the end-effector manipulability index is discussed. The base mobility and arm manipulation degrees-of-freedom are treated equally as the joints of a kinematically redundant composite robot. The redundancy introduced by the mobile base is exploited to satisfy a set of user-defined additional tasks during the end-effector motion. A simple on-line control scheme is proposed which allows the user to assign weighting factors to individual degrees-of-mobility and degrees-of-manipulation, as well as to each task specification. The computational efficiency of the control algorithm makes it particularly suitable for real-time implementations. Four case studies are discussed in detail to demonstrate the application of the coordinated control scheme to various mobile robots.
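The patent summary says that user-assigned weighting factors trade off base mobility against arm manipulation within a kinematically redundant composite robot, but does not give the formula. A common way to realise that kind of weighting, shown here only as an illustrative sketch under assumed dimensions and not as the patented scheme itself, is a weighted damped-least-squares inverse of the composite base-plus-arm Jacobian:

    import numpy as np

    def weighted_dls_velocities(J, x_dot, weights, damping=1e-2):
        """Joint velocities for a composite (base + arm) robot.

        J       : (m, n) Jacobian of the end-effector task w.r.t. all n DOFs
        x_dot   : (m,) desired end-effector velocity
        weights : (n,) relative cost of moving each DOF (larger = used less)
        """
        W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
        JW = J @ W_inv
        # Damping keeps the solution bounded near singular configurations.
        A = JW @ J.T + (damping ** 2) * np.eye(J.shape[0])
        return W_inv @ J.T @ np.linalg.solve(A, x_dot)

    # Toy example: a 2-D task and 3 DOFs (1 base + 2 arm joints); numbers are illustrative only.
    J = np.array([[1.0, 0.5, 0.2],
                  [0.0, 1.0, 0.8]])
    qdot = weighted_dls_velocities(J, np.array([0.1, 0.0]), weights=[5.0, 1.0, 1.0])
    print(qdot)  # the heavily weighted base DOF contributes the least motion

Raising a weight discourages motion of that degree of freedom, which is the qualitative behaviour the abstract attributes to the user-assigned weighting factors.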
GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments.
Monroy, Javier; Hernandez-Bennets, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier
2017-06-23
This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment.
GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments
Hernandez-Bennetts, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier
2017-01-01
This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment. PMID:28644375
Shigemune, Hiroki; Cianchetti, Matteo; Laschi, Cecilia
2017-01-01
Electrohydrodynamics (EHD) refers to the direct conversion of electrical energy into mechanical energy of a fluid. Through the use of mobile electrodes, this principle is exploited in a novel fashion for designing and testing a millimeter-scale untethered robot, which is powered by harvesting energy from an external electric field. The robot is designed as an inverted sail-boat, with the thrust generated on the sail submerged in the liquid. The diffusion constant of the robot is experimentally computed, proving that its movement is not driven by thermal fluctuations, and then its kinematic and dynamic responses are characterized for different applied voltages. The results show the feasibility of using EHD with mobile electrodes for powering untethered robots and provide new evidence for the further development of this actuation system for both mobile robots and compliant actuators in soft robotics. PMID:28932659
Robot Tracking of Human Subjects in Field Environments
NASA Technical Reports Server (NTRS)
Graham, Jeffrey; Shillcutt, Kimberly
2003-01-01
Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.
Live video monitoring robot controlled by web over internet
NASA Astrophysics Data System (ADS)
Lokanath, M.; Akhil Sai, Guruju
2017-11-01
The future is expected to rely heavily on robots: robots can perform tasks where humans cannot, and they have many applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical and mechanical engineering and can do tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot, called robovision; it helps in monitoring security systems and can also reach places where the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web controls for the robot to move left, right, front and back while streaming video. As we move to the smart environment, or IoT (Internet of Things), of smart devices, the system developed here connects over the internet and can be operated with a smartphone using a web browser. A Raspberry Pi Model B chip acts as the heart of this robot; the motors and the R Pi 2 surveillance camera are connected to the Raspberry Pi.
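The article describes the architecture (web front end, Raspberry Pi, motors, camera) but shows no code. A bare-bones sketch of the control side under those assumptions, using Flask for the web endpoints and RPi.GPIO for the motor-driver pins, is given below; the pin numbers, route names, and motor-driver wiring are invented, and video streaming would be served separately (for example by an MJPEG streamer reading the camera).

    from flask import Flask
    import RPi.GPIO as GPIO

    # Hypothetical motor-driver input pins (BCM numbering).
    PINS = {"left_fwd": 17, "left_back": 18, "right_fwd": 22, "right_back": 23}

    GPIO.setmode(GPIO.BCM)
    for pin in PINS.values():
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    app = Flask(__name__)

    def drive(left_fwd, left_back, right_fwd, right_back):
        # Set the four driver inputs; each pair controls one motor's direction.
        GPIO.output(PINS["left_fwd"], left_fwd)
        GPIO.output(PINS["left_back"], left_back)
        GPIO.output(PINS["right_fwd"], right_fwd)
        GPIO.output(PINS["right_back"], right_back)

    @app.route("/move/<direction>")
    def move(direction):
        # Map simple web commands to motor states.
        commands = {
            "front": (1, 0, 1, 0),
            "back":  (0, 1, 0, 1),
            "left":  (0, 1, 1, 0),
            "right": (1, 0, 0, 1),
            "stop":  (0, 0, 0, 0),
        }
        drive(*commands.get(direction, (0, 0, 0, 0)))
        return "ok"

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)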
A tracked robot with novel bio-inspired passive "legs".
Sun, Bo; Jing, Xingjian
2017-01-01
For track-based robots, an important aspect is the suspension design, which determines the trafficability and comfort of the whole system. The trafficability limits the robot's working capability, and the riding comfort limits the robot's working effectiveness, especially when sensitive instruments are mounted on or operated from the robot. To this end, a track-based robot equipped with a novel passive bio-inspired suspension is designed and studied systematically in this paper. Animals and insects have very special leg or limb structures which are good for motion control and adaptable to different environments. Inspired by this, a new track-based robot is designed with novel "legs" for connecting the loading wheels to the robot body. Each leg is designed with passive structures and can achieve very high loading capacity but low dynamic stiffness such that the robot can move on rough ground similarly to a multi-leg animal or insect. Therefore, the trafficability and riding comfort can be significantly improved without losing loading capacity. The new track-based robot can be well applied to various engineering tasks, providing a stable moving platform of high mobility, better trafficability and excellent loading capacity.
Control solutions for robots using Android and iOS devices
NASA Astrophysics Data System (ADS)
Evans, A. William, III; Gray, Jeremy P.; Rudnick, Dave; Karlsen, Robert E.
2012-06-01
As more Soldiers seek to utilize robots to enhance their mission capabilities, controls are needed which are intuitive, portable, and adaptable to a wide range of mission tasks. Android™ and iOS™ devices have the potential to meet each of these requirements as well as being based on readily available hardware. This paper will focus on some of the ways in which an Android™ or iOS™ device could be used to control specific and varied robot mobility functions and payload tools. Several small unmanned ground vehicle (SUGV) payload tools were investigated at Camp Pendleton during a user assessment and mission feasibility study for automatic remote tool changing. This group of payload tools provides researchers with a basis concerning what types of control functions are needed to fully utilize SUGV robotic capabilities. Additionally, mobility functions using tablet devices have been used as part of the Safe Operation of Unmanned systems for Reconnaissance in Complex Environments Army Technology Objective (SOURCE ATO), which is investigating the safe operation of robotics. Using Android™ and iOS™ hand-held devices is not a new concept in robot manipulation. However, the authors of this paper hope to introduce some novel concepts that may serve to make the interaction between Soldier and machine more fluid and intuitive. By creating a better user experience, Android™ and iOS™ devices could help to reduce training time, enhance performance, and increase acceptance of robotics as valuable mission tools for Soldiers.
Large-scale deep learning for robotically gathered imagery for science
NASA Astrophysics Data System (ADS)
Skinner, K.; Johnson-Roberson, M.; Li, J.; Iscar, E.
2016-12-01
With the explosion of computing power, the intelligence and capability of mobile robotics have dramatically increased over the last two decades. Today, we can deploy autonomous robots to achieve observations in a variety of environments ripe for scientific exploration. These platforms are capable of gathering a volume of data previously unimaginable. Additionally, optical cameras, driven by mobile phones and consumer photography, have rapidly improved in size, power consumption, and quality, making their deployment cheaper and easier. Finally, in parallel we have seen the rise of large-scale machine learning approaches, particularly deep neural networks (DNNs), increasing the quality of the semantic understanding that can be automatically extracted from optical imagery. In concert, these advances enable new science using a combination of machine learning and robotics. This work will discuss the application of new low-cost high-performance computing approaches and the associated software frameworks to enable scientists to rapidly extract useful science data from millions of robotically gathered images. The automated analysis of imagery on this scale opens up new avenues of inquiry unavailable using more traditional manual or semi-automated approaches. We will use a large archive of millions of benthic images gathered with an autonomous underwater vehicle to demonstrate how these tools enable new scientific questions to be posed.
Navigating a Mobile Robot Across Terrain Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Howard, Ayanna; Bon, Bruce
2003-01-01
A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
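The rules in the article are stated only linguistically. As a toy illustration of the IF-THEN style it describes, the sketch below blends made-up rules on left/right terrain traversability into a single turn-rate command using a zero-order Sugeno-style weighted average; the membership functions, rule set, and numbers are invented and are not the authors' rule base:

    def poor(x):
        """Membership of a traversability value (0..1) in the fuzzy set POOR: 1 at 0, 0 at >= 0.5."""
        return max(0.0, min(1.0, (0.5 - x) / 0.5))

    def good(x):
        """Membership in GOOD: 0 at <= 0.5, 1 at 1."""
        return max(0.0, min(1.0, (x - 0.5) / 0.5))

    def steer(trav_left, trav_right):
        # Each rule: (firing strength, crisp turn-rate consequent in rad/s).
        rules = [
            (poor(trav_left),                        +0.5),  # IF left terrain POOR  THEN turn right
            (poor(trav_right),                       -0.5),  # IF right terrain POOR THEN turn left
            (min(good(trav_left), good(trav_right)),  0.0),  # IF both GOOD          THEN go straight
        ]
        total = sum(w for w, _ in rules)
        return sum(w * c for w, c in rules) / total if total else 0.0

    print(steer(0.2, 0.9))  # poor terrain on the left -> positive (rightward) turn command

Each rule fires to a degree rather than all-or-nothing, which is how the linguistic imprecision described in the abstract carries through to the control action.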
Cooperative system and method using mobile robots for testing a cooperative search controller
Byrne, Raymond H.; Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.
2002-01-01
A test system for testing a controller provides a way to use large numbers of miniature mobile robots to test a cooperative search controller in a test area, where each mobile robot has a sensor, a communication device, a processor, and a memory. A method of using a test system provides a way for testing a cooperative search controller using multiple robots sharing information and communicating over a communication network.
US Army TARDEC Ground Vehicle Mobility: Dynamics Modeling, Simluation, and Research
2011-10-24
Briefing to the NASA Jet Propulsion Laboratory mobility and robotics section on US Army TARDEC ground vehicle dynamics modeling, simulation, and research. Topics include stair climbing of a small robot, robotic vehicle step climbing, vehicle stability, and ride; platforms supported include JLTV, GCV, M2, M915, ASV, FTTS, and HMMWV.
Extensible Hardware Architecture for Mobile Robots
NASA Technical Reports Server (NTRS)
Park, Eric; Kobayashi, Linda; Lee, Susan Y.
2005-01-01
The Intelligent Robotics Group at NASA Ames Research Center has developed a new mobile robot hardware architecture designed for extensibility and reconfigurability. Currently implemented on the K9 rover, and soon to be integrated onto the K10 series of human-robot collaboration research robots, this architecture allows for rapid changes in instrumentation configuration and provides a high degree of modularity through a synergistic mix of off-the-shelf and custom designed components, allowing eased transplantation into a wide variety of mobile robot platforms. A component level overview of this architecture is presented along with a description of the changes required for implementation on K10, followed by plans for future work.
Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study
Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng
2016-01-01
One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
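The octree search, early-warning mechanism, and heuristic escape are the paper's contributions, but they sit on top of the standard point-to-point ICP step. For orientation, a bare-bones ICP iteration (brute-force nearest neighbours and the SVD/Kabsch alignment; no octree, no escape logic) might look like this:

    import numpy as np

    def icp_step(source, target):
        """One point-to-point ICP iteration: match each source point to its nearest
        target point, then compute the rigid transform (R, t) that best aligns the
        matched pairs (Kabsch/SVD solution). Brute-force matching for clarity only."""
        d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]

        mu_s, mu_m = source.mean(0), matched.mean(0)
        H = (source - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        return R, t

    def icp(source, target, iters=30):
        """Repeatedly re-match and re-align until the clouds (hopefully) converge."""
        src = source.copy()
        for _ in range(iters):
            R, t = icp_step(src, target)
            src = src @ R.T + t
        return src

The enhancements in the paper replace the brute-force matching with a hierarchical octree search and add logic to detect and escape the local minima that this plain loop can fall into.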
HERMIES-3: A step toward autonomous mobility, manipulation, and perception
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.
1989-01-01
HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.
A robot control architecture supported on contraction theory
NASA Astrophysics Data System (ADS)
Silva, Jorge; Sequeira, João; Santos, Cristina
2017-01-01
This paper proposes fundamentals for the stability and success of a global system composed of a mobile robot, a real environment and a navigation architecture with time constraints. Contraction theory is a typical framework that provides tools and properties to prove the stability and convergence of the global system to a unique fixed point that identifies mission success. A stability indicator based on the combination property of contraction is developed to identify mission success as a stability measure. The architecture is fully designed through C1 nonlinear dynamical systems and feedthrough maps, which makes it amenable to contraction analysis. Experiments in a realistic and uncontrolled environment are carried out to verify whether inherent perturbations of the sensory information and of the environment affect the stability and success of the global system.
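For readers unfamiliar with contraction theory, the standard condition such analyses rest on (stated here in its textbook form, not as the authors' specific criterion) is that a system \dot{x} = f(x,t) is contracting in a uniformly positive-definite metric M(x,t) if

    \dot{M} + M\,\frac{\partial f}{\partial x} + \left(\frac{\partial f}{\partial x}\right)^{\top} M \;\preceq\; -2\beta M, \qquad \beta > 0,

in which case all trajectories converge exponentially to one another, and hence to a unique fixed point when one exists; this is the property the paper uses to identify mission success.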
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem for multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
Differential-Drive Mobile Robot Control Design based-on Linear Feedback Control Law
NASA Astrophysics Data System (ADS)
Nurmaini, Siti; Dewi, Kemala; Tutuko, Bambang
2017-04-01
This paper deals with the problem of how to control a differential-drive mobile robot with a simple control law. When a mobile robot moves from one position to another to reach a destination, it always produces some error. Therefore, a mobile robot requires a certain control law to drive its movement to the destination position with the smallest possible error. In this paper, in order to reduce the position error, a linear feedback control law is proposed with a pole placement approach to realise the desired characteristic polynomial. The presented work leads to an improved understanding of the differential-drive mobile robot (DDMR) kinematics equations, which will assist in the design of suitable controllers for DDMR movement. The results show that by using the linear feedback control method with the pole placement approach, the position error is reduced and fast convergence is achieved.
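As a rough illustration of the kind of design described, and not the authors' exact controller, the sketch below places the poles of a linearised path-following error model for a differential-drive robot and maps the resulting turn-rate command to wheel speeds; the geometry, reference speed, and pole locations are invented:

    # Hypothetical robot geometry (not from the paper).
    R_WHEEL = 0.03   # wheel radius [m]
    TRACK   = 0.15   # distance between wheels [m]
    V_REF   = 0.2    # constant forward speed along the reference path [m/s]

    # Linearised error model about a straight reference path:
    #   d(e_y)/dt     = V_REF * e_theta
    #   d(e_theta)/dt = omega            (turn-rate command)
    # With omega = -k1*e_y - k2*e_theta the closed-loop characteristic polynomial
    # is s^2 + k2*s + V_REF*k1, so two desired poles p1, p2 fix the gains.
    p1, p2 = -1.5, -2.0
    k2 = -(p1 + p2)
    k1 = (p1 * p2) / V_REF

    def wheel_speeds(e_y, e_theta):
        """Angular wheel speeds (left, right) from the linear feedback law."""
        omega = -k1 * e_y - k2 * e_theta
        w_r = (V_REF + 0.5 * TRACK * omega) / R_WHEEL
        w_l = (V_REF - 0.5 * TRACK * omega) / R_WHEEL
        return w_l, w_r

    print(wheel_speeds(e_y=0.05, e_theta=0.1))

Choosing p1 and p2 further into the left half-plane speeds up convergence at the cost of larger wheel-speed commands, which is the trade-off pole placement exposes directly.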
Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores
2015-09-16
One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot's pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area.
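The abstract states that the Kullback-Leibler divergence underlies the asymmetric cost used to weight pose hypotheses, without giving the exact expression. One plausible simplified reading, treating the normalised measured and predicted range scans as discrete distributions, is sketched below; it is an assumption for illustration, not the authors' formula:

    import numpy as np

    def kl_cost(measured_ranges, predicted_ranges, eps=1e-9):
        """Asymmetric cost between a real laser scan and the scan predicted from a
        pose hypothesis in a known map. Both scans are normalised so they can be
        treated as discrete probability distributions; lower cost = better pose."""
        p = np.asarray(measured_ranges, float) + eps
        q = np.asarray(predicted_ranges, float) + eps
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)))   # D_KL(measured || predicted)

    # A hypothesis whose predicted scan matches the measurement scores lower:
    real = [1.0, 2.0, 2.5, 1.2]
    print(kl_cost(real, [1.1, 1.9, 2.4, 1.3]), kl_cost(real, [3.0, 0.5, 0.5, 3.0]))

The asymmetry of the divergence is what lets the filter penalise unexpectedly short (occluded) readings differently from unexpectedly long ones, which matches the occlusion-handling motivation given in the abstract.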
A mobile robot system for ground servicing operations on the space shuttle
NASA Astrophysics Data System (ADS)
Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.
1992-11-01
A mobile system for space shuttle servicing, the Tessellator, has been configured, designed and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, with a look at mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.
A mobile robot system for ground servicing operations on the space shuttle
NASA Technical Reports Server (NTRS)
Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.
1992-01-01
A mobile system for space shuttle servicing, the Tessellator, has been configured, designed and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, with a look at mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.
A neural learning classifier system with self-adaptive constructivism for mobile robot control.
Hurst, Jacob; Bull, Larry
2006-01-01
For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.
Machine vision and appearance based learning
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-03-01
Smart algorithms are used in machine vision to organize or extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, is only a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a certain regression-on-an-appearance-manifold problem, and new regression-on-manifolds methods are used for its solution.
Wang, Hongwu; Candiotti, Jorge; Shino, Motoki; Chung, Cheng-Shiu; Grindle, Garrett G; Ding, Dan; Cooper, Rory A
2013-07-01
This paper describes the development of a mobile base for the Personal Mobility and Manipulation Appliance Generation II (PerMMA Gen II robotic wheelchair), an obstacle-climbing wheelchair able to move in structured and unstructured environments, and to climb over curbs as high as 8 inches. The mechanical, electrical, and software systems of the mobile base are presented in detail, and similar devices such as the iBOT mobility system, TopChair, and 6X6 Explorer are described. The mobile base of PerMMA Gen II has two operating modes: "advanced driving mode" on flat and uneven terrain, and "automatic climbing mode" during stair climbing. The different operating modes are triggered either by local and dynamic conditions or by external commands from users. A step-climbing sequence, up to 0.2 m, is under development and to be evaluated via simulation. The mathematical model of the mobile base is introduced. A feedback and a feed-forward controller have been developed to maintain the posture of the passenger when driving over uneven surfaces or slopes. The effectiveness of the controller has been evaluated by simulation using the open dynamics engine tool. Future work for PerMMA Gen II mobile base is implementation of the simulation and control on a real system and evaluation of the system via further experimental tests.
Wang, Hongwu; Candiotti, Jorge; Shino, Motoki; Chung, Cheng-Shiu; Grindle, Garrett G.; Ding, Dan; Cooper, Rory A.
2013-01-01
Background This paper describes the development of a mobile base for the Personal Mobility and Manipulation Appliance Generation II (PerMMA Gen II robotic wheelchair), an obstacle-climbing wheelchair able to move in structured and unstructured environments, and to climb over curbs as high as 8 inches. The mechanical, electrical, and software systems of the mobile base are presented in detail, and similar devices such as the iBOT mobility system, TopChair, and 6X6 Explorer are described. Findings The mobile base of PerMMA Gen II has two operating modes: “advanced driving mode” on flat and uneven terrain, and “automatic climbing mode” during stair climbing. The different operating modes are triggered either by local and dynamic conditions or by external commands from users. A step-climbing sequence, up to 0.2 m, is under development and to be evaluated via simulation. The mathematical model of the mobile base is introduced. A feedback and a feed-forward controller have been developed to maintain the posture of the passenger when driving over uneven surfaces or slopes. The effectiveness of the controller has been evaluated by simulation using the open dynamics engine tool. Conclusion Future work for PerMMA Gen II mobile base is implementation of the simulation and control on a real system and evaluation of the system via further experimental tests. PMID:23820149
Industrial-Like Vehicle Platforms for Postgraduate Laboratory Courses on Robotics
ERIC Educational Resources Information Center
Navarro, P. J.; Fernandez, C.; Sanchez, P.
2013-01-01
The interdisciplinary nature of robotics allows mobile robots to be used successfully in a broad range of courses at the postgraduate level and in Ph.D. research. Practical industrial-like mobile robotic demonstrations encourage students and increase their motivation by providing them with learning benefits not achieved with traditional…
Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)
NASA Technical Reports Server (NTRS)
Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey
1990-01-01
Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules necessary to control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.
The design of mobile robot control system for the aged and the disabled
NASA Astrophysics Data System (ADS)
Qiang, Wang; Lei, Shi; Xiang, Gao; Jin, Zhang
2017-01-01
This paper presents the design of a mobile robot control system for the aged and the disabled, which consists of two main parts: human-computer interaction and a drive control module. The data of the two parts are transferred via a universal asynchronous receiver/transmitter. In the former part, the speed and direction information for the mobile robot is obtained from a Hall-effect joystick. In the latter part, an electronic differential algorithm is developed to implement the robot's mobility by driving the two wheel motors. In order to improve comfort when the speed or direction changes, the least squares algorithm is used to optimize the speed characteristic curves of the two motors. Experimental results have verified the effectiveness of the designed system.
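The electronic differential itself is not spelled out in the abstract. A common kinematic version for a two-wheel-drive chair, given here purely as an illustrative sketch with hypothetical geometry, maps the commanded speed and turn rate from the joystick to left and right motor speeds so the wheels roll without scrubbing:

    # Hypothetical geometry for an illustrative electronic differential.
    TRACK = 0.56          # distance between the two driven wheels [m]
    WHEEL_RADIUS = 0.17   # [m]

    def electronic_differential(v_cmd, omega_cmd):
        """Map joystick commands (forward speed in m/s, turn rate in rad/s) to
        left/right motor angular speeds [rad/s]."""
        v_left  = v_cmd - 0.5 * TRACK * omega_cmd
        v_right = v_cmd + 0.5 * TRACK * omega_cmd
        return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

    print(electronic_differential(0.8, 0.4))  # turning left: the right wheel spins faster

The least-squares smoothing of the speed characteristic curves described in the paper would then be applied on top of commands like these.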
Controlling Herds of Cooperative Robots
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.
2006-01-01
A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.
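The document names a potential-fields approach without detailing it. A minimal sketch of how such guidance is commonly implemented, an attractive pull toward a goal plus repulsive pushes from nearby obstacles or herd members, is shown below; the gains and influence radius are invented:

    import numpy as np

    def potential_field_velocity(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0):
        """Commanded velocity for one herd member: gradient descent on an attractive
        quadratic potential toward the goal plus classic repulsive potentials
        around obstacles (or other robots) inside the influence radius."""
        v = k_att * (goal - pos)                      # attractive term
        for obs in obstacles:
            diff = pos - obs
            d = np.linalg.norm(diff)
            if 1e-6 < d < influence:
                # Repulsive gradient that vanishes at the influence radius.
                v += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
        return v

    pos = np.array([0.0, 0.0])
    print(potential_field_velocity(pos, np.array([5.0, 0.0]), [np.array([1.0, 0.2])]))

In a leader-follower herd, the leader's commanded waypoints would supply the goal terms while the other robots and terrain obstacles supply the repulsive ones.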
Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network
2015-08-14
Theoretical research on the mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems, with independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices and operational constraints.
SyRoTek--Distance Teaching of Mobile Robotics
ERIC Educational Resources Information Center
Kulich, M.; Chudoba, J.; Kosnar, K.; Krajnik, T.; Faigl, J.; Preucil, L.
2013-01-01
E-learning is a modern and effective approach for training in various areas and at different levels of education. This paper gives an overview of SyRoTek, an e-learning platform for mobile robotics, artificial intelligence, control engineering, and related domains. SyRoTek provides remote access to a set of fully autonomous mobile robots placed in…
Design and control of compliant tensegrity robots through simulation and hardware validation.
Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas
2014-09-06
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ('tensile-integrity') structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
A positional estimation technique for an autonomous land vehicle in an unstructured environment
NASA Technical Reports Server (NTRS)
Talluri, Raj; Aggarwal, J. K.
1990-01-01
This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in an unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information, and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for the worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
Lyapunov vector function method in the motion stabilisation problem for nonholonomic mobile robot
NASA Astrophysics Data System (ADS)
Andreev, Aleksandr; Peregudova, Olga
2017-07-01
In this paper we propose a sampled-data control law for the stabilisation problem of nonstationary motion of a nonholonomic mobile robot. We assume that the robot moves on a horizontal surface without slipping. The dynamical model of a mobile robot is considered. The robot has one front free wheel and two rear wheels which are controlled by two independent electric motors. We assume that the controls are piecewise constant signals. The controller design relies on the backstepping procedure with the use of the Lyapunov vector function method. Theoretical considerations are verified by numerical simulation.
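The model is not reproduced in the abstract; the standard no-slip kinematics that a backstepping design for such a robot would typically start from are

    \dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,

where v and \omega follow from the two independently driven rear wheels, e.g. v = \tfrac{r}{2}(\omega_r + \omega_l) and \omega = \tfrac{r}{L}(\omega_r - \omega_l) for wheel radius r and track width L, with the controls held piecewise constant between sampling instants as the paper assumes.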
System design of a hand-held mobile robot for craniotomy.
Kane, Gavin; Eggers, Georg; Boesecke, Robert; Raczkowsky, Jörg; Wörn, Heinz; Marmulla, Rüdiger; Mühling, Joachim
2009-01-01
This contribution reports the development and initial testing of a mobile robot system for surgical craniotomy, the Craniostar. A kinematic system based on a unicycle robot is analysed to provide local positioning through two spiked wheels gripping directly onto a patient's skull. A control scheme shared between the surgeon and the robot is employed in a hand-held design that is tested initially on plastic phantom and swine skulls. Results indicate that the system has substantially lower risk than present robotically assisted craniotomies, and despite being a hand-held mobile robot, the Craniostar is still capable of sub-millimetre accuracy in tracking along a trajectory and thus achieving an accurate transfer of the pre-surgical plan to the operating room procedure, without the large impact of current medical robots based on modified industrial robots.
Object Lesson: Discovering and Learning to Recognize Objects
2002-01-01
Figure caption fragment: a 4 x 4 grid represents the possible appearance of an edge, quantized to just two luminance levels; the dark line centered in the grid is the average.
Segway Robotic Mobility Platform
2004-10-01
Integration involved changes to firmware and creation of an electrical interface to the RMP. The existing HT design offered the possibility of creating a reliable [...] mounted on its side and swept back and forth by an Amtec pan-and-tilt unit to acquire 3-D scans of the environment. The RMP also carries an [...]
Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.
Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong
2014-09-01
Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Advanced Robotics for Air Force Operations
1989-06-01
The study (1) evaluated current and potential uses of advanced robotics to support Air Force systems and (2) recommended the most effective applications of advanced robotics. Remaining fragments concern robotic refueling booms, reduced manpower requirements, and the losses of vision, hearing, and mobility imposed by protective clothing that make robotics an attractive potential application.
Local Free-Space Mapping and Path Guidance for Mobile Robots.
1988-03-01
Technical Document 1227, March 1988: Local Free-Space Mapping and Path Guidance for Mobile Robots, by William T. Gex and Nancy L. Campbell. Contents include a description of the robot system, free-space mapping, map construction, a mapping example, sensor unreliability, and path guidance.
The Human Touch: Practical and Ethical Implications of Putting AI and Robotics to Work for Patients.
Banks, Jim
2018-01-01
We live in a time when science fiction can quickly become science fact. Within a generation, the Internet has matured from a technological marvel to a utility, and mobile telephones have redefined how we communicate. Health care, as an industry, is quick to embrace technology, so it is no surprise that the application of programmable robotic systems that can carry out actions automatically and artificial intelligence (AI), e.g., machines that learn, solve problems, and respond to their environment, is being keenly explored.
Vision Guided Intelligent Robot Design And Experiments
NASA Astrophysics Data System (ADS)
Slutzky, G. D.; Hall, E. L.
1988-02-01
The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaption to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5, have been utilized to achieve expert systems approaches in solving real world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provide insight into what may be realistically expected from the next generation of intelligent machines.
Gait parameters extraction by using mobile robot equipped with Kinect v2
NASA Astrophysics Data System (ADS)
Ogawa, Ami; Mita, Akira; Yorozu, Ayanori; Takahashi, Masaki
2016-04-01
The need for in-home monitoring systems is growing because of the increase in single-person households driven by low birth rates and longevity. Among other measures, gait parameters are under the spotlight because their relations with several diseases have been reported. It is known that the gait parameters obtained in a walk test differ from those obtained in daily life, so a system that can measure gait parameters in the real living environment is needed. Generally, gait ability is evaluated by measurement tests such as the Timed Up and Go test and the 6-minute walking test. However, these methods need human measurers, so the accuracy depends on them and a lack of objectivity has been pointed out. Although a precise motion capture system can be used for more objective measurement, it is hard to use for daily measurement because the subjects have to put markers on their bodies. To solve this problem, markerless sensors, such as the Kinect, have been developed and used for gait information acquisition. When such a sensor is attached to a mobile robot, there is no limitation on measurement distance. However, these sensors still pose calibration challenges for gait parameters, and the important gait parameters to be acquired are not well examined. Therefore, in this study, we extract the important parameters for gait analysis, which have correlations with diseases and age differences, and propose gait parameter extraction from depth data of a Kinect v2 mounted on a mobile robot, aiming at application to the living environment.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-01
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042
NASA Astrophysics Data System (ADS)
Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu
2017-02-01
In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To save hardware resources as far as possible, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve generalization ability, a multi-scale Gabor filter is used to pre-process the input image. The final deviation result is obtained by applying a majority vote strategy over all the classifiers' results. The experimental results show that our detector enables the mobile robot to correct its heading direction adaptively while climbing the stairs.
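The paper's exact filter bank and classifier ensemble are not given here. A rough sketch of the two named ingredients, multi-scale Gabor pre-processing followed by a majority vote over several classifiers' left/none/right decisions, is shown below; OpenCV's getGaborKernel is assumed available, and the scales, feature choice, and placeholder classifiers are invented:

    import cv2
    import numpy as np
    from collections import Counter

    def gabor_features(gray, scales=(7, 11, 15), orientations=4):
        """Multi-scale Gabor filtering: filter the image at several kernel sizes
        and orientations and keep simple energy statistics as features."""
        feats = []
        for k in scales:
            for i in range(orientations):
                theta = i * np.pi / orientations
                kern = cv2.getGaborKernel((k, k), 0.4 * k, theta, 0.8 * k, 0.5)
                resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
                feats.extend([resp.mean(), resp.std()])
        return np.array(feats)

    def heading_deviation(gray, classifiers):
        """Ensemble decision: each classifier maps the Gabor feature vector to
        'left', 'none', or 'right'; the majority vote is returned."""
        x = gabor_features(gray)
        votes = [clf(x) for clf in classifiers]
        return Counter(votes).most_common(1)[0][0]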
A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II
NASA Astrophysics Data System (ADS)
Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki
The authors previously proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for the automation of mechanical parking systems using two mobile robots. However, it was difficult to downsize each mobile robot because its length must be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on an "a-robot-for-a-wheel" concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized to less than half the size of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower's motion error estimation algorithm based on the internal force applied to each follower, by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resulting system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction
2010-12-01
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction. Thesis by Eli R. Hooten, submitted to the faculty of the graduate school. Contents include a discussion of interactive modalities and multi-touch interaction.
Le, Duc Van; Oh, Hoon; Yoon, Seokhoon
2013-07-05
In a practical deployment, mobile sensor network (MSN) suffers from a low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing) that uses robotic nodes' controllable mobility. RoCoMAR repeatedly performs link reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSN in terms of network throughput and end-to-end delay.
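The "optimal position" computation in RoCoMAR is not specified in the abstract. As a highly simplified sketch of the repeated link-reinforcement loop it describes (identify the weakest link on the route, then move a robotic relay between its endpoints; here the midpoint stands in for the optimisation), one step might look like:

    def reinforce_weakest_link(path_positions, link_quality):
        """One illustrative link-reinforcement step: find the lowest-quality link
        on the route and propose placing a robotic relay halfway between its
        two endpoints.

        path_positions : list of (x, y) node positions along the path
        link_quality   : list of link-quality values; link i connects node i and i+1
        """
        worst = min(range(len(link_quality)), key=lambda i: link_quality[i])
        (x1, y1), (x2, y2) = path_positions[worst], path_positions[worst + 1]
        relay_position = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        return worst, relay_position

    # Example: the second link is the weakest, so the relay goes between nodes 1 and 2.
    print(reinforce_weakest_link([(0, 0), (10, 0), (25, 5), (30, 10)], [0.9, 0.3, 0.7]))

In the full scheme this step repeats until no further throughput gain is obtained, and the relay then adjusts its position adaptively as the regular nodes move.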
Evaluation of a Home Biomonitoring Autonomous Mobile Robot.
Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei
2016-01-01
An increasingly aged population demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system had not been tested in any home living scenarios. In this study we conducted a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference for the activity recognition results. It was shown that the accuracy is not consistent across activities; that is, the mobile robot could achieve a high success rate for some activities but a poor success rate for others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of the activity recognition, due to the variability of home living daily activities and their transitional processes. The possibility of improving recognition accuracy is also shown.
Training Toddlers Seated on Mobile Robots to Steer Using Force-Feedback Joystick.
Agrawal, S K; Xi Chen; Ragonesi, C; Galloway, J C
2012-01-01
The broader goal of our research is to train infants with special needs to safely and purposefully drive a mobile robot to explore the environment. The hypothesis is that these impaired infants will benefit from mobility in their early years and attain childhood milestones similar to their healthy peers. In this paper, we present an algorithm and training method using a force-feedback joystick with an "assist-as-needed" paradigm for driving training. In this "assist-as-needed" approach, if the child steers the joystick outside a force tunnel centered on the desired direction, the driver experiences a bias force on the hand. Results from a group study on typically developing toddlers show that such a haptic guidance algorithm is superior to training with a conventional joystick. We also provide a case study on two children with special needs, under three years old, who learned to make sharp turns during driving when trained over a five-day period with the force-feedback joystick using the algorithm.
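A minimal sketch of an "assist-as-needed" force tunnel is shown below, assuming a planar joystick direction and a single proportional gain; the tunnel width, gain, and force law are illustrative, not the authors' parameters.

```python
# Minimal sketch of an "assist-as-needed" force tunnel (gains, tunnel width
# and vector conventions are illustrative, not the authors' parameters).
import numpy as np

def assist_force(joystick_dir, desired_dir, tunnel_deg=15.0, k=2.0):
    """Return a corrective force on the hand when the commanded direction
    leaves a +/- tunnel_deg tunnel centred on the desired direction."""
    j = joystick_dir / (np.linalg.norm(joystick_dir) + 1e-9)
    d = desired_dir / (np.linalg.norm(desired_dir) + 1e-9)
    angle = np.degrees(np.arccos(np.clip(np.dot(j, d), -1.0, 1.0)))
    if angle <= tunnel_deg:
        return np.zeros(2)                 # inside the tunnel: no bias force
    # Outside the tunnel: push the hand back toward the desired direction.
    return k * (angle - tunnel_deg) * (d - np.dot(j, d) * j)
```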
Reconfigurable Robust Routing for Mobile Outreach Network
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang
2010-01-01
The Reconfigurable Robust Routing for Mobile Outreach Network (R3MOON) provides advanced communications networking technologies suitable for the lunar surface environment and applications. The R3MOON technology is based on a detailed concept of operations tailored for lunar surface networks, and includes intelligent routing algorithms and a wireless mesh network implementation on AGNC's Coremicro Robots. The product's features include an integrated communication solution incorporating energy efficiency and disruption-tolerance in a mobile ad hoc network, and a real-time control module to provide researchers and engineers a convenient tool for reconfiguration, investigation, and management.
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
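As a rough illustration of the tracking update, the sketch below performs one weighted mean-shift step that blends a per-pixel detection confidence map with an appearance-similarity map; it is a simplification of the ML/EM-like derivation summarized above, and the window size and weighting scheme are assumed.

```python
# Generic sketch of one weighted mean-shift step that blends a detection
# confidence map with appearance similarity (a simplification of the ML/EM-like
# update described above; window size and weighting are assumed).
import numpy as np

def mean_shift_step(pos, detection_map, similarity_map, win=20):
    """pos: (row, col); maps: 2-D arrays of per-pixel scores in [0, 1]."""
    r0, c0 = int(pos[0]), int(pos[1])
    rows = np.arange(max(r0 - win, 0), min(r0 + win + 1, detection_map.shape[0]))
    cols = np.arange(max(c0 - win, 0), min(c0 + win + 1, detection_map.shape[1]))
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    w = detection_map[rr, cc] * similarity_map[rr, cc]      # joint likelihood
    if w.sum() < 1e-9:
        return pos                                          # no evidence: stay
    return (float((w * rr).sum() / w.sum()),                # weighted centroid
            float((w * cc).sum() / w.sum()))
```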
González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina
2017-01-01
Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage. PMID:28075364
Intelligent robot control using an adaptive critic with a task control center and dynamic database
NASA Astrophysics Data System (ADS)
Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.
2006-10-01
The purpose of this paper is to describe the design, development and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can be easily stored in the dynamic database. The multi-task controller also permits a wide range of applications. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
A Null Space Control of Two Wheels Driven Mobile Manipulator Using Passivity Theory
NASA Astrophysics Data System (ADS)
Shibata, Tsuyoshi; Murakami, Toshiyuki
This paper describes a control strategy for the null space motion of a two-wheels-driven mobile manipulator. Recently, robots have been utilized in various industrial fields, and it is preferable for a robot manipulator to have multiple degrees of freedom of motion. Several studies of kinematics for null space motion have been proposed, but the stability analysis of null space motion has not been sufficiently addressed. Furthermore, these approaches apply to stable systems and do not apply to unstable systems. In this research, the base of the manipulator is a two-wheels-driven mobile robot; the combined system, called a two-wheels-driven mobile manipulator, is an unstable system. In the proposed approach, the null space control design uses passivity-based stabilization: the controller is designed so that the closed-loop robot dynamics satisfy passivity. The control strategy stabilizes the robot system by means of a work-space-observer-based approach together with null space control while keeping the end-effector position. The validity of the proposed approach is verified by simulations and experiments on a two-wheels-driven mobile manipulator.
Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei
2016-01-01
We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified the software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement of the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity of neural networks, as well as theoretical inspiration for the next generation of neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks.
Portable control device for networked mobile robots
Feddema, John T.; Byrne, Raymond H.; Bryan, Jon R.; Harrington, John J.; Gladwell, T. Scott
2002-01-01
A handheld control device provides a way for controlling one or multiple mobile robotic vehicles by incorporating a handheld computer with a radio board. The device and software use a personal data organizer as the handheld computer with an additional microprocessor and communication device on a radio board for use in controlling one robot or multiple networked robots.
ERIC Educational Resources Information Center
Ortiz, Octavio Ortiz; Pastor Franco, Juan Ángel; Alcover Garau, Pedro María; Herrero Martín, Ruth
2017-01-01
This paper describes a study of teaching a programming language in a C programming course by having students assemble and program a low-cost mobile robot. Writing their own programs to define the robot's behavior raised students' motivation. Working in small groups, students programmed the robots by using the control structures of structured…
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or remotely operate on space missions both inside and outside of a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve
NASA Technical Reports Server (NTRS)
Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.
2013-01-01
This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.
Space station automation: the role of robotics and artificial intelligence (Invited Paper)
NASA Astrophysics Data System (ADS)
Park, W. T.; Firschein, O.
1985-12-01
Automation of the space station is necessary to make more effective use of the crew, to carry out repairs that are impractical or dangerous, and to monitor and control the many space station subsystems. Intelligent robotics and expert systems play a strong role in automation, and both disciplines are highly dependent on a common artificial intelligence (AI) technology base. The AI technology base provides the reasoning and planning capabilities needed in robotic tasks, such as perception of the environment and planning a path to a goal, and in expert systems tasks, such as control of subsystems and maintenance of equipment. This paper describes automation concepts for the space station, the specific robotic and expert systems required to attain this automation, and the research and development required. It also presents an evolutionary development plan that leads to fully automatic mobile robots for servicing satellites. Finally, we indicate the sequence of demonstrations and the research and development needed to confirm the automation capabilities. We emphasize that advanced robotics requires AI, and that to advance, AI needs the "real-world" problems provided by robotics.
Mohanraj, A. P.; Elango, A.; Reddy, Mutra Chanakya
2016-01-01
Omnidirectional robots can move in all directions without steering their wheels and can rotate clockwise and counterclockwise about their axis. In this paper, we focus only on forward and backward movement in order to analyse the movements of square- and triangle-structured omnidirectional robots. An omnidirectional mobile robot performs differently with different numbers of wheels and different chassis designs. Ongoing research in this field aims to improve the movement accuracy of omnidirectional mobile robots. This paper presents the design of a unique Angle Variable Chassis (AVC) device for linear movement analysis of a three-wheeled omnidirectional mobile robot (TWOMR) at various angles (θ) between the wheels. A basic mobility algorithm is developed by varying the angle between the two selected omnidirectional wheels of the TWOMR. The experiment is carried out by varying the angles (θ = 30°, 45°, 60°, 90°, and 120°) between the two selected omniwheels and analysing the movement of the TWOMR in the forward and reverse directions on a smooth cement surface. The results at the various angles (θ) are then compared to identify the advantages and weaknesses of each configuration. The conclusion of the paper identifies the angle (θ) that gives the most effective TWOMR movement and discusses applications of the TWOMR in different situations. PMID:26981585
Exhaustive geographic search with mobile robots along space-filling curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spires, S.V.; Goldsmith, S.Y.
1998-03-01
Swarms of mobile robots can be tasked with searching a geographic region for targets of interest, such as buried land mines. The authors assume that the individual robots are equipped with sensors tuned to the targets of interest, that these sensors have limited range, and that the robots can communicate with one another to enable cooperation. How can a swarm of cooperating sensate robots efficiently search a given geographic region for targets in the absence of a priori information about the targets' locations? Many of the obvious approaches are inefficient or lack robustness. One efficient approach is to have the robots traverse a space-filling curve. For many geographic search applications, this method is energy-frugal, highly robust, and provides guaranteed coverage in a finite time that decreases as the reciprocal of the number of robots sharing the search task. Furthermore, it minimizes the amount of robot-to-robot communication needed for the robots to organize their movements. This report presents some preliminary results from applying the Hilbert space-filling curve to geographic search by mobile robots.
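The space-filling-curve strategy is easy to sketch: the region is tiled with a 2^k x 2^k grid of cells, and each robot visits successive cell centres along the Hilbert curve, with the curve's index range split among the robots so coverage time falls with team size. The classic iterative index-to-coordinate conversion is shown below; the grid order and cell size are illustrative.

```python
# Sketch: generate search waypoints by walking a Hilbert curve over a
# 2^k x 2^k grid of cells (classic iterative d-to-(x, y) conversion; the
# grid resolution and cell size are illustrative).
def hilbert_d2xy(order, d):
    """Convert curve index d into (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def waypoints(order, cell_size):
    """Cell-centre waypoints in curve order for one robot."""
    n = (1 << order) ** 2
    return [((x + 0.5) * cell_size, (y + 0.5) * cell_size)
            for x, y in (hilbert_d2xy(order, d) for d in range(n))]
```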
Mobile Phenotyping System Using an Aeromotively Stabilized Cable-Driven Robot
NASA Astrophysics Data System (ADS)
Newman, M. B.; Zygielbaum, A. I.
2017-12-01
Agricultural researchers are constantly attempting to generate superior agricultural crops. Whether this means creating crops with greater yield, crops that are more resilient to disease, or crops that can tolerate harsh environments with fewer failures, test plots of these experimental crops must be studied in real-world environments with minimal invasion to determine how they will perform in full-scale agricultural settings. To monitor these crops without interfering with their natural growth, a noninvasive sensor system has been implemented. This system, instituted by the College of Agricultural Sciences and Natural Resources at the University of Nebraska - Lincoln (UNL), uses a system of poles, cables, and winches to support and maneuver a sensor platform above the crops at an outdoor phenotyping site. In this work, we improve upon the UNL outdoor phenotyping system by presenting the concept design for a mobile, cable-driven phenotyping system, as opposed to a permanent phenotyping facility. One major challenge in large-scale, cable-driven robots is stability of the end-effector. As a result, this mobile system uses a novel method of end-effector stabilization based on an onboard rotor drive system, herein referred to as the Instrument Platform Aeromotive Stabilization System (IPASS). A prototype system is developed and analyzed to determine the viability of IPASS.
Verification hybrid control of a wheeled mobile robot and manipulator
NASA Astrophysics Data System (ADS)
Muszynska, Magdalena; Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz
2016-04-01
In this article, innovative approaches to the realization of wheeled mobile robot and manipulator tracking are presented. The concepts include the application of neural-fuzzy systems to compensate for the controlled system's nonlinearities in the tracking control task. The proposed control algorithms work on-line, contain structures that adapt to the changing working conditions of the controlled systems, and do not require preliminary learning. The algorithms were verified on real objects: a Scorbot - ER 4pc robotic manipulator and a Pioneer - 2DX mobile robot.
Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty
NASA Astrophysics Data System (ADS)
Armah, Stephen Kofi
Autonomous control of mobile robots has attracted considerable attention of researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust and/or adaptive, navigation control systems. In spite of enormous reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research work develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for the X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD-feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output. Secondly, white-box and black-box approaches are used to obtain linearized second-order altitude models for the quadrotor, the AR.Drone 2.0. Proportional (P), pole placement or proportional plus velocity (PV), linear quadratic regulator (LQR), and model reference adaptive control (MRAC) controllers are designed and validated through simulations using MATLAB/Simulink. Control input saturation and time delay in the controlled systems are also studied. MATLAB graphical user interface (GUI) and Simulink programs are developed to implement the controllers on the drone. Thirdly, the time delay in the drone's control system is estimated using analytical and experimental methods. In the experimental approach, the transient properties of the experimental altitude responses are compared to those of simulated responses. The analytical approach makes use of the Lambert W function to obtain analytical solutions of scalar first-order delay differential equations (DDEs). A time-delayed P-feedback control system (retarded type) is used in estimating the time delay. Then an improved system performance is obtained by incorporating the estimated time delay in the design of the PV control system (neutral type) and the PV-MRAC control system. Furthermore, the stability of a parametrically perturbed linear time-invariant (LTI) retarded-type system is studied. This is done by analytically calculating the stability radius of the system. Simulation of the control system is conducted to confirm the stability. This robust control design and uncertainty analysis are conducted for first-order and second-order quadrotor models.
Lastly, the robustly designed PV and PV-MRAC control systems are used to autonomously track multiple waypoints. Also, the robustness of the PV-MRAC controller is tested against a baseline PV controller using the payload capability of the drone. It is shown that the PV-MRAC offers several benefits over the fixed-gain approach of the PV controller. The adaptive control is found to offer enhanced robustness to the payload fluctuations.
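The Lambert W step mentioned above can be illustrated for the scalar delayed P-feedback loop xdot(t) = -K x(t - tau): its characteristic roots satisfy s = -K exp(-s tau), so s tau exp(s tau) = -K tau and the rightmost root is W0(-K tau)/tau. The sketch below checks stability from that root; the gain and delay values are illustrative, not the thesis' identified parameters.

```python
# Sketch of the Lambert W stability check for a time-delayed P-feedback loop
# xdot(t) = -K x(t - tau) (gain and delay values are illustrative): the
# characteristic roots satisfy s = -K exp(-s*tau), so s*tau*exp(s*tau) = -K*tau
# and the rightmost root is s0 = W0(-K*tau)/tau.
from scipy.special import lambertw

def rightmost_root(K, tau):
    """Rightmost characteristic root of xdot(t) = -K x(t - tau)."""
    return lambertw(-K * tau, k=0) / tau

K, tau = 1.2, 0.4          # example gain and estimated loop delay [s]
s0 = rightmost_root(K, tau)
print(s0, "stable" if s0.real < 0 else "unstable")
# For this pair K*tau = 0.48 < pi/2, so the delayed loop is still stable.
```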
Homography-based visual servo regulation of mobile robots.
Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash
2005-10-01
A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
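A sketch of the geometric step (not the authors' adaptive controller) is shown below: matched feature points between the current and reference views give a homography, whose decomposition yields candidate rotation and scaled-translation hypotheses. The calibration matrix and point arrays are placeholders.

```python
# Sketch (not the authors' controller): recover candidate rotation /
# scaled-translation hypotheses between the current and reference camera views
# from matched feature points, using OpenCV.  K, cur_pts and ref_pts are
# placeholders for the calibration matrix and matched pixel coordinates.
import cv2

def relative_pose_candidates(cur_pts, ref_pts, K):
    """cur_pts, ref_pts: Nx2 float arrays of corresponding pixels."""
    H, inliers = cv2.findHomography(cur_pts, ref_pts, cv2.RANSAC, 3.0)
    # Decomposition yields up to four (R, t/d, n) hypotheses; the physically
    # consistent one is usually selected with a positive-depth check.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return list(zip(rotations, translations, normals)), inliers
```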
Optimal sensor fusion for land vehicle navigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, J.D.
1990-10-01
Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straightforward method for autonomous path-following has been adopted and requires "digitizing" the desired road network by storing x,y coordinates every 2 m along the roads. The position location system used to define the locations consists of a radio beacon system which triangulates position off two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
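The "optimal" combination can be sketched as inverse-variance weighting of the two independent position estimates; the variances below are illustrative, not the RSV's actual sensor characteristics.

```python
# Minimal sketch of the minimum-variance ("optimal") combination of two
# independent position estimates, e.g. a radio-beacon fix and dead reckoning
# (the variances shown are illustrative, not the RSV's values).
import numpy as np

def fuse(x_beacon, var_beacon, x_dr, var_dr):
    """Inverse-variance weighting; returns fused estimate and its variance."""
    w_b, w_d = 1.0 / var_beacon, 1.0 / var_dr
    x_hat = (w_b * x_beacon + w_d * x_dr) / (w_b + w_d)
    return x_hat, 1.0 / (w_b + w_d)

x, var = fuse(np.array([102.0, 48.0]), 4.0,     # beacon fix, variance 4 m^2
              np.array([100.0, 50.0]), 1.0)     # dead reckoning, 1 m^2
print(x, var)   # fused estimate is pulled toward the lower-variance source
```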
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
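A generic sketch of the particle filter measurement update for a bearing-only observation is shown below: particles representing candidate source positions are reweighted by how well their predicted direction from the robot matches the measured direction, then resampled. The measurement-noise value and jitter are assumptions, and the information-gain action selection is not reproduced.

```python
# Generic sketch of a bearing-only particle filter update for source
# localization: weight particles by how well their predicted direction from
# the robot matches the measured direction, then resample.  The measurement
# noise sigma and the jitter magnitude are assumed values.
import numpy as np

def update(particles, robot_xy, measured_bearing, sigma=np.radians(10)):
    """particles: Nx2 candidate source positions (world frame)."""
    dx = particles[:, 0] - robot_xy[0]
    dy = particles[:, 1] - robot_xy[1]
    predicted = np.arctan2(dy, dx)
    err = np.angle(np.exp(1j * (predicted - measured_bearing)))  # wrap to [-pi, pi]
    w = np.exp(-0.5 * (err / sigma) ** 2)
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + np.random.normal(0, 0.05, particles.shape)  # jitter
```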
Hardware platform for multiple mobile robots
NASA Astrophysics Data System (ADS)
Parzhuber, Otto; Dolinsky, D.
2004-12-01
This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link will carry control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link will carry commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, different sensors, a stereo camera and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows a straightforward integration with the internet protocols TCP/IP. For the inspection of the environment the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable the detection of obstacles, measurement of distance and creation of a map of the room.
A dragline-forming mobile robot inspired by spiders.
Wang, Liyu; Culha, Utku; Iida, Fumiya
2014-03-01
Mobility of wheeled or legged machines can be significantly increased if they are able to move from a solid surface into a three-dimensional space. Although that may be achieved by addition of flying mechanisms, the payload fraction will be the limiting factor in such hybrid mobile machines for many applications. Inspired by spiders producing draglines to assist locomotion, the paper proposes an alternative mobile technology where a robot achieves locomotion from a solid surface into a free space. The technology resembles the dragline production pathway in spiders to a technically feasible degree and enables robots to move with thermoplastic spinning of draglines. As an implementation, a mobile robot has been prototyped with thermoplastic adhesives as source material of the draglines. Experimental results show that a dragline diameter range of 1.17-5.27 mm was achievable by the 185 g mobile robot in descending locomotion from the solid surface of a hanging structure with a power consumption of 4.8 W and an average speed of 5.13 cm min(-1). With an open-loop controller consisting of sequences of discrete events, the robot has demonstrated repeatable dragline formation with a relative deviation within -4% and a length close to the metre scale.
NASA Astrophysics Data System (ADS)
Ou, Meiying; Sun, Haibin; Gu, Shengwei; Zhang, Yangyi
2017-11-01
This paper investigates the distributed finite-time trajectory tracking control for a group of nonholonomic mobile robots with time-varying unknown parameters and external disturbances. At first, the tracking error system is derived for each mobile robot with the aid of a global invertible transformation, which consists of two subsystems, one is a first-order subsystem and another is a second-order subsystem. Then, the two subsystems are studied respectively, and finite-time disturbance observers are proposed for each robot to estimate the external disturbances. Meanwhile, distributed finite-time tracking controllers are developed for each mobile robot such that all states of each robot can reach the desired value in finite time, where the desired reference value is assumed to be the trajectory of a virtual leader whose information is available to only a subset of the followers, and the followers are assumed to have only local interaction. The effectiveness of the theoretical results is finally illustrated by numerical simulations.
A Developmental Learning Approach of Mobile Manipulator via Playing
Wu, Ruiqi; Zhou, Changle; Chao, Fei; Zhu, Zuyuan; Lin, Chih-Min; Yang, Longzhi
2017-01-01
Inspired by infant development theories, a robotic developmental model combined with game elements is proposed in this paper. This model does not require the definition of specific developmental goals for the robot; instead, the developmental goals are implied in the goals of a series of game tasks. The games are characterized as a sequence of game modes based on the complexity of the game tasks, from simple to complex, and the task complexity is determined by the application of developmental constraints. Given a current mode, the robot switches to play in a more complicated game mode when it cannot find any new salient stimuli in the current mode. By doing so, the robot gradually achieves its developmental goals by playing different modes of games. In the experiment, the game was instantiated on a mobile robot with the playing task of picking up toys, and the game is designed with a simple game mode and a complex game mode. A developmental algorithm, “Lift-Constraint, Act and Saturate,” is employed to drive the mobile robot from the simple mode to the complex one. The experimental results show that the mobile manipulator is able to successfully learn the mobile grasping ability after playing simple and complex games, which is promising for developing robotic abilities to solve complex tasks using games. PMID:29046632
NASA Astrophysics Data System (ADS)
Dima, M.; Francu, C.
2016-08-01
This paper presents a way to expand the field of use of the laser tracker and SmartTrack sensor localization device, used lately for localizing the end effector of industrial robots, to the localization of mobile construction robots. The paper presents the equipment along with its characteristics and determines the relationships for the localization coordinates by comparison with the forward kinematics of the industrial robot's spherical arm (a positioning mechanism in spherical coordinates) and an orientation mechanism with three revolute axes. At the end of the paper the accuracy of the mobile robot's localization is analysed.
Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search
Song, Kai; Liu, Qi; Wang, Qi
2011-01-01
Bionic technology provides a new source of inspiration for mobile robot navigation since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
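The heading-direction rule described above can be sketched as a simple proportional controller for a differential-drive robot: turn in proportion to the wrapped deviation between the expected and measured headings, and slow down when the deviation is large. The gains and speed limit are assumed values.

```python
# Minimal sketch of the heading-deviation navigation rule described above
# (gains, speed limits and the sensor interface are assumed).
import math

def heading_command(current_heading, expected_heading,
                    v_max=0.3, k_turn=1.5):
    """Return (linear velocity, angular velocity) for a differential drive."""
    deviation = math.atan2(math.sin(expected_heading - current_heading),
                           math.cos(expected_heading - current_heading))
    omega = k_turn * deviation                    # turn toward the target heading
    v = v_max * max(0.0, math.cos(deviation))     # slow down when far off-heading
    return v, omega
```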
Fast instantaneous center of rotation estimation algorithm for a skid-steered robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2015-05-01
Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimation of the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back projection of the optical flow field to the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality by comparison of the trajectories estimated by the algorithm with the data from the motion capture system.
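Assuming the optical flow has already been back-projected to ground-plane points and velocities, the instantaneous center of rotation can be recovered by a linear least-squares fit of the planar rigid-body field, as sketched below; this is a generic formulation, not the paper's exact implementation.

```python
# Sketch (not the paper's implementation): given optical-flow vectors already
# back-projected to ground-plane points p_i and velocities v_i, fit the planar
# rigid-body field v = omega * J (p - c) in least squares.  Unknowns are
# omega and m = omega * c, so the ICR is c = m / omega.
import numpy as np

def fit_icr(points, velocities):
    """points, velocities: Nx2 arrays in the ground plane (robot frame)."""
    A, b = [], []
    for (px, py), (vx, vy) in zip(points, velocities):
        A.append([-py, 0.0, 1.0]); b.append(vx)   # vx = -omega*py + my
        A.append([px, -1.0, 0.0]); b.append(vy)   # vy =  omega*px - mx
    (omega, mx, my), *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    icr = None if abs(omega) < 1e-6 else (mx / omega, my / omega)
    return omega, icr   # angular rate [rad/s] and instantaneous centre of rotation
```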
Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.
Lee, Donghwa; Myung, Hyun
2014-07-11
In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. Low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in low dynamic environments, robots have difficulty recognizing the repositioning of objects, unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environment then cause groups of false loop closures when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, which represent robot poses, are grouped according to grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
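A heavily simplified version of the pruning idea is sketched below: each 2-D loop-closure constraint is tested against the current pose estimates with a Mahalanobis residual and dropped if inconsistent. The paper's grouping of nodes with noise covariances is omitted here, and the chi-square threshold is an assumed value.

```python
# Simplified sketch of pruning suspicious loop-closure constraints from a 2-D
# pose graph by a Mahalanobis residual test against the current estimate.
# (The paper groups nodes with noise covariances first; here the grouping is
# omitted and the chi-square threshold is an assumed value.)
import numpy as np

def relative_pose(xi, xj):
    """Pose of node j expressed in node i's frame; poses are (x, y, theta)."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy,
                     -s * dx + c * dy,
                     np.arctan2(np.sin(xj[2] - xi[2]), np.cos(xj[2] - xi[2]))])

def prune(constraints, poses, chi2_thresh=7.81):   # 95% quantile, 3 DOF
    """constraints: list of (i, j, z_ij, Omega) loop closures."""
    kept = []
    for i, j, z, omega in constraints:
        r = relative_pose(poses[i], poses[j]) - z
        r[2] = np.arctan2(np.sin(r[2]), np.cos(r[2]))      # wrap the angle
        if r @ omega @ r <= chi2_thresh:
            kept.append((i, j, z, omega))                  # consistent: keep
    return kept
```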
Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors
Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar
2015-01-01
This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
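A sketch of the first (map-based) variant is given below: each omnidirectional image is summarized by a single Radon-transform descriptor, and the query is assigned to the map position with the nearest descriptor. The resizing, angle sampling, and distance metric are assumptions and differ from the paper's descriptor in detail.

```python
# Sketch of global-appearance localization: describe each omnidirectional
# image with one Radon-transform descriptor and return the map position whose
# descriptor is nearest.  The resizing, angle sampling and distance metric are
# assumptions; the paper's descriptor construction differs in detail.
import numpy as np
from skimage.transform import radon, resize

def describe(image, size=64, n_angles=90):
    img = resize(image, (size, size), anti_aliasing=True)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    d = radon(img, theta=theta, circle=False).ravel()
    return d / (np.linalg.norm(d) + 1e-12)

def localize(query_image, map_descriptors, map_positions):
    q = describe(query_image)
    dists = [np.linalg.norm(q - d) for d in map_descriptors]
    return map_positions[int(np.argmin(dists))]
```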
A neural network-based exploratory learning and motor planning system for co-robots
Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano
2015-01-01
Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640
Using Visual Odometry to Estimate Position and Attitude
NASA Technical Reports Server (NTRS)
Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark
2007-01-01
A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
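The core motion-estimation step can be sketched as follows: given the 3D positions of tracked features in the previous and current stereo frames, the rigid rotation and translation are recovered with the SVD-based (Kabsch) method. Feature detection, stereo triangulation, and the robust outlier-rejection loop used on the rovers are omitted.

```python
# Sketch of the core motion-estimation step in stereo visual odometry: given
# the 3-D positions of tracked features in the previous and current frames,
# recover the rigid rotation R and translation t with the SVD (Kabsch) method.
# Feature detection, stereo triangulation and the robust (RANSAC) outer loop
# used on the rovers are omitted here.
import numpy as np

def rigid_transform(prev_pts, curr_pts):
    """prev_pts, curr_pts: Nx3 arrays of matched 3-D points.
    Returns R, t with curr ~= R @ prev + t."""
    p_mean, q_mean = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    P, Q = prev_pts - p_mean, curr_pts - q_mean
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # proper rotation (det = +1)
    t = q_mean - R @ p_mean
    return R, t
```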
Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development.
Bostelman, Roger; Hong, Tsai; Legowik, Steven
2016-01-01
Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems.
NASA Astrophysics Data System (ADS)
Konolige, Kurt G.; Gutmann, Steffen; Guzzoni, Didier; Ficklin, Robert W.; Nicewarner, Keith E.
1999-08-01
Mobile robot hardware and software are developing to the point where interesting applications for groups of such robots can be contemplated. We envision a set of mobots acting to map and perform surveillance or other tasks within an indoor environment (the Sense Net). A typical application of the Sense Net would be to detect survivors in buildings damaged by earthquake or other disaster, where human searchers would be put at risk. As a team, the Sense Net could reconnoiter a set of buildings faster, more reliably, and more comprehensively than an individual mobot. The team, for example, could dynamically form subteams to perform tasks that cannot be done by individual robots, such as measuring the range to a distant object by forming a long-baseline stereo sensor from a pair of mobots. In addition, the team could automatically reconfigure itself to handle contingencies such as disabled mobots. This paper is a report of our current progress in developing the Sense Net, after the first year of a two-year project. In our approach, each mobot has sufficient autonomy to perform several tasks, such as mapping unknown areas, navigating to specific positions, and detecting, tracking, characterizing, and classifying human and vehicular activity. We detail how some of these tasks are accomplished, and how the mobot group is tasked.
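The long-baseline stereo idea can be sketched as a two-ray triangulation: each mobot measures a world-frame bearing to the same distant object from a known position, and the target estimate is the ray intersection. The poses and bearings below are illustrative.

```python
# Sketch of the two-robot "long baseline" ranging idea: each mobot measures a
# bearing to the same distant object from a known position, and the target is
# the intersection of the two rays.  Poses and bearings here are illustrative.
import numpy as np

def triangulate(p1, bearing1, p2, bearing2):
    """p1, p2: (x, y) robot positions; bearings: world-frame angles [rad]."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-6:
        return None                          # rays nearly parallel: no fix
    s, _ = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + s * d1           # target position estimate

# Example: robots 10 m apart both sighting a target near (20, 15).
print(triangulate((0.0, 0.0), np.arctan2(15, 20),
                  (10.0, 0.0), np.arctan2(15, 10)))
```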
Development of robotic mobile platform with the universal chassis system
NASA Astrophysics Data System (ADS)
Ryadchikov, I.; Nikulchev, E.; Sechenev, S.; Drobotenko, M.; Svidlov, A.; Volkodav, P.; Feshin, A.
2018-02-01
The problem of stabilizing the position of mobile devices is extremely relevant at the modern level of technology development; it includes, for example, the problems of stabilizing aircraft and stabilizing the pitching of ships. In the laboratory of robotics and mechatronics of Kuban State University, a robot is being developed with additional internal degrees of freedom responsible for compensating deflections: a dynamic stabilization system.
A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring
Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio
2016-01-01
This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State of the art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regards to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505
Assistant Personal Robot (APR): Conception and Application of a Tele-Operated Assisted Living Robot.
Clotet, Eduard; Martínez, Dani; Moreno, Javier; Tresanchez, Marcel; Palacín, Jordi
2016-04-28
This paper presents the technical description, mechanical design, electronic components, software implementation and possible applications of a tele-operated mobile robot designed as an assisted living tool. This robotic concept has been named Assistant Personal Robot (or APR for short) and has been designed as a remotely telecontrolled robotic platform built to provide social and assistive services to elderly people and those with impaired mobility. The APR features a fast high-mobility motion system adapted for tele-operation in plain indoor areas, which incorporates a high-priority collision avoidance procedure. This paper presents the mechanical architecture, electrical fundaments and software implementation required in order to develop the main functionalities of an assistive robot. The APR uses a tablet in order to implement the basic peer-to-peer videoconference and tele-operation control combined with a tactile graphic user interface. The paper also presents the development of some applications proposed in the framework of an assisted living robot.
NASA Astrophysics Data System (ADS)
Tamura, Sho; Maeyama, Shoichi
Rescue robots have been actively developed since the Hanshin-Awaji (Kobe) Earthquake. Recently, rescue robots have also been developed to reduce the risk of secondary disasters in NBC terror attacks and critical accidents. Against this background, a development project for mobile RT systems in collapsed structures has been started, and this research participates in that project. Image pointing is a useful control interface for a rescue robot because it allows the robot to be controlled with a simple operation. However, the conventional method does not work on rough terrain. In this research, we propose a system that drives the robot to a target position on rough terrain. The system is constructed from methods that convert the pointed destination into a vector and control the 3D-localized robot to follow that vector. Finally, the proposed system is evaluated through remote-control experiments with a mobile robot on a slope, and its feasibility is confirmed.
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams, derived from constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to address the system's perceived need(s) in a timely manner. This method improves on initial developments in this area by including the often overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operational and performance improvements for large-scale multi-robot coordination and control.
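IDARA itself is not reproduced here; the sketch below only illustrates the general pattern the abstract describes, a broad default behavior refined by specific assignments chosen from each unit's utility and capability. All names and the greedy selection rule are hypothetical.

```python
def allocate(tasks, robots, utility):
    """Greedy sketch of capability/utility-based allocation: every robot first
    receives a broad default behavior, then the robot with the highest utility
    for each task is given that specific task. `utility(robot, task)` returns
    a score, or None if the robot lacks the capability. Illustrative only."""
    assignment = {r: "patrol" for r in robots}   # broad, general response
    for task in tasks:
        scores = [(utility(r, task), r) for r in robots
                  if utility(r, task) is not None]
        if scores:
            _, best = max(scores)
            assignment[best] = task              # specific, mediated response
    return assignment

# Toy example with two robots and one detected need.
robots = ["r1", "r2"]
def utility(robot, task):
    return {"r1": 0.2, "r2": 0.9}[robot] if task == "investigate_mine_gap" else None
print(allocate(["investigate_mine_gap"], robots, utility))
```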
Sandia National Laboratories proof-of-concept robotic security vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, J.J.; Jones, D.P.; Klarer, P.R.
1989-01-01
Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid in testing interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.
Task Adaptive Walking Robots for Mars Surface Exploration
NASA Technical Reports Server (NTRS)
Huntsberger, Terry; Hickey, Gregory; Kennedy, Brett; Aghazarian, Hrand
2000-01-01
There are exciting opportunities for robot science that lie beyond the reach of current manipulators, rovers, balloons, penetrators, etc. Examples include mobile exploration of the densely cratered Mars highlands, of asteroids, and of moons. These sites are believed to be rich in geologic history and mineralogical detail, but are difficult to robotically access and sample. The surface terrains are rough and changeable, with variable porosity and dust layering; and the small bodies present further challenges of low-temperature, micro-gravity environments. Even the more benign areas of Mars are highly variegated in character (>VL2 rock densities), presenting significant risk to conventional rovers. The development of compact walking robots would have applications to the current mission set for Mars surface exploration, as well as enabling future Mars Outpost missions, asteroid rendezvous missions for the Solar System Exploration Program (SSE), and the mechanical assembly/inspection of large space platforms for the Human Exploration and Development of Space (HEDS).
Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness
NASA Technical Reports Server (NTRS)
Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.
2009-01-01
Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict the future motions of those people for better path planning. Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.
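The separation of factors rests on multilinear (tensor) decomposition. As a hedged illustration of the underlying machinery rather than the authors' pipeline, the sketch below computes the HOSVD mode matrices of a data tensor (identity x pose x illumination x pixels) via the SVD of each mode-n unfolding; the tensor dimensions are invented.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_factors(tensor):
    """Left singular vectors of each unfolding (the HOSVD mode matrices)."""
    return [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
            for m in range(tensor.ndim)]

# Illustrative data tensor: identities x poses x illuminations x pixels.
rng = np.random.default_rng(0)
D = rng.standard_normal((5, 7, 3, 64))
U_id, U_pose, U_illum, U_pix = hosvd_factors(D)
print(U_pose.shape)   # (7, 7): an orthonormal basis for the head-pose mode
```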
Design and implementation air quality monitoring robot
NASA Astrophysics Data System (ADS)
Chen, Yuanhua; Li, Jie; Qi, Chunxue
2017-01-01
Robots applied in environmental protection can overcome the limitations in working environment, scope and mode of existing environmental monitoring and pollution abatement equipment, and can drive innovation and improvement in river-basin, atmospheric, emergency-response and pollution treatment applications. At present, however, the relevant technology remains immature, with limited research and investment. Although equipment companies have achieved some results in water quality monitoring, pipeline monitoring and sewage disposal, overall technological progress is still slow, and no mature product has yet been formed. As a result, the market demands a new type of device, better suited to environmental protection, built on the success of robots in other fields. This paper designs and realizes a tracked mobile robot for air quality monitoring, which can be used to monitor air quality during pollution accidents in industrial parks and for routine management.
An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry
NASA Astrophysics Data System (ADS)
Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro
This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning by dead-reckoning alone. The 3D laser scanner has a wide field of view and a uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive, utilizing a particle filter on a 2D grid map generated by projecting 3D points onto the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
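A rough sketch of the localization idea, weighting particles against a 2D grid obtained by projecting 3D scan points onto the ground, is given below. The grid resolution, measurement model and toy data are illustrative assumptions, not the platform's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def project_to_grid(points_3d, resolution=0.1, size=200):
    """Build an occupancy grid by projecting 3D points onto the ground plane."""
    grid = np.zeros((size, size), dtype=bool)
    ij = np.floor(points_3d[:, :2] / resolution).astype(int) + size // 2
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[ok, 0], ij[ok, 1]] = True
    return grid

def weight_particles(particles, scan_xy, grid, resolution=0.1, size=200):
    """Score each particle (x, y, yaw) by how many scan points land on
    occupied grid cells after being transformed by the particle pose."""
    weights = np.zeros(len(particles))
    for k, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        pts = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        ij = np.floor(pts / resolution).astype(int) + size // 2
        ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
        weights[k] = grid[ij[ok, 0], ij[ok, 1]].sum()
    return weights / max(weights.sum(), 1e-9)

# Toy run: a wall of points, particles scattered near the true pose (0, 0, 0).
wall = np.column_stack([np.linspace(1, 3, 50), np.full(50, 2.0), np.zeros(50)])
grid = project_to_grid(wall)
particles = rng.normal([0, 0, 0], [0.2, 0.2, 0.05], size=(100, 3))
w = weight_particles(particles, wall[:, :2], grid)
print(particles[np.argmax(w)])   # best-scoring pose hypothesis
```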
The Embudito Mission: A Case Study of the Systematics of Autonomous Ground Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
EICKER,PATRICK J.
2001-02-01
Ground mobile robots are much in the mind of defense planners at this time, being considered for a significant variety of missions with a diversity ranging from logistics supply to reconnaissance and surveillance. While there has been a very large amount of basic research funded in the last quarter century devoted to mobile robots and their supporting component technologies, little of this science base has been fully developed and deployed--notable exceptions being NASA's Mars rover and several terrestrial derivatives. The material in this paper was developed as a first exemplary step in the development of a more systematic approach to the R and D of ground mobile robots.
A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots
Pan, Shaowu; Shi, Liwei; Guo, Shuxiang
2015-01-01
A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
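The prediction step described above, a Kalman filter with a second-order (constant-acceleration) motion model that proposes where the compressive tracker should sample candidate patches, can be sketched as follows; the noise matrices and frame rate are placeholders rather than the authors' tuning.

```python
import numpy as np

dt = 1.0 / 30.0                      # one video frame
# State per axis: [position, velocity, acceleration]; x and y stacked (6-D).
A1 = np.array([[1, dt, 0.5 * dt * dt],
               [0, 1, dt],
               [0, 0, 1]])
A = np.kron(np.eye(2), A1)           # block-diagonal transition for x and y
H = np.zeros((2, 6)); H[0, 0] = H[1, 3] = 1.0   # only (x, y) is measured
Q = 1e-2 * np.eye(6)
R = 4.0 * np.eye(2)

x = np.zeros(6); P = np.eye(6) * 10.0

def predict():
    global x, P
    x = A @ x
    P = A @ P @ A.T + Q
    return x[0], x[3]                # predicted target centre

def update(z):
    global x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(z) - H @ x)
    P = (np.eye(6) - K @ H) @ P

# Each frame: predict a centre, sample candidate patches around it for the
# compressive tracker, then update with the location the tracker selects.
cx, cy = predict()
update((cx + 1.5, cy - 0.8))         # illustrative measurement
print(predict())
```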
Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei
2016-01-01
We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, consisting of a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent with high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, and may also provide theoretical inspiration for the next generation of neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks. PMID:27806074
Evolutionary programming-based univector field navigation method for fast mobile robots.
Kim, Y J; Kim, J H; Kwon, D S
2001-01-01
Most navigation techniques with obstacle avoidance do not consider the robot's orientation at the target position. These techniques deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation which introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position and with obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out for a fast-moving mobile robot to demonstrate the effectiveness of the proposed scheme.
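The trained univector fields themselves are not available, but the idea of a normalized two-dimensional field that encodes both the goal position and a desired arrival orientation, plus a repulsive term for obstacles, can be hand-written in simplified form. The guide-point construction and gains below are assumptions standing in for the evolutionary-programming-trained fields.

```python
import numpy as np

def univector_field(pos, goal, arrival_yaw, obstacles,
                    guide_dist=0.5, avoid_radius=0.4):
    """Simplified normalized 2-D field: far from the goal it points at a guide
    point placed behind the goal along the desired arrival heading, near the
    goal it points at the goal itself, and nearby obstacles add repulsion.
    A hand-written stand-in for the trained fields in the paper."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    u_arr = np.array([np.cos(arrival_yaw), np.sin(arrival_yaw)])
    guide = goal - guide_dist * u_arr
    target = goal if np.linalg.norm(goal - pos) < guide_dist else guide
    field = target - pos
    field = field / (np.linalg.norm(field) + 1e-9)
    for obs in obstacles:                         # repulsive component
        d = pos - np.asarray(obs, float)
        dist = np.linalg.norm(d)
        if dist < avoid_radius:
            field += (avoid_radius - dist) / avoid_radius * d / (dist + 1e-9)
    return field / (np.linalg.norm(field) + 1e-9)  # unit vector to steer along

print(univector_field((0, 0), (2, 1), arrival_yaw=0.0, obstacles=[(1.0, 0.4)]))
```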
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate; e.g., if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The input from the real camera and the input from the virtual camera are then compared using local Gaussians, creating an error mask that indicates the main differences between them. This mask is used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
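The comparison step, smoothing the real and rendered views with local Gaussians and turning their difference into an error mask that picks the next fixation point, might be sketched with OpenCV as follows; the kernel size and the synthetic test images are illustrative.

```python
import cv2
import numpy as np

def next_fixation(real_gray, virtual_gray, ksize=15):
    """Compare real and rendered views after local Gaussian smoothing and
    return the pixel of maximum disagreement as the next point to fixate."""
    r = cv2.GaussianBlur(real_gray, (ksize, ksize), 0)
    v = cv2.GaussianBlur(virtual_gray, (ksize, ksize), 0)
    error_mask = cv2.absdiff(r, v)
    _, _, _, max_loc = cv2.minMaxLoc(error_mask)
    return max_loc, error_mask

# Illustrative run on synthetic images (a bright patch missing from the model).
real = np.zeros((120, 160), np.uint8); real[40:60, 90:110] = 255
virtual = np.zeros((120, 160), np.uint8)
fixation, mask = next_fixation(real, virtual)
print(fixation)
```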
Interaction dynamics of multiple mobile robots with simple navigation strategies
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
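A particle-model sketch of the kind of strategy analyzed, a homing vector deflected by any robot that enters the visibility cone, is shown below; the cone half-angle and avoidance gain are arbitrary illustrative values.

```python
import numpy as np

def navigate(pos, heading, home, others,
             cone_half_angle=np.pi / 4, avoid_gain=1.0):
    """Particle-model sketch: homing vector plus a repulsive deflection for
    every other robot that falls inside this robot's visibility cone."""
    pos = np.asarray(pos, float)
    v = np.asarray(home, float) - pos                # homing component
    v = v / (np.linalg.norm(v) + 1e-9)
    look = np.array([np.cos(heading), np.sin(heading)])
    for other in others:
        d = np.asarray(other, float) - pos
        dist = np.linalg.norm(d)
        if dist < 1e-9:
            continue
        # Inside the visibility cone if the bearing to the other robot is small.
        if np.arccos(np.clip(np.dot(d / dist, look), -1.0, 1.0)) < cone_half_angle:
            v -= avoid_gain * d / (dist ** 2)        # steer away, stronger when close
    return v / (np.linalg.norm(v) + 1e-9)

print(navigate(pos=(0, 0), heading=0.0, home=(5, 0), others=[(1.0, 0.2)]))
```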
Cooperative Robots to Observe Moving Targets: Review.
Khan, Asif; Rinner, Bernhard; Cavallaro, Andrea
2018-01-01
The deployment of multiple robots for achieving a common goal helps to improve the performance, efficiency, and/or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss the control techniques: cooperative multirobot observation of multiple moving targets, cooperative search, acquisition, and track, cooperative tracking, and multirobot pursuit evasion. We also identify the five major elements that characterize this problem, namely, the coordination method, the environment, the target, the robot and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is useful also for related multirobot applications.
NASA Astrophysics Data System (ADS)
Hsu, Roy CHaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting
2013-01-01
The main purpose of this paper is to use a machine learning method together with the Kinect and its body-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back propagation neural network to remotely command a mobile robot for certain actions via Bluetooth. The experimental results show that the designed mobile robot remote control system can achieve, on average, more than 96% accurate identification of 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
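As a hedged stand-in for the paper's back-propagation network, the sketch below trains a small multilayer perceptron on flattened skeleton-joint coordinates and maps the predicted gesture class to a robot command string; the training data, joint count and command table are invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each sample: flattened 3-D coordinates of a few upper-body joints (toy data).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((700, 18))        # 6 joints x (x, y, z)
y_train = rng.integers(0, 7, size=700)          # 7 gesture classes, as in the paper

# Back-propagation network: one hidden layer, trained by gradient descent.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Hypothetical mapping from recognized gesture to an e-puck command string.
COMMANDS = {0: "STOP", 1: "FORWARD", 2: "BACK", 3: "LEFT",
            4: "RIGHT", 5: "SPIN", 6: "BEEP"}

def gesture_to_command(skeleton_features):
    gesture = int(clf.predict(skeleton_features.reshape(1, -1))[0])
    return COMMANDS[gesture]        # would be sent to the robot over Bluetooth

print(gesture_to_command(rng.standard_normal(18)))
```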
Passive Infrared Thermographic Imaging for Mobile Robot Object Identification
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Fehlman, W. L.
2010-02-01
The usefulness of thermal infrared imaging as a mobile robot sensing modality is explored, and a set of thermal-physical features used to characterize passive thermal objects in outdoor environments is described. Objects that extend laterally beyond the thermal camera's field of view, such as brick walls, hedges, picket fences, and wood walls as well as compact objects that are laterally within the thermal camera's field of view, such as metal poles and tree trunks, are considered. Classification of passive thermal objects is a subtle process since they are not a source for their own emission of thermal energy. A detailed analysis is included of the acquisition and preprocessing of thermal images, as well as the generation and selection of thermal-physical features from these objects within thermal images. Classification performance using these features is discussed, as a precursor to the design of a physics-based model to automatically classify these objects.
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws; it is designed to realize the functions of observing, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and toward sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed in Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed in the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute the navigation and inspection tasks.
Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors.
Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen
2016-12-20
Improved ranging accuracy is obtained through the development of a novel ultrasonic sensor ranging algorithm which, unlike the conventional ranging algorithm, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and for localization of the ultrasonic sensor relative to the inclined plate reference frame. The ultrasonic sensor scanning method is then leveraged for the omni-directional localization of a mobile robot: the ultrasonic sensors are installed on the mobile robot and rotate with it as it spins, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the frontal direction, up to ±6.21 mm for the lateral direction and up to ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method.
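Once a scan has collected the beam angles and ranges that hit the inclined plate, the robot's distance and heading relative to the plate can be recovered by fitting a line to the echo points. The sketch below shows only that geometric step, not the paper's full ranging model with divergence and incidence angle compensation.

```python
import numpy as np

def locate_against_plate(scan):
    """scan: iterable of (beam_angle_rad, range_m) readings that hit the plate.
    Returns (perpendicular_distance, robot_heading_relative_to_plate_rad)."""
    pts = np.array([[r * np.cos(a), r * np.sin(a)] for a, r in scan])
    # Total-least-squares line fit: the plate direction is the dominant
    # singular vector of the centred points, its normal the other one.
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction, normal = vt[0], vt[1]
    distance = abs(centroid @ normal)            # robot sits at the origin
    heading_wrt_plate = np.arctan2(direction[1], direction[0])
    return distance, heading_wrt_plate

# Toy scan of a plate 1 m ahead whose normal is 10 degrees off the x-axis.
angles = np.linspace(-0.4, 0.4, 9)
plate_dist, plate_tilt = 1.0, np.deg2rad(10)
ranges = plate_dist / np.cos(angles - plate_tilt)   # flat-plate geometry
print(locate_against_plate(zip(angles, ranges)))
```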
Bioinspired Intelligent Algorithm and Its Applications for Mobile Robot Control: A Survey.
Ni, Jianjun; Wu, Liuying; Fan, Xinnan; Yang, Simon X
2016-01-01
Bioinspired intelligent algorithms (BIAs) are a kind of intelligent computing method with a more lifelike biological working mechanism than other types. BIAs have made significant progress both in the understanding of neuroscience and biological systems and in application to various fields. Mobile robot control is one of the main application fields of BIAs and has attracted more and more attention, because mobile robots can be used widely and general artificial intelligence algorithms meet a development bottleneck in this field, such as complex computing and dependence on high-precision sensors. This paper presents a survey of recent research in BIAs, which focuses on the realization of various BIAs based on different working mechanisms and their applications in mobile robot control, to help in understanding BIAs comprehensively and clearly. The survey has four primary parts: a classification of BIAs based on biomimetic mechanism, a summary of several typical BIAs at different levels, an overview of current applications of BIAs in mobile robot control, and a description of some possible future directions for research.
Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration
Losada, Diego P.; Fernández, Joaquín L.; Paz, Enrique; Sanz, Rafael
2017-01-01
In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead. PMID:28467381
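The article defines its own frame layout, which is not reproduced here; the sketch below only illustrates the general publish/consume pattern on a CAN bus using the python-can package over Linux SocketCAN. The arbitration ID and payload encoding are invented for the example.

```python
import struct
import can   # python-can; assumes a SocketCAN interface such as 'can0' is up

SONAR_FRAME_ID = 0x120          # hypothetical ID for a sonar module

def publish_sonar(bus, sensor_index, range_mm):
    """Pack one sonar reading into an 8-byte CAN frame and send it."""
    payload = struct.pack("<BHxxxxx", sensor_index, range_mm)
    bus.send(can.Message(arbitration_id=SONAR_FRAME_ID,
                         data=payload, is_extended_id=False))

def consume(bus):
    """Block for the next frame and decode it if it is a sonar reading."""
    msg = bus.recv(timeout=1.0)
    if msg is not None and msg.arbitration_id == SONAR_FRAME_ID:
        index, range_mm = struct.unpack_from("<BH", msg.data)
        return index, range_mm
    return None

if __name__ == "__main__":
    with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
        publish_sonar(bus, sensor_index=3, range_mm=742)
        print(consume(bus))
```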
Integrating Mobile Robotics and Vision with Undergraduate Computer Science
ERIC Educational Resources Information Center
Cielniak, G.; Bellotto, N.; Duckett, T.
2013-01-01
This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…
Speed Daemon: Experience-Based Mobile Robot Speed Scheduling
2014-10-01
A Contest-Oriented Project for Learning Intelligent Mobile Robots
ERIC Educational Resources Information Center
Huang, Hsin-Hsiung; Su, Juing-Huei; Lee, Chyi-Shyong
2013-01-01
A contest-oriented project for undergraduate students to learn implementation skills and theories related to intelligent mobile robots is presented in this paper. The project, related to Micromouse, Robotrace (Robotrace is the title of Taiwanese and Japanese robot races), and line-maze contests was developed by the embedded control system research…
Waldman, Genna; Yang, Chung-Yong; Ren, Yupeng; Liu, Lin; Guo, Xin; Harvey, Richard L; Roth, Elliot J; Zhang, Li-Qun
2013-01-01
To investigate the effects of controlled passive stretching and active movement training using a portable rehabilitation robot on stroke survivors with ankle and mobility impairment, twenty-four patients at least 3 months post stroke were assigned to receive 6 weeks of training using the portable robot in a research laboratory (robot group) or an instructed exercise program at home (control group). All patients underwent clinical and biomechanical evaluations in the laboratory at pre-evaluation, post-evaluation, and 6-week follow-up. Subjects in the robot group improved significantly more than those in the control group in reduction of spasticity measured by the modified Ashworth scale, mobility by the Stroke Rehabilitation Assessment of Movement (STREAM), balance by the Berg balance score, dorsiflexion passive range of motion, dorsiflexion strength, and load bearing on the affected limb during gait after the 6-week training. Both groups improved in the STREAM, dorsiflexion active range of motion and dorsiflexor strength after the training, and these gains were retained at the follow-up evaluation. Robot-assisted passive stretching and active movement training is effective in improving motor function and mobility post stroke.
Carreño, Francisco; Post, Mark A
2018-01-01
Efforts in the research of tensegrity structures applied to mobile robots have recently been focused on a purely tensegrity solution to all design requirements. Locomotion systems based on tensegrity structures are currently slow and complex to control. Although wheeled locomotion provides better efficiency over distances there is no literature available on the value of wheeled methods with respect to tensegrity designs, nor on how to transition from a tensegrity structure to a fixed structure in mobile robotics. This paper is the first part of a larger study that aims to combine the flexibility, light weight, and strength of a tensegrity structure with the efficiency and simple control of a wheeled locomotion system. It focuses on comparing different types of tensegrity structure for applicability to a mobile robot, and experimentally finding an appropriate transitional region from a tensegrity structure to a conventional fixed structure on mobile robots. It applies this transitional structure to what is, to the authors' knowledge, the design of the world's first wheeled tensegrity robot that has been designed with the goal of traversing air ducts.
Ko, Nak Yong; Kuc, Tae-Yong
2015-01-01
This paper proposes a method for mobile robot localization in a partially unknown indoor environment. The method fuses two types of range measurements: the range from the robot to the beacons measured by ultrasonic sensors and the range from the robot to the walls surrounding the robot measured by a laser range finder (LRF). For the fusion, the unscented Kalman filter (UKF) is utilized. Because finding the Jacobian matrix is not feasible for range measurement using an LRF, UKF has an advantage in this situation over the extended KF. The locations of the beacons and range data from the beacons are available, whereas the correspondence of the range data to the beacon is not given. Therefore, the proposed method also deals with the problem of data association to determine which beacon corresponds to the given range data. The proposed approach is evaluated using different sets of design parameter values and is compared with the method that uses only an LRF or ultrasonic beacons. Comparative analysis shows that even though ultrasonic beacons are sparsely populated, have a large error and have a slow update rate, they improve the localization performance when fused with the LRF measurement. In addition, proper adjustment of the UKF design parameters is crucial for full utilization of the UKF approach for sensor fusion. This study contributes to the derivation of a UKF-based design methodology to fuse two exteroceptive measurements that are complementary to each other in localization. PMID:25970259
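A minimal sketch of the beacon-range part of this fusion, using filterpy's unscented Kalman filter with a constant-velocity state, is given below; the noise values, motion model and beacon table are placeholders, and the LRF wall-range update and the paper's data-association logic are omitted.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

BEACONS = {0: np.array([0.0, 0.0]), 1: np.array([8.0, 0.0])}   # known positions
dt = 0.1

def fx(x, dt):
    # Constant-velocity motion model: state = [px, py, vx, vy].
    px, py, vx, vy = x
    return np.array([px + vx * dt, py + vy * dt, vx, vy])

def make_hx(beacon_xy):
    # Range-only measurement to one ultrasonic beacon.
    def hx(x):
        return np.array([np.hypot(x[0] - beacon_xy[0], x[1] - beacon_xy[1])])
    return hx

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=1, dt=dt, fx=fx,
                            hx=make_hx(BEACONS[0]), points=points)
ukf.x = np.array([2.0, 3.0, 0.0, 0.0])
ukf.R = np.array([[0.05]])        # sparse, noisy beacon ranges
ukf.Q = np.eye(4) * 1e-3

def fuse_beacon_range(beacon_id, measured_range):
    """Associate the range with its beacon and run one UKF predict/update."""
    ukf.predict()
    ukf.update(np.array([measured_range]), hx=make_hx(BEACONS[beacon_id]))
    return ukf.x[:2]

print(fuse_beacon_range(1, measured_range=6.1))
```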
Adaptive Behavior for Mobile Robots
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2009-01-01
The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.
A low cost indoor localization system for mobile robot experimental setup
NASA Astrophysics Data System (ADS)
Adinandra, S.; Syarif, A.
2018-04-01
Indoor localization has become one of the most important parts of a mobile robot system. One fundamental requirement is to provide an easy-to-use and practical localization system for real-time experiments. In this paper we propose a combination of recent open source virtual reality (VR) tools, a simple MATLAB code and a low cost USB webcam as an indoor mobile robot localization system. Using the VR tools as a server and MATLAB as a client, the proposed solution can cover up to 1.6 [m] × 3.2 [m] with a position measurement accuracy of up to 1.2 [cm]. The system is insensitive to light, easy to move and can be quickly set up. A series of successful real-time experiments with three different mobile robot types has been conducted.
NASA Astrophysics Data System (ADS)
Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan
2010-02-01
The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.
Task-level control for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid
1994-01-01
Task-level control refers to the integration and coordination of planning, perception, and real-time control to achieve given high-level goals. Autonomous mobile robots need task-level control to effectively achieve complex tasks in uncertain, dynamic environments. This paper describes the Task Control Architecture (TCA), an implemented system that provides commonly needed constructs for task-level control. Facilities provided by TCA include distributed communication, task decomposition and sequencing, resource management, monitoring and exception handling. TCA supports a design methodology in which robot systems are developed incrementally, starting first with deliberative plans that work in nominal situations, and then layering them with reactive behaviors that monitor plan execution and handle exceptions. To further support this approach, design and analysis tools are under development to provide ways of graphically viewing the system and validating its behavior.
Autonomous intelligent assembly systems LDRD 105746 final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2013-04-01
This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires an integration of a large number of different software technologies including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although ultimately unsuccessful in achieving the target brick-stacking task autonomously, numerous important component technologies were nevertheless developed. Such technologies include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes this project and these developed technologies.
Method of mobile robot indoor navigation by artificial landmarks with use of computer vision
NASA Astrophysics Data System (ADS)
Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.
2018-05-01
The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying the errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of the experiment on mobile robot navigation with the use of this control system are presented.
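A minimal sketch of the correction idea, resetting the drifting visual-odometry pose from a recognized artificial landmark whose map pose is known, is shown below. The landmark table and the assumed observation (range, bearing and relative orientation of the landmark) are illustrative, not the article's actual format.

```python
import math

# Map of artificial landmarks: id -> (x, y, yaw) in the world frame (assumed).
LANDMARKS = {7: (2.0, 5.0, math.pi / 2)}

def correct_pose(landmark_id, obs_range, obs_bearing, obs_rel_yaw):
    """Replace the drifting visual-odometry estimate with the pose implied by
    a recognized landmark. The camera is assumed to report the landmark's
    range, its bearing in the robot frame, and its orientation relative to
    the robot; this observation model is an illustrative assumption."""
    lx, ly, lyaw = LANDMARKS[landmark_id]
    yaw = lyaw - obs_rel_yaw                      # recovered robot heading
    x = lx - obs_range * math.cos(yaw + obs_bearing)
    y = ly - obs_range * math.sin(yaw + obs_bearing)
    return (x, y, yaw)

# Odometry has drifted to roughly (1.7, 4.1, 1.49); landmark 7 is seen 1.0 m
# dead ahead, rotated 90 degrees relative to the robot, so the pose is reset.
print(correct_pose(7, obs_range=1.0, obs_bearing=0.0, obs_rel_yaw=math.pi / 2))
# -> approximately (1.0, 5.0, 0.0)
```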
2003-06-08
KENNEDY SPACE CENTER, FLA. - The Mobile Service Tower is rolled back at Launch Complex 17A to reveal a Delta II rocket ready to launch the Mars Exploration Rover-A mission. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans are not yet able to go. MER-A, with the rover Spirit aboard, is scheduled to launch on June 8 at 2:06 p.m. EDT, with two launch opportunities each day during a launch period that closes on June 24.
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
Robotic acquisition programs: technical and performance challenges
NASA Astrophysics Data System (ADS)
Thibadoux, Steven A.
2002-07-01
The Unmanned Ground Vehicles/ Systems Joint Project Office (UGV/S JPO) is developing and fielding a variety of tactical robotic systems for the Army and Marine Corps. The Standardized Robotic System (SRS) provides a family of common components that can be installed in existing military vehicles, to allow unmanned operation of the vehicle and its payloads. The Robotic Combat Support System (RCSS) will be a medium sized unmanned system with interchangeable attachments, allowing a remote operator to perform a variety of engineering tasks. The Gladiator Program is a USMC initiative for a small to medium sized, highly mobile UGV to conduct scout/ surveillance missions and to carry various lethal and non-lethal payloads. Acquisition plans for these programs require preplanned evolutionary block upgrades to add operational capability, as new technology becomes available. This paper discusses technical and performance issues that must be resolved and the enabling technologies needed for near term block upgrades of these first generation robotic systems. Additionally, two Joint Robotics Program (JRP) initiatives, Robotic Acquisition through Virtual Environments and Networked Simulations (RAVENS) and Joint Architecture for Unmanned Ground Systems (JAUGS), will be discussed. RAVENS and JAUGS will be used to efficiently evaluate and integrate new technologies to be incorporated in system upgrades.
2017-03-01
ARL-TN-0814, US Army Research Laboratory: Usability Study and Heuristic Evaluation of the Applied Robotics for Installations and Base Operations (ARIBO) Driverless Vehicle Reservation Application (ARIBO Mobile).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.; Weisbin, C.R.; Pin, F.G.
1989-01-01
This paper reviews ongoing and planned research with mobile autonomous robots at the Oak Ridge National Laboratory (ORNL), Center for Engineering Systems Advanced Research (CESAR). Specifically we report on results obtained with the robot HERMIES-IIB in navigation, intelligent sensing, learning, and on-board parallel computing in support of these functions. We briefly summarize an experiment with HERMIES-IIB that demonstrates the capability of smooth transitions between robot autonomy and tele-operation. This experiment results from collaboration among teams at the Universities of Florida, Michigan, Tennessee, and Texas; and ORNL in a program targeted at robotics for advanced nuclear power stations. We conclude by summarizing ongoing R&D with our new mobile robot HERMIES-III which is equipped with a seven degree-of-freedom research manipulator arm. 12 refs., 4 figs.