Sample records for sensor agent robot

  1. Multiagent robotic systems' ambient light sensor

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Maslennikov, Oleg S.; Komarov, Igor I.

    2017-05-01

Swarm robotics is one of the fastest growing areas of modern technology. As a subclass of multi-agent systems, it inherits most of the scientific and methodological apparatus for constructing and operating practically useful complexes composed of relatively autonomous, independent agents. Ambient light sensors (ALS) are widely used in robotics. In swarm robotics, a developing technology with many specific features, it is important to equip each robot with such sensors not only to help it orient itself directionally, but also to follow light emitted by a leader robot or to locate the goal more easily. Key words: ambient light sensor, swarm system, multiagent system, robotic system, robotic complexes, simulation modelling

  2. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and robots are equipped with different types of sensors, while each robot has only one type of landmine detection sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses sensory input of the robot and performs calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine and makes decisions about which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, using our prediction market-based information aggregation technique increases the accuracy of object classification compared with two other commonly used techniques.
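
    The aggregation-and-decision flow described above can be sketched in a few lines. This is a hedged illustration of the general idea, not the authors' prediction-market mechanism: the stake-weighted average and both function names are assumptions.

```python
def aggregate_beliefs(beliefs, weights=None):
    """Fuse per-robot landmine beliefs into a single aggregate 'price'.

    beliefs: floats in [0, 1], one per sensor agent.
    weights: optional per-agent stakes (e.g. past accuracy); uniform if omitted.
    """
    if weights is None:
        weights = [1.0] * len(beliefs)
    total = sum(weights)
    return sum(b * w for b, w in zip(beliefs, weights)) / total

def decide_deployment(aggregate, confirm_threshold=0.5):
    # Decision-maker agent: request more sensor types while unconfirmed.
    return "confirm_landmine" if aggregate >= confirm_threshold else "deploy_more_sensors"
```

    In a full market mechanism the weights would be updated from each agent's track record, so consistently accurate sensors move the aggregate more.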

  3. Hybrid Exploration Agent Platform and Sensor Web System

    NASA Technical Reports Server (NTRS)

    Stoffel, A. William; VanSteenberg, Michael E.

    2004-01-01

A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. Like the ANTS concept, the Hybrid Exploration Agent Platform and Sensor Web (HEAP-SW) is a Sensor Web concept, but conceptually and practically it is a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low-cost, high-return solution to a complex problem, and all of the technology for initial development and implementation is currently available. The HEAP-SW consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW), and the immobile Data collection and Uplink units (DUs). The HEAP-SW as a whole refers to any group of mobile agents or robots where each robot is a mobile data collection unit that spends most of its time acting in concert with all other robots, with the DUs in the web, and with the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; and 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.

  4. Robotics technology discipline

    NASA Technical Reports Server (NTRS)

    Montemerlo, Melvin D.

    1990-01-01

    Viewgraphs on robotics technology discipline for Space Station Freedom are presented. Topics covered include: mechanisms; sensors; systems engineering processes for integrated robotics; man/machine cooperative control; 3D-real-time machine perception; multiple arm redundancy control; manipulator control from a movable base; multi-agent reasoning; and surfacing evolution technologies.

  5. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    NASA Astrophysics Data System (ADS)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
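
    Minimizing the maximum distance traveled by any robot is the bottleneck assignment problem. A minimal sketch, assuming a square robot/slot cost matrix and Euclidean distances: binary search over candidate bottleneck values, with an augmenting-path matching feasibility test. The paper's own matching algorithm may differ.

```python
import math

def feasible(dist, limit):
    """Can every grid slot be covered by a distinct robot moving <= limit?"""
    n = len(dist)          # square cost matrix: dist[robot][slot]
    match = [-1] * n       # match[slot] = robot assigned to that slot

    def augment(r, seen):
        # Kuhn's augmenting path: try to place robot r, displacing others.
        for s in range(n):
            if dist[r][s] <= limit and not seen[s]:
                seen[s] = True
                if match[s] == -1 or augment(match[s], seen):
                    match[s] = r
                    return True
        return False

    return all(augment(r, [False] * n) for r in range(n))

def min_max_distance(robots, slots):
    """Assignment minimizing the maximum robot travel distance."""
    dist = [[math.dist(r, s) for s in slots] for r in robots]
    candidates = sorted({d for row in dist for d in row})
    lo, hi = 0, len(candidates) - 1
    while lo < hi:                      # binary search on the bottleneck value
        mid = (lo + hi) // 2
        if feasible(dist, candidates[mid]):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]
```

    The optimal bottleneck value is always one of the pairwise distances, which is why searching over the sorted distance set suffices.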

  6. Minimal Representation and Decision Making for Networked Autonomous Agents

    DTIC Science & Technology

    2015-08-27

…to a multi-vehicle version of the Travelling Salesman Problem (TSP). We further provided a direct formula for computing the number of robots… the sensor. As a first stab at this, the two-agent rendezvous problem is considered, where one agent (the target) is equipped with no sensors and is… by the total distance traveled by all agents. For agents with limited sensing and communication capabilities, we give a formula that computes the…

  7. Searching Dynamic Agents with a Team of Mobile Robots

    PubMed Central

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to contain target agents. This algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments showing the validity of the approach. PMID:23012519
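
    The grid-based filter described above alternates a motion update, which spreads probability to cells the targets could reach at their maximum speed, with a measurement update from robot sensor readings. A minimal sketch under an assumed 4-connected, one-cell-per-step motion model (not the paper's exact implementation):

```python
def predict(grid):
    """Motion update: diffuse target probability into the 4-connected
    cells a target moving at maximum speed could reach in one step."""
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            nbrs = [(i, j)] + [(i + di, j + dj)
                               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                               if 0 <= i + di < rows and 0 <= j + dj < cols]
            share = grid[i][j] / len(nbrs)  # uniform over reachable cells
            for a, b in nbrs:
                new[a][b] += share
    return new

def update(grid, observed_empty):
    """Measurement update: clear cells a robot's sensor just saw empty,
    then renormalize so the grid remains a probability distribution."""
    for i, j in observed_empty:
        grid[i][j] = 0.0
    total = sum(map(sum, grid))
    return [[p / total for p in row] for row in grid]
```

    Robots would then be dispatched toward the regions where the resulting grid concentrates the most probability mass.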

  8. Searching dynamic agents with a team of mobile robots.

    PubMed

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to contain target agents. This algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments showing the validity of the approach.

  9. Evaluating the Dynamics of Agent-Environment Interaction

    DTIC Science & Technology

    2001-05-01

…a color sensor in the gripper, a radio transmitter/receiver for communication and data gathering, and an ultrasound/radio triangulation system for… Cooperative Mobile Robot Control', Autonomous Robots 4(4), 387-403. Vaughan, R. T., Støy, K., Sukhatme, G. S. & Mataric, M. J. (2000), Whistling in the Dark…

  10. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2012-01-01

The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and additionally to examine the concept of directed Emergence among a group of limited artificially intelligent agents. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound each individual termite does not carry a blueprint of how to make their home in a global sense, but their interactions, based strictly on local desires, create a complex superstructure. Leveraging this Emergence concept in a simulation of cooperative agents (robots) will allow an examination of how well a non-directed group strategy achieves specific results. Specifically, the simulation will be a testbed to evaluate population-based robotic exploration and cooperative strategies while leveraging the evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents will act locally with non-local results. The rules by which the simulated robots interact will be optimized to the simplest possible for the desired result, leveraging Emergence. Sensor malfunction and line-of-sight issues will be incorporated into the simulation. This approach falls under Swarm Robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm Robotics often takes biologically inspired approaches; much of the research comes from observation of social insects, but also from data on herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration are the method under consideration for further study.

  11. Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators

    DTIC Science & Technology

    1993-05-01

…Issues of Controller Design… Robot Behavior Control Philosophy… Hannibal (Figure 1.1) was designed and built by our lab as an experimental platform to explore planetary micro-rover control issues (Angle 1991). When designing the robot, careful consideration was given to mobility, sensing, and robustness issues. Much has been said concerning the advantages of…

  12. Physical Scaffolding Accelerates the Evolution of Robot Behavior.

    PubMed

    Buckingham, David; Bongard, Josh

    2017-01-01

In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: a fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation and anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.

  13. Babybot: a biologically inspired developing robotic agent

    NASA Astrophysics Data System (ADS)

    Metta, Giorgio; Panerai, Francesco M.; Sandini, Giulio

    2000-10-01

The study of development, either artificial or biological, can highlight the mechanisms underlying learning and adaptive behavior. We shall consider whether developmental studies might provide a different and potentially interesting perspective both on how to build an artificial adaptive agent and on understanding how the brain solves sensory, motor, and cognitive tasks. It is our opinion that the acquisition of the proper behavior might indeed be facilitated because, within an ecological context, the agent, its adaptive structure, and the environment dynamically interact, thus constraining the otherwise difficult learning problem. In very general terms we describe the proposed approach and the biological facts supporting it. To further analyze these aspects from the modeling point of view, we demonstrate how a twelve-degrees-of-freedom baby humanoid robot acquires orienting and reaching behaviors, and what advantages the proposed framework might offer. In particular, the experimental setup consists of a five-degrees-of-freedom (dof) robot head and an off-the-shelf six-dof robot manipulator, both mounted on a rotating base, i.e. the torso. From the sensory point of view, the robot is equipped with two space-variant cameras, an inertial sensor simulating the vestibular system, and proprioceptive information through motor encoders. The biological parallel is exploited at many implementation levels. It is worth mentioning, for example, the space-variant eyes, exploiting foveal and peripheral vision in a single arrangement, and the inertial sensor providing efficient image stabilization (the vestibulo-ocular reflex).

  14. Recognition of flow in everyday life using sensor agent robot with laser range finder

    NASA Astrophysics Data System (ADS)

    Goshima, Misa; Mita, Akira

    2011-04-01

In the present paper, we suggest an algorithm for a sensor agent robot with a laser range finder to recognize the flows of residents in living spaces: detecting flows, counting the number of people in a space, and classifying the flows. House renovation is, or will be, demanded to prolong the lifetime of homes, and adaptation to individuals is needed for our rapidly aging society. Home autonomous mobile robots will become popular in the future to assist aged people in various situations. We therefore have to collect various types of information about people and living spaces, while intrusions into personal privacy must be avoided. Recognizing flows in everyday life is essential for supporting house renovation and an aging society in terms of adaptation to individuals. With background subtraction, extra noise removal, and k-means-based clustering, we obtained an average accuracy of more than 90% for the behavior of one to three persons, and also confirmed the reliability of our system regardless of the position of the sensor. Our system can take advantage of autonomous mobile robots while protecting personal privacy. It hints at a generalization of flow recognition methods in living spaces.
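
    The processing chain described above, background subtraction followed by k-means clustering of the remaining laser points, can be sketched as follows. The 2-D point representation, the distance threshold, and the function names are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def foreground(scan, background, threshold=0.3):
    """Background subtraction: keep scan points that lie farther than
    `threshold` (meters, assumed) from every static background point."""
    return [p for p in scan if all(math.dist(p, b) > threshold for b in background)]

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: group foreground laser points into k person blobs."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            c = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[c].append(p)
        # Recompute each center as its cluster mean; keep old center if empty.
        centers = [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```

    Tracking the cluster centers over successive scans would then yield the per-person trajectories ("flows") without capturing any identifying imagery.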

  15. Cooperative Robot Localization Using Event-Triggered Estimation

    NASA Astrophysics Data System (ADS)

    Iglesias Echevarria, David I.

Multiple-robot systems that must cooperate to perform certain activities or tasks are known to incur high energy costs that hinder their autonomous functioning and limit the benefits these platforms provide to humans. This work presents a communications-based method for cooperative robot localization. Implementing concepts from event-triggered estimation, used with success in the field of wireless sensor networks but rarely for robot localization, agents send measurements to their neighbors only when the expected novelty of this information is high. Since all agents know the condition that triggers whether a measurement is sent, the lack of a measurement is itself informative and is fused into state estimates. When agents receive neither direct nor indirect measurements of all others, they employ a covariance intersection fusion rule in order to keep the local covariance error metric bounded. A comprehensive analysis of the proposed algorithm and its estimation performance in a variety of scenarios is performed, and the algorithm is compared to similar cooperative localization approaches. Extensive simulations illustrate the effectiveness of this method.
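
    The covariance intersection rule mentioned above is a standard fusion rule: the fused inverse covariance is a convex combination of the two local inverse covariances, which remains consistent even when the cross-correlation between the estimates is unknown. A sketch, with the weight chosen by a simple trace-minimizing grid search (an assumed heuristic; implementations vary):

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega=None):
    """Fuse estimates (xa, Pa) and (xb, Pb) with unknown cross-correlation.

    Classic CI rule: P^-1 = w*Pa^-1 + (1-w)*Pb^-1, consistent for any
    w in [0, 1]. Here w is picked by minimizing trace(P) on a coarse grid.
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    if omega is None:
        omega = min(np.linspace(0.0, 1.0, 101),
                    key=lambda w: np.trace(np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)))
    P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
    return x, P
```

    Unlike a naive Kalman-style fusion, CI never claims more certainty than either input justifies, which is what keeps the local error covariance bounded when measurements arrive irregularly.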

  16. Data management for biofied building

    NASA Astrophysics Data System (ADS)

    Matsuura, Kohta; Mita, Akira

    2015-03-01

Recently, smart houses have been studied by many researchers to satisfy the individual demands of residents. However, they are not yet feasible, as they are very costly and require many sensors to be embedded into houses. Therefore, we suggest the "Biofied Building". In a Biofied Building, sensor agent robots conduct sensing, actuation, and control in the house. The robots continuously monitor many parameters of residents' lives, such as walking postures and emotion. In this paper, a prototype network system and a data model for practical application of the Biofied Building are proposed. In the system, the functions of robots and servers are divided according to service flows in Biofied Buildings. The data model is designed to accumulate both building data and residents' data. Data sent from the robots and data analyzed in the servers are automatically registered into the database. Lastly, the feasibility of this system is verified through a lighting control simulation performed in an office space.

  17. Biobotic insect swarm based sensor networks for search and rescue

    NASA Astrophysics Data System (ADS)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach surviving victims, and robotic swarms or other distributed robotic systems might play a great role in achieving this faster. However, current technology falls short of offering centimeter-scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies, enabling biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  18. On the Evolution of Behaviors through Embodied Imitation.

    PubMed

    Erbas, Mehmet D; Bull, Larry; Winfield, Alan F T

    2015-01-01

This article describes research in which embodied imitation and behavioral adaptation are investigated in collective robotics. We model social learning in artificial agents with real robots. The robots are able to observe and learn each other's movement patterns using their on-board sensors only, so that imitation is embodied. We show that the variations that arise from embodiment allow certain behaviors that are better adapted to the process of imitation to emerge and evolve during multiple cycles of imitation. As these behaviors are more robust to uncertainties in the real robots' sensors and actuators, they can be learned by other members of the collective with higher fidelity. Three different types of learned-behavior memory have been experimentally tested to investigate the effect of memory capacity on the evolution of movement patterns, and results show that as the movement patterns evolve through multiple cycles of imitation, selection, and variation, the robots are able to, in a sense, agree on the structure of the behaviors that are imitated.

  19. Coordinating teams of autonomous vehicles: an architectural perspective

    NASA Astrophysics Data System (ADS)

    Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo

    2005-05-01

    In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).

  20. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    NASA Astrophysics Data System (ADS)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. 
Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.

  1. Development and Evaluation of Sensor Concepts for Ageless Aerospace Vehicles: Report 6 - Development and Demonstration of a Self-Organizing Diagnostic System for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail

    2010-01-01

    This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.

  2. An Architecture for Controlling Multiple Robots

    NASA Technical Reports Server (NTRS)

    Aghazarian, Hrand; Pirjanian, Paolo; Schenker, Paul; Huntsberger, Terrance

    2004-01-01

The Control Architecture for Multirobot Outpost (CAMPOUT) is a distributed-control architecture for coordinating the activities of multiple robots. In the CAMPOUT, multiple-agent activities and sensor-based controls are derived as group compositions and involve coordination of more basic controllers denoted, for present purposes, as behaviors. The CAMPOUT provides basic mechanistic concepts for representation and execution of distributed group activities. One considers a network of nodes that comprise behaviors (self-contained controllers) augmented with hyper-links, which are used to exchange information between the nodes to achieve coordinated activities. Group behavior is guided by a scripted plan, which encodes a conditional sequence of single-agent activities. Thus, higher-level functionality is composed by coordination of more basic behaviors under the downward task decomposition of a multi-agent planner.

  3. Learning classifier systems for single and multiple mobile robots in unstructured environments

    NASA Astrophysics Data System (ADS)

    Bay, John S.

    1995-12-01

The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture; i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world-model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.
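
    A single LCS match-and-act cycle, with the message board wiped after each step as described, might look like the following. The '#' wildcard rule encoding is classic Holland-style LCS; the specific data layout and function names are assumptions for illustration:

```python
import random

def matches(condition, message):
    """A classifier condition matches a message; '#' is a wildcard bit."""
    return all(c in ('#', m) for c, m in zip(condition, message))

def lcs_step(classifiers, message, rng=None):
    """One LCS cycle: post the sensor message, fire one matching rule
    (fitness-proportionate selection), then the board is wiped."""
    rng = rng or random.Random(0)
    matched = [c for c in classifiers if matches(c['cond'], message)]
    if not matched:
        return None
    total = sum(c['fitness'] for c in matched)
    pick = rng.uniform(0, total)
    for c in matched:
        pick -= c['fitness']
        if pick <= 0:
            return c['action']
    return matched[-1]['action']
```

    The discovery mechanism (a genetic algorithm over the rule population) and credit assignment are omitted; in the DLCS the posted message would be broadcast to other agents instead of staying internal.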

  4. Architectural design and support for knowledge sharing across heterogeneous MAST systems

    NASA Astrophysics Data System (ADS)

    Arkin, Ronald C.; Garcia-Vergara, Sergio; Lee, Sung G.

    2012-06-01

    A novel approach for the sharing of knowledge between widely heterogeneous robotic agents is presented, drawing upon Gardenfors' Conceptual Spaces approach [4]. The target microrobotic platforms considered are impoverished in computation, power, sensing, and communications compared to more traditional robotic platforms due to their small size. This produces novel challenges for the system to converge on an interpretation of events within the world, in this case focusing specifically on the task of recognizing the concept of a biohazard in an indoor setting.

  5. Homeostasis control of building environment using sensor agent robot

    NASA Astrophysics Data System (ADS)

    Nagahama, Eri; Mita, Akira

    2012-04-01

    A human-centered building system is in demand to meet the variety of needs arising from the diversification and maturation of society. Smart buildings and smart houses have been studied to satisfy this demand. However, it is difficult for such systems to respond flexibly to unexpected events and needs, such as those caused by aging and complex emotional changes. In this regard, we suggest "Biofied Buildings". The goal of this research is to realize buildings that are safer, more comfortable, and more energy-efficient by embedding adaptive functions of life into buildings. In this paper, we propose a new control system for building environments focused on physiological adaptation, particularly homeostasis, the endocrine system, and the immune system. Residents are used as living sensors and controllers in the control loop. A sensor agent robot is used to acquire residents' feelings of discomfort and to output hormone-like signals that activate devices to control the environment. The proposed system can control many devices without establishing complicated scenarios. Results obtained from simulations and demonstration experiments using an LED lighting system showed that the proposed system was able to achieve robust and stable control of environments without complicated scenarios.
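
The hormone-like signalling idea can be illustrated with a minimal control loop. This is a sketch under assumptions: the signal dynamics, decay constant, and device names below are invented for illustration and are not the authors' model.

```python
# Hormone-inspired device activation: a discomfort reading raises a shared
# "hormone" level that decays over time; each device reacts in proportion to
# the hormone it senses, so no per-device control scenario is scripted.

class HormoneField:
    def __init__(self, decay=0.5):
        self.level = 0.0
        self.decay = decay

    def secrete(self, discomfort):
        # The sensor agent robot converts observed discomfort into hormone.
        self.level += discomfort

    def step(self):
        # Hormone decays, so stale signals fade without explicit scheduling.
        self.level *= self.decay

class Device:
    def __init__(self, sensitivity):
        self.sensitivity = sensitivity
        self.output = 0.0

    def react(self, hormone_level):
        self.output = self.sensitivity * hormone_level

field = HormoneField()
lamp, vent = Device(0.8), Device(0.3)
field.secrete(1.0)            # resident reports discomfort
for dev in (lamp, vent):
    dev.react(field.level)    # devices respond by sensitivity, not by script
field.step()                  # signal fades back toward homeostasis
```

New devices join the loop simply by sensing the same hormone level, which mirrors the paper's claim that many devices can be controlled without complicated scenarios.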

  6. Investigations Into Internal and External Aspects of Dynamic Agent-Environment Couplings

    NASA Astrophysics Data System (ADS)

    Dautenhahn, Kerstin

    This paper originates from my work on `social agents'. An issue which I consider important to this kind of research is the dynamic coupling of an agent with its social and non-social environment. I hypothesize `internal dynamics' inside an agent as a basic step towards understanding. The paper therefore focuses on the internal and external dynamics which couple an agent to its environment. The issue of embodiment in animals and artifacts and its relation to `social dynamics' is discussed first. I argue that embodiment is linked to a concept of a body and is not necessarily given when running a control program on robot hardware. I stress the individual characteristics of an embodied cognitive system, as well as its social embeddedness. I outline the framework of a physical-psychological state space which changes dynamically in a self-modifying way as a holistic approach towards embodied human and artificial cognition. This framework is meant to discuss internal and external dynamics of an embodied, natural or artificial agent. In order to stress the importance of a dynamic memory I introduce the concept of an `autobiographical agent'. The second part of the paper gives an example of the implementation of a physical agent, a robot, which is dynamically coupled to its environment by balancing on a seesaw. For the control of the robot a behavior-oriented approach using the dynamical systems metaphor is used. The problem is studied through building a complete and co-adapted robot-environment system. A seesaw which varies its orientation with one or two degrees of freedom is used as the artificial `habitat'. The problem of stabilizing the body axis by active motion on a seesaw is solved by using two inclination sensors and a parallel, behavior-oriented control architecture. Some experiments are described which demonstrate the exploitation of the dynamics of the robot-environment system.

  7. Enabling private and public sector organizations as agents of homeland security

    NASA Astrophysics Data System (ADS)

    Glassco, David H. J.; Glassco, Jordan C.

    2006-05-01

    Homeland security and defense applications seek to reduce the risk of undesirable eventualities across physical space in real time. With that functional requirement in mind, our work focused on the development of IP-based agent telecommunication solutions for heterogeneous sensor / robotic intelligent "Things" that could be deployed across the internet. This paper explains how multi-organization information and device sharing alliances may be formed to enable organizations to act as agents of homeland security (in addition to other uses). Topics include: (i) using location-aware, agent-based, real-time information sharing systems to integrate business systems, mobile devices, sensor and actuator based devices and embedded devices used in physical infrastructure assets, equipment and other man-made "Things"; (ii) organization-centric real-time information sharing spaces using on-demand XML schema formatted networks; (iii) object-oriented XML serialization as a methodology for heterogeneous device glue code; (iv) how complex requirements for inter / intra organization information and device ownership and sharing, security and access control, mobility and remote communication service, tailored solution life cycle management, service QoS, service and geographic scalability and the projection of remote physical presence (through sensing and robotics) and remote informational presence (knowledge of what is going on elsewhere) can be more easily supported through feature inheritance with a rapid agent system development methodology; (v) how remote object identification and tracking can be supported across large areas; (vi) how agent synergy may be leveraged with analytics to complement heterogeneous device networks.

  8. The Multi-Agent Tactical Sentry: Designing and Delivering Robots to the CF

    DTIC Science & Technology

    2008-08-01

    believed that there is no substitute for having the man in the loop during military operations. After the World Trade Center and Tokyo subway attacks...and vibration isolation for the more sensitive components. Additional space was left to account for possible future changes to the primary sensors

  9. Decentralized sensor fusion for Ubiquitous Networking Robotics in Urban Areas.

    PubMed

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T J

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted.

  10. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  11. Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    PubMed Central

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M.; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T.J.

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted. PMID:22294927

  12. Intelligent Agent Architectures: Reactive Planning Testbed

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stanley J.; Kahn, Philip

    1993-01-01

    An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.

  13. Simulation and animation of sensor-driven robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.; Trivedi, M.M.; Bidlack, C.R.

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  14. Integration of a sensor based multiple robot environment for space applications: The Johnson Space Center Teleoperator Branch Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Hwang, James; Campbell, Perry; Ross, Mike; Price, Charles R.; Barron, Don

    1989-01-01

    An integrated operating environment was designed to incorporate three general purpose robots, sensors, and end effectors, including Force/Torque Sensors, Tactile Array sensors, Tactile force sensors, and Force-sensing grippers. The design and implementation of: (1) the teleoperation of a general purpose PUMA robot; (2) an integrated sensor hardware/software system; (3) the force-sensing gripper control; (4) the host computer system for dual Robotic Research arms; and (5) the Ethernet integration are described.

  15. Adaptive Remote-Sensing Techniques Implementing Swarms of Mobile Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asher, R.B.; Cameron, S.M.; Loubriel, G.M.

    1998-11-25

    In many situations, stand-off remote-sensing and hazard-interdiction techniques over realistic operational areas are often impractical and difficult to characterize. An alternative approach is to implement an adaptively deployable array of sensitive agent-specific devices. Our group has been studying the collective behavior of an autonomous, multi-agent system applied to chem/bio detection and related emerging threat applications. The current physics-based models we are using coordinate a sensor array for multivariate signal optimization and coverage as realized by a swarm of robots or mobile vehicles. These intelligent control systems integrate globally operating decision-making systems and locally cooperative learning neural networks to enhance real-time operational responses to dynamical environments, examples of which include obstacle avoidance, responding to prevailing wind patterns, and overcoming other natural obscurants or interferences. Collectively, sensor neurons with simple properties, interacting according to basic community rules, can accomplish complex interconnecting functions such as generalization, error correction, pattern recognition, sensor fusion, and localization. Neural nets provide a greater degree of robustness and fault tolerance than conventional systems in that minor variations or imperfections do not impair performance. The robotic platforms would be equipped with sensor devices that perform optical detection of biologicals in combination with multivariate chemical analysis tools based on genetic and neural network algorithms, laser-diode LIDAR analysis, ultra-wideband short-pulsed transmitting and receiving antennas, thermal imaging sensors, and optical communication technology providing robust data throughput pathways. Mission scenarios under consideration include ground penetrating radar (GPR) for detection of underground structures, airborne systems, and plume migration and mitigation. We will describe our research in these areas and give a status report on our progress.

  16. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is to model attention while using it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations inherent in a typical multi-sensor real-world robotic application. To recognise the environment efficiently under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method that combines mixture-of-experts task learning with attention learning in the perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space in a learning task problem, which is examined in the Webots™ simulator, is performed. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.

  17. Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung

    2018-03-05

    In order to properly control rehabilitation robotic devices, the measurement of the interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires accurate sensors which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has performance similar to one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can replace accurate but normally high-priced sensors, which are fundamental components for advanced control of rehabilitation robotic devices.
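
The virtual-sensor idea, estimating force from a device model plus cheap position measurements, can be sketched with a trivial one-dimensional model. The spring-damper model and all constants below are assumptions for illustration, not the UHP model from the paper.

```python
# "Virtual" force sensor: instead of a load cell, force is reconstructed
# from two consecutive low-cost position samples and a device model,
# here an assumed linear spring-damper F = k*(x - x_rest) + b*v.

def virtual_force(x, x_prev, dt, k=200.0, b=5.0, x_rest=0.0):
    """Estimate contact force (N) from two position samples (m), dt apart."""
    velocity = (x - x_prev) / dt          # finite-difference velocity
    return k * (x - x_rest) + b * velocity

# Two encoder samples 10 ms apart:
f = virtual_force(x=0.02, x_prev=0.019, dt=0.01)   # 200*0.02 + 5*0.1 = 4.5 N
```

The quality of such an estimate depends entirely on how well the model parameters are identified, which is why the paper validates the virtual sensors against direct measurement.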

  18. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  19. Building entity models through observation and learning

    NASA Astrophysics Data System (ADS)

    Garcia, Richard; Kania, Robert; Fields, MaryAnne; Barnes, Laura

    2011-05-01

    To support the missions and tasks of mixed robotic/human teams, future robotic systems will need to adapt to the dynamic behavior of both teammates and opponents. One of the basic elements of this adaptation is the ability to exploit both long and short-term temporal data. This adaptation allows robotic systems to predict/anticipate, as well as influence, future behavior for both opponents and teammates and will afford the system the ability to adjust its own behavior in order to optimize its ability to achieve the mission goals. This work is a preliminary step in the effort to develop online entity behavior models through a combination of learning techniques and observations. As knowledge is extracted from the system through sensor and temporal feedback, agents within the multi-agent system attempt to develop and exploit a basic movement model of an opponent. For the purpose of this work, extraction and exploitation is performed through the use of a discretized two-dimensional game. The game consists of a predetermined number of sentries attempting to keep an unknown intruder agent from penetrating their territory. The sentries utilize temporal data coupled with past opponent observations to hypothesize the probable locations of the opponent and thus optimize their guarding locations.
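
The sentry-versus-intruder idea of accumulating past observations into a movement model can be sketched with a simple frequency model. The grid cells and observation sequence below are invented; the paper's actual model and game parameters are not specified here.

```python
# Toy opponent-movement model: sentries log each sighting of the intruder
# in a grid-cell counter and guard the historically most-visited cells.

from collections import Counter

class OpponentModel:
    def __init__(self):
        self.counts = Counter()        # (row, col) -> number of sightings

    def observe(self, cell):
        self.counts[cell] += 1

    def likely_cells(self, k=2):
        """The k most probable locations under the frequency model."""
        return [cell for cell, _ in self.counts.most_common(k)]

model = OpponentModel()
for cell in [(0, 1), (0, 2), (0, 1), (1, 1), (0, 1)]:   # past sightings
    model.observe(cell)
guard_at = model.likely_cells(1)[0]   # -> (0, 1), seen 3 times
```

A real system would add the temporal component the abstract mentions, e.g. conditioning counts on time of day or on the intruder's last known position, but the exploit step stays the same: place sentries where the model puts the most probability mass.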

  20. Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments

    PubMed Central

    Rhodes, Paul A.; Anderson, Todd O.

    2012-01-01

    To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772

  1. Supervisory control of mobile sensor networks: math formulation, simulation, and implementation.

    PubMed

    Giordano, Vincenzo; Ballal, Prasanna; Lewis, Frank; Turchiano, Biagio; Zhang, Jing Bing

    2006-08-01

    This paper uses a novel discrete-event controller (DEC) for the coordination of cooperating heterogeneous wireless sensor networks (WSNs) containing both unattended ground sensors (UGSs) and mobile sensor robots. The DEC sequences the most suitable tasks for each agent and assigns sensor resources according to the current perception of the environment. A matrix formulation makes this DEC particularly useful for WSNs, where missions change and sensor agents may be added or may fail. WSNs have peculiarities that complicate their supervisory control. Therefore, this paper introduces several new tools for DEC design and operation, including methods for generating the required supervisory matrices based on mission planning, methods for modifying the matrices in the event of failed nodes or nodes entering the network, and a novel dynamic priority-assignment weighting approach for selecting the most appropriate and useful sensors for a given mission task. The resulting DEC represents a complete dynamical description of the WSN system, which allows fast programming of deployable WSNs, computer simulation analysis, and efficient implementation. The DEC is implemented on an experimental wireless-sensor-network prototyping system. Both simulation and experimental results are presented to show the effectiveness and versatility of the developed control architecture.
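
The appeal of a matrix formulation is that mission changes and node failures become matrix edits rather than code changes. The sketch below is a heavily simplified illustration of that idea; the task names, matrix entries, and availability flags are invented, not the paper's supervisory matrices.

```python
# Boolean task-start matrix: row i lists what task i requires (columns are
# conditions/resources). A task can fire when everything it needs is
# currently available; a failed node is handled by editing its column
# entries, leaving the controller logic untouched.

def fireable(start_matrix, available):
    """Indices of tasks whose every required input is currently available."""
    return [i for i, row in enumerate(start_matrix)
            if all(avail for need, avail in zip(row, available) if need)]

# Columns: [UGS reading ready, mobile robot free, radio link up]
start_matrix = [
    [1, 0, 1],    # task 0: report UGS event (needs reading + link)
    [1, 1, 0],    # task 1: dispatch mobile robot (needs reading + free robot)
]
available = [True, False, True]            # robot currently busy
ready = fireable(start_matrix, available)  # -> [0]

start_matrix[1][1] = 0   # robot requirement dropped (e.g. node reassigned)
```

After the edit, task 1 no longer waits on the robot column, so the same `fireable` call now admits both tasks; this is the "modify the matrices, not the controller" property the abstract emphasizes.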

  2. Collaborative autonomous sensing with Bayesians in the loop

    NASA Astrophysics Data System (ADS)

    Ahmed, Nisar

    2016-10-01

    There is a strong push to develop intelligent unmanned autonomy that complements human reasoning for applications as diverse as wilderness search and rescue, military surveillance, and robotic space exploration. More than just replacing humans for `dull, dirty and dangerous' work, autonomous agents are expected to cope with a whole host of uncertainties while working closely together with humans in new situations. The robotics revolution firmly established the primacy of Bayesian algorithms for tackling challenging perception, learning and decision-making problems. Since the next frontier of autonomy demands the ability to gather information across stretches of time and space that are beyond the reach of a single autonomous agent, the next generation of Bayesian algorithms must capitalize on opportunities to draw upon the sensing and perception abilities of humans-in/on-the-loop. This work summarizes our recent research toward harnessing `human sensors' for information gathering tasks. The basic idea is to allow human end users (i.e. non-experts in robotics, statistics, machine learning, etc.) to directly `talk to' the information fusion engine and perceptual processes aboard any autonomous agent. Our approach is grounded in rigorous Bayesian modeling and fusion of flexible semantic information derived from user-friendly interfaces, such as natural language chat and locative hand-drawn sketches. This naturally enables `plug and play' human sensing with existing probabilistic algorithms for planning and perception, and has been successfully demonstrated with human-robot teams in target localization applications.
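
At its core, treating a human report as a sensor means giving it a likelihood and folding it into a Bayes update. The three-cell world, the likelihood numbers, and the chat-message semantics below are invented for illustration; they are not the models from this work.

```python
# Discrete Bayes fusion of a 'human sensor' report with a robot's belief:
# posterior(x) is proportional to prior(x) * P(report | x).

def bayes_update(prior, likelihood):
    """Normalize the elementwise product of prior and likelihood."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Robot's current belief over target location in cells [road, tree, shed]:
belief = [0.5, 0.3, 0.2]
# Human chat message "target is near the tree", modelled as a soft likelihood
# rather than a hard observation (humans are imperfect sensors too):
human_likelihood = [0.1, 0.8, 0.1]
belief = bayes_update(belief, human_likelihood)   # mass shifts to 'tree'
```

Because the human report enters through an ordinary likelihood, it is `plug and play' with whatever probabilistic planner already consumes the belief, which is the point the abstract makes.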

  3. A remote assessment system with a vision robot and wearable sensors.

    PubMed

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes a remote rehabilitation assessment system under ongoing research that has a six-degree-of-freedom binocular vision robot to capture visual information, and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart device, comprising the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  4. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors such as GPS, cameras, and infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or produce inaccurate readings. Therefore, integrating sensor fusion helps solve this dilemma and enhances the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot's wheels; and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and a line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
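
The shape of such a fuzzy controller can be shown with a drastically reduced version: two distance inputs and three rules instead of the paper's nine inputs and 24 rules. The membership function and gains are invented for illustration.

```python
# Minimal fuzzy collision avoidance: fuzzify distances into an "obstacle
# near" degree, fire three rules, and blend them into wheel velocities.

def near(distance):
    """Triangular membership of "obstacle near" for a distance in metres."""
    return max(0.0, min(1.0, 1.0 - distance))

def fuzzy_velocities(d_left, d_right, base=1.0):
    """Blend rule strengths into (left wheel, right wheel) velocities."""
    turn_right = near(d_left)        # rule 1: obstacle on the left
    turn_left = near(d_right)        # rule 2: obstacle on the right
    go_straight = 1.0 - max(turn_left, turn_right)   # rule 3: path clear
    v_left = base * (go_straight + turn_right)   # faster left wheel -> turn right
    v_right = base * (go_straight + turn_left)
    return v_left, v_right

v_left, v_right = fuzzy_velocities(0.2, 1.0)   # obstacle close on the left
```

With the obstacle 0.2 m away on the left, rule 1 dominates and the left wheel spins faster than the right, steering the robot away; with both sides clear, only rule 3 fires and the robot drives straight.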

  5. Learning to Predict Consequences as a Method of Knowledge Transfer in Reinforcement Learning.

    PubMed

    Chalmers, Eric; Contreras, Edgar Bermudez; Robertson, Brandon; Luczak, Artur; Gruber, Aaron

    2017-04-17

    The reinforcement learning (RL) paradigm allows agents to solve tasks through trial-and-error learning. To be capable of efficient, long-term learning, RL agents should be able to apply knowledge gained in the past to new tasks they may encounter in the future. The ability to predict actions' consequences may facilitate such knowledge transfer. We consider here domains where an RL agent has access to two kinds of information: agent-centric information with constant semantics across tasks, and environment-centric information, which is necessary to solve the task, but with semantics that differ between tasks. For example, in robot navigation, environment-centric information may include the robot's geographic location, while agent-centric information may include sensor readings of various nearby obstacles. We propose that these situations provide an opportunity for a very natural style of knowledge transfer, in which the agent learns to predict actions' environmental consequences using agent-centric information. These predictions contain important information about the affordances and dangers present in a novel environment, and can effectively transfer knowledge from agent-centric to environment-centric learning systems. Using several example problems including spatial navigation and network routing, we show that our knowledge transfer approach can allow faster and lower cost learning than existing alternatives.

  6. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
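
The velocity-alignment ("viscous friction-like") term discussed above can be written down in a few lines. The sketch below is a 1-D, two-agent illustration with invented gain, time step, and neighbour graph, not the authors' flight-control algorithm.

```python
# Velocity alignment: each agent relaxes its velocity toward the mean
# velocity of its neighbours, which damps the oscillations that noise and
# delay would otherwise amplify in a real flying swarm.

def align_step(velocities, neighbours, gain=0.5, dt=0.1):
    """One Euler step of dv_i/dt = gain * (mean neighbour velocity - v_i)."""
    new_v = []
    for i, v in enumerate(velocities):
        nbrs = neighbours[i]
        v_avg = sum(velocities[j] for j in nbrs) / len(nbrs)
        new_v.append(v + gain * (v_avg - v) * dt)
    return new_v

v = [1.0, -1.0]            # two agents heading in opposite directions
nbrs = {0: [1], 1: [0]}
for _ in range(100):
    v = align_step(v, nbrs)
# velocities have relaxed toward a common value
```

In the paper this friction-like term acts alongside repulsion and target tracking; here it is isolated to show the damping effect on its own.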

  7. Laser speckle velocimetry for robot manufacturing

    NASA Astrophysics Data System (ADS)

    Charrett, Thomas O. H.; Bandari, Yashwanth K.; Michel, Florent; Ding, Jialuo; Williams, Stewart W.; Tatam, Ralph P.

    2017-06-01

    A non-contact speckle correlation sensor for the measurement of robotic tool speed is presented for use in robotic manufacturing. It is capable of measuring the in-plane relative velocity between a robot end-effector and the workpiece or other surface. The sensor performance was assessed in the laboratory, with accuracies found to be better than 0.01 mm/s over a 70 mm/s velocity range. Finally, an example of the sensor's application to robotic manufacturing is presented, where the sensor was applied to tool speed measurement for path planning in the wire and arc additive manufacturing process using a KUKA KR150 L110/2 industrial robot.
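At the heart of a speckle-correlation sensor is a shift estimate between successive speckle images; velocity is that shift times the pixel pitch divided by the frame interval. A minimal 1-D sketch follows (a real sensor would use 2-D, sub-pixel correlation, and the pitch and frame-interval numbers here are made up):

```python
def shift_via_correlation(ref, cur):
    # integer-pixel lag that maximises the cross-correlation of two frames
    n = len(ref)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n + 1, n):
        score = sum(ref[k] * cur[k + lag]
                    for k in range(max(0, -lag), min(n, n - lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# the second frame is the first shifted right by 3 pixels
frame_a = [0, 1, 4, 2, 1, 0, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 1, 4, 2, 1, 0, 0]
lag = shift_via_correlation(frame_a, frame_b)
pixel_pitch_mm, frame_dt_s = 0.02, 0.01      # hypothetical calibration
velocity_mm_s = lag * pixel_pitch_mm / frame_dt_s
```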

  8. Control of Synchronization Regimes in Networks of Mobile Interacting Agents

    NASA Astrophysics Data System (ADS)

    Perez-Diaz, Fernando; Zillmer, Ruediger; Groß, Roderich

    2017-05-01

    We investigate synchronization in a population of mobile pulse-coupled agents with a view towards implementations in swarm-robotics systems and mobile sensor networks. Previous theoretical approaches dealt with range and nearest-neighbor interactions. In the latter case, a synchronization-hindering regime for intermediate agent mobility is found. We investigate the robustness of this intermediate regime under practical scenarios. We show that synchronization in the intermediate regime can be predicted by means of a suitable metric of the phase response curve. Furthermore, we study more realistic K-nearest-neighbor and cone-of-vision interactions, showing that it is possible to control the extent of the synchronization-hindering region by appropriately tuning the size of the neighborhood. To assess the effect of noise, we analyze the propagation of perturbations over the network and draw an analogy between the response in the hindering regime and stable chaos. Our findings reveal the conditions for the control of clock or activity synchronization of agents with intermediate mobility. In addition, the emergence of the intermediate regime is validated experimentally using a swarm of physical robots with cone-of-vision interactions.
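In a pulse-coupled model, each agent's clock is a phase that fires on reaching 1 and resets; hearing a neighbour fire advances the listener's phase. The sketch below shows a single firing event under K-nearest-neighbour coupling (the values of k and eps and the saturation rule are illustrative choices, not the paper's):

```python
import math

def fire_and_nudge(phases, positions, firer, k=2, eps=0.1):
    # the firing agent resets to 0; its k nearest neighbours advance by
    # eps, saturating at phase 1 (where they would fire in turn)
    order = sorted((j for j in range(len(phases)) if j != firer),
                   key=lambda j: math.dist(positions[j], positions[firer]))
    out = list(phases)
    out[firer] = 0.0
    for j in order[:k]:
        out[j] = min(1.0, out[j] + eps)
    return out

# agent 0 fires; the two closest agents (1 and 2) are nudged, agent 3 is not
new = fire_and_nudge([1.0, 0.5, 0.4, 0.2],
                     [(0, 0), (1, 0), (2, 0), (5, 0)], firer=0)
```

Because only the k nearest agents are coupled, k is exactly the neighbourhood-size knob the abstract identifies for controlling the extent of the synchronization-hindering regime.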

  9. Tier-scalable reconnaissance: the future in autonomous C4ISR systems has arrived: progress towards an outdoor testbed

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; Brooks, Alexander J.-W.; Tarbell, Mark A.; Dohm, James M.

    2017-05-01

    Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous (e.g., the theatre, disaster-stricken areas, etc.) or inaccessible operational areas (e.g., planetary surfaces, space). Such future missions will require increasing degrees of operational autonomy, especially when following up on transient events. Operational autonomy encompasses: (1) automatic characterization of operational areas from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and data gathering; (3) automatic feature extraction including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; and (5) subsequent automatic (re-)deployment and navigation of robotic agents. This paper reports on progress towards several aspects of autonomous C4ISR systems, including: the Caltech-patented and NASA award-winning multi-tiered mission paradigm, robotic platform development (air, ground, water-based), robotic behavior motifs as the building blocks for autonomous tele-commanding, and autonomous decision making based on a Caltech-patented framework comprising sensor-data-fusion (feature-vectors), anomaly detection (clustering and principal component analysis), and target prioritization (hypothetical probing).

  10. Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)

    PubMed Central

    Tang, Hong; Li, Liangzhi; Xiao, Nanfeng

    2017-01-01

    Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few are focused on robotic sensors. We successfully utilize robots in CPSS, and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics. However, traditional methods are not suited for robotic sensors, because of their low efficiency, instability, and non-smooth generated paths. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points to a quintic polynomial, which outputs a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be well applied in the CPSS field. PMID:28218649
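A quintic is the lowest-degree polynomial that can meet position, velocity and acceleration constraints at both ends of a segment, which is why it yields smooth trajectories. With zero boundary velocity and acceleration the coefficients collapse to a closed form, shown below; the regression over optimizer-generated intermediate points is the paper's contribution and is not reproduced here.

```python
def quintic_profile(d, T):
    # s(0)=0, s(T)=d, with zero velocity and acceleration at both ends:
    # s(t) = d * (10 tau^3 - 15 tau^4 + 6 tau^5), where tau = t / T
    def s(t):
        tau = t / T
        return d * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return s

s = quintic_profile(d=1.0, T=2.0)       # travel 1 m in 2 s
waypoints = [s(0.0), s(1.0), s(2.0)]    # starts at 0, halfway at mid-time
```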

  11. Mobile robot navigation modulated by artificial emotions.

    PubMed

    Lee-Johnson, C P; Carnegie, D A

    2010-04-01

    For artificial intelligence research to progress beyond the highly specialized task-dependent implementations achievable today, researchers may need to incorporate aspects of biological behavior that have not traditionally been associated with intelligence. Affective processes such as emotions may be crucial to the generalized intelligence possessed by humans and animals. A number of robots and autonomous agents have been created that can emulate human emotions, but the majority of this research focuses on the social domain. In contrast, we have developed a hybrid reactive/deliberative architecture that incorporates artificial emotions to improve the general adaptive performance of a mobile robot for a navigation task. Emotions are active on multiple architectural levels, modulating the robot's decisions and actions to suit the context of its situation. Reactive emotions interact with the robot's control system, altering its parameters in response to appraisals from short-term sensor data. Deliberative emotions are learned associations that bias path planning in response to eliciting objects or events. Quantitative results are presented that demonstrate situations in which each artificial emotion can be beneficial to performance.

  12. Analysis of decentralized variable structure control for collective search by mobile robots

    NASA Astrophysics Data System (ADS)

    Goldsmith, Steven Y.; Feddema, John T.; Robinett, Rush D., III

    1998-10-01

    This paper presents an analysis of a decentralized coordination strategy for organizing and controlling a team of mobile robots performing collective search. The alpha-beta coordination strategy is a family of collective search algorithms that allow teams of communicating robots to implicitly coordinate their search activities through a division of labor based on self-selected roles. In an alpha-beta team, alpha agents are motivated to improve their status by exploring new regions of the search space. Beta agents are conservative, and rely on the alpha agents to provide advanced information on favorable regions of the search space. An agent selects its current role dynamically based on its current status value relative to the current status values of the other team members. Status is determined by some function of the agent's sensor readings, and is generally a measurement of source intensity at the agent's current location. Variations on the decision rules determining alpha and beta behavior produce different versions of the algorithm that lead to different global properties. The alpha-beta strategy is based on a simple finite-state machine that implements a form of Variable Structure Control (VSC). The VSC system changes the dynamics of the collective system by abruptly switching at defined states to alternative control laws. In VSC, Lyapunov's direct method is often used to design control surfaces which guide the system to a given goal. We introduce the alpha-beta algorithm and present an analysis of the equilibrium point and the global stability of the alpha-beta algorithm based on Lyapunov's method.
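The self-selection of roles can be illustrated with a single comparison against the team's status values. The rule below is one plausible instantiation only; the abstract notes that varying the decision rule changes the global properties, and its exact form is not given here. In this sketch, agents whose sensed source intensity is below the team median explore as alphas, while the rest hold back as betas.

```python
def select_role(my_status, team_statuses):
    # hypothetical decision rule: below-median status -> alpha (explore),
    # otherwise beta (exploit the regions alphas report as favorable)
    ranked = sorted(team_statuses)
    median = ranked[len(ranked) // 2]
    return "alpha" if my_status < median else "beta"

statuses = [3.0, 7.0, 5.0, 9.0]                # source intensity per agent
roles = [select_role(s, statuses) for s in statuses]
```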

  13. Cooperative crossing of traffic intersections in a distributed robot system

    NASA Astrophysics Data System (ADS)

    Rausch, Alexander; Oswald, Norbert; Levi, Paul

    1995-09-01

    In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, as a shared resource, forces a robot to coordinate its actions with those of other robots, e.g. by means of negotiations. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether, the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter, we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario combining aspects of active vision and cooperation illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.

  14. Design, implementation and evaluation of an independent real-time safety layer for medical robotic systems using a force-torque-acceleration (FTA) sensor.

    PubMed

    Richter, Lars; Bruder, Ralf

    2013-05-01

    Most medical robotic systems require direct interaction or contact with the robot. Force-Torque (FT) sensors can easily be mounted to the robot to control the contact pressure. However, evaluation is often done in software, which leads to latencies. To overcome this, we developed an independent safety system, named the FTA sensor, which is based on an FT sensor and an accelerometer. An embedded system (ES) runs a real-time monitoring system that continuously checks the readings. In case of a collision or error, it instantaneously stops the robot via the robot's external emergency stop. We found that the ES implementing the FTA sensor has a maximum latency of [Formula: see text] ms to trigger the robot's emergency stop. For the standard settings in the application of robotized transcranial magnetic stimulation, the robot will stop after at most 4 mm. Therefore, it works as an independent safety layer preventing the patient and/or operator from serious harm.
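The safety logic itself reduces to a hard envelope check evaluated on every sample, independently of the application software. A sketch with illustrative limits follows; the published system's thresholds and channel details are not reproduced here.

```python
def fta_exceeds_limits(force_n, torque_nm, accel_ms2,
                       f_max=20.0, t_max=5.0, a_max=3.0):
    # trip the external emergency stop as soon as any channel leaves its
    # safe envelope (limits are illustrative, not the paper's values)
    return (abs(force_n) > f_max or abs(torque_nm) > t_max
            or abs(accel_ms2) > a_max)

spike = fta_exceeds_limits(35.0, 0.2, 0.1)   # collision spike on force
calm = fta_exceeds_limits(2.0, 0.2, 0.1)     # normal interaction
```

Running this check on a dedicated embedded system, rather than in the application software, is what removes the software latencies the abstract is concerned with.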

  15. Method and System for Controlling a Dexterous Robot Execution Sequence Using State Classification

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Quillin, Nathaniel (Inventor); Platt, Robert J., Jr. (Inventor); Pfeiffer, Joseph (Inventor); Permenter, Frank Noble (Inventor)

    2014-01-01

    A robotic system includes a dexterous robot and a controller. The robot includes a plurality of robotic joints, actuators for moving the joints, and sensors for measuring characteristics of the joints and transmitting those characteristics as sensor signals. The controller receives the sensor signals, and is configured for executing instructions from memory, classifying the sensor signals into distinct classes via a state classification module, monitoring a system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the system state. A method for controlling the robot in the above system includes receiving the signals via the controller, classifying the signals using the state classification module, monitoring the present system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the present system state.
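The pattern described here (classify sensor signals into discrete classes, then gate the execution sequence on the resulting system state) can be sketched with a toy torque-based classifier. The class names, threshold and task table below are invented for illustration and are not from the patent.

```python
def classify_grasp_state(finger_torques, contact_thresh=0.3):
    # map raw joint-torque readings to a discrete system state
    loaded = sum(1 for t in finger_torques if abs(t) > contact_thresh)
    if loaded == 0:
        return "no_contact"
    return "secure_grasp" if loaded >= 2 else "partial_contact"

def next_task(state):
    # the controller selects among alternative work tasks by state
    return {"no_contact": "reach", "partial_contact": "regrasp",
            "secure_grasp": "lift"}[state]

action = next_task(classify_grasp_state([0.5, 0.6, 0.1]))
```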

  16. A Pneumatic Tactile Sensor for Co-Operative Robots

    PubMed Central

    He, Rui; Yu, Jianjun; Zuo, Guoyu

    2017-01-01

    Tactile sensors of comprehensive functions are urgently needed for the advanced robot to co-exist and co-operate with human beings. Pneumatic tactile sensors based on air bladder possess some noticeable advantages for human-robot interaction application. In this paper, we construct a pneumatic tactile sensor and apply it on the fingertip of robot hand to realize the sensing of force, vibration and slippage via the change of the pressure of the air bladder, and we utilize the sensor to perceive the object’s features such as softness and roughness. The pneumatic tactile sensor has good linearity, repeatability and low hysteresis and both its size and sensing range can be customized by using different material as well as different thicknesses of the air bladder. It is also simple and cheap to fabricate. Therefore, the pneumatic tactile sensor is suitable for the application of co-operative robots and can be widely utilized to improve the performance of service robots. We can apply it to the fingertip of the robot to endow the robotic hand with the ability to co-operate with humans and handle the fragile objects because of the inherent compliance of the air bladder. PMID:29125565

  17. Two Formal Gas Models For Multi-Agent Sweeping and Obstacle Avoidance

    NASA Technical Reports Server (NTRS)

    Kerr, Wesley; Spears, Diana; Spears, William; Thayer, David

    2004-01-01

    The task addressed here is a dynamic search through a bounded region, while avoiding multiple large obstacles, such as buildings. In the case of limited sensors and communication, maintaining spatial coverage - especially after passing the obstacles - is a challenging problem. Here, we investigate two physics-based approaches to solving this task with multiple simulated mobile robots, one based on artificial forces and the other based on the kinetic theory of gases. The desired behavior is achieved with both methods, and a comparison is made between them. Because both approaches are physics-based, formal assurances about the multi-robot behavior are straightforward, and are included in the paper.

  18. Long-Term Simultaneous Localization and Mapping in Dynamic Environments

    DTIC Science & Technology

    2015-01-01

    One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the... and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory...

  19. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each robot is equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or produce inaccurate readings. Therefore, the integration of sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot's wheels; and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766
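The flavour of the fuzzy fusion can be conveyed with a two-rule, zero-order Sugeno controller on a single distance input; the full system's nine inputs and 24 rules follow the same pattern of membership grading and weighted blending. All membership breakpoints and rule outputs below are illustrative, not the paper's values.

```python
def mu_near(d, lo=0.0, hi=0.5):
    # membership grade of "NEAR": 1 at d <= lo, falling to 0 at d >= hi
    return max(0.0, min(1.0, (hi - d) / (hi - lo)))

def fuzzy_speeds(front_dist):
    # zero-order Sugeno sketch with two rules:
    #   IF front is NEAR THEN (left, right) = (0.2, -0.2)   turn in place
    #   IF front is FAR  THEN (left, right) = (1.0,  1.0)   full speed
    w_near = mu_near(front_dist)
    w_far = 1.0 - w_near
    return (w_near * 0.2 + w_far * 1.0, w_near * -0.2 + w_far * 1.0)

clear = fuzzy_speeds(1.0)     # far obstacle: drive straight
blocked = fuzzy_speeds(0.0)   # imminent obstacle: rotate away
```

Between the two extremes the wheel speeds blend smoothly, which is what makes fuzzy controllers attractive for noisy distance readings.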

  20. Small UGV platforms for unattended sensors

    NASA Astrophysics Data System (ADS)

    Smuda, Bill; Gerhart, Grant

    2005-10-01

    The wars in Iraq and Afghanistan have shown the importance of sensor and robotic technology as a force multiplier and a tool for moving soldiers out of harm's way. Situations on the ground make soldiers easy targets for snipers and suicide bombers. Sensor and robotics technology reduces risk to soldiers and other personnel at checkpoints, in access areas and on convoy routes. Early user involvement in innovative and aggressive acquisition and development strategies is the key to moving sensor, robotic and associated technology into the hands of the user: the soldier on the ground. This paper discusses activity associated with the rapid development of robotics and sensors, and our field experience with robotics in Iraq and Afghanistan.

  1. A Remote Lab for Experiments with a Team of Mobile Robots

    PubMed Central

    Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio

    2014-01-01

    In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab. PMID:25192316

  2. A remote lab for experiments with a team of mobile robots.

    PubMed

    Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio

    2014-09-04

    In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab.

  3. Intelligent lead: a novel HRI sensor for guide robots.

    PubMed

    Cho, Keum-Bae; Lee, Beom-Hee

    2012-01-01

    This paper addresses the introduction of a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a certain desired distance, allowing the user to work freely. Therefore, it is necessary to acquire control commands and the user's position in real time. We suggest a new sensor fusion system to achieve this objective, and we call this sensor the "intelligent lead". The objective of the intelligent lead is to acquire a stable distance from the user to the robot, speed-control volume and turn-control volume, even when the robot platform with the intelligent lead is shaken on uneven ground. In this paper we explain a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit), and their measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the mobile robot with the intelligent lead is capable of reaching its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can be used as a new HRI sensor combining a joystick and a distance measure, in mobile environments where the robot and the user are moving at the same time.
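The EKF in the paper fuses Kinect, encoder and IMU readings; its essence, at a single scalar state, is the standard gain-weighted blend of a prior estimate and a measurement. A linear, one-dimensional sketch (all numbers made up) is:

```python
def kalman_update(x, p, z, r):
    # scalar Kalman measurement update: blend the estimate (x, p) with a
    # measurement z of variance r, using the optimal gain k
    k = p / (p + r)
    return x + k * (z - x), (1.0 - k) * p

# fuse a prior user-distance estimate with a fresh range reading of
# equal variance: the estimate meets the measurement halfway, and the
# posterior variance halves
x, p = kalman_update(1.00, 0.04, 1.20, 0.04)
```

The full EKF repeats this update per state dimension after linearising the measurement models, which is how the shaking of the platform on uneven ground is filtered out.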

  4. Distributed data fusion across multiple hard and soft mobile sensor platforms

    NASA Astrophysics Data System (ADS)

    Sinsley, Gregory

    One of the biggest challenges currently facing the robotics field is sensor data fusion. Unmanned robots carry many sophisticated sensors including visual and infrared cameras, radar, laser range finders, chemical sensors, accelerometers, gyros, and global positioning systems. By effectively fusing the data from these sensors, a robot would be able to form a coherent view of its world that could then be used to facilitate both autonomous and intelligent operation. Another distinct fusion problem is that of fusing data from teammates with data from onboard sensors. If an entire team of vehicles has the same worldview they will be able to cooperate much more effectively. Sharing worldviews is made even more difficult if the teammates have different sensor types. The final fusion challenge the robotics field faces is that of fusing data gathered by robots with data gathered by human teammates (soft sensors). Humans sense the world completely differently from robots, which makes this problem particularly difficult. The advantage of fusing data from humans is that it makes more information available to the entire team, thus helping each agent to make the best possible decisions. This thesis presents a system for fusing data from multiple unmanned aerial vehicles, unmanned ground vehicles, and human observers. The first issue this thesis addresses is that of centralized data fusion. This is a foundational data fusion issue, which has been very well studied. Important issues in centralized fusion include data association, classification, tracking, and robotics problems. Because these problems are so well studied, this thesis does not make any major contributions in this area, but does review it for completeness. The chapter on centralized fusion concludes with an example unmanned aerial vehicle surveillance problem that demonstrates many of the traditional fusion methods. The second problem this thesis addresses is that of distributed data fusion. 
Distributed data fusion is a younger field than centralized fusion. The main issues in distributed fusion that are addressed are distributed classification and distributed tracking. There are several well established methods for performing distributed fusion that are first reviewed. The chapter on distributed fusion concludes with a multiple unmanned vehicle collaborative test involving an unmanned aerial vehicle and an unmanned ground vehicle. The third issue this thesis addresses is that of soft sensor only data fusion. Soft-only fusion is a newer field than centralized or distributed hard sensor fusion. Because of the novelty of the field, the chapter on soft only fusion contains less background information and instead focuses on some new results in soft sensor data fusion. Specifically, it discusses a novel fuzzy logic based soft sensor data fusion method. This new method is tested using both simulations and field measurements. The biggest issue addressed in this thesis is that of combined hard and soft fusion. Fusion of hard and soft data is the newest area for research in the data fusion community; therefore, some of the largest theoretical contributions in this thesis are in the chapter on combined hard and soft fusion. This chapter presents a novel combined hard and soft data fusion method based on random set theory, which processes random set data using a particle filter. Furthermore, the particle filter is designed to be distributed across multiple robots and portable computers (used by human observers) so that there is no centralized failure point in the system. After laying out a theoretical groundwork for hard and soft sensor data fusion the thesis presents practical applications for hard and soft sensor data fusion in simulation. Through a series of three progressively more difficult simulations, some important hard and soft sensor data fusion capabilities are demonstrated. 
The first simulation demonstrates fusing data from a single soft sensor and a single hard sensor in order to track a car that could be driving normally or erratically. The second simulation adds the extra complication of classifying the type of target to the simulation. The third simulation uses multiple hard and soft sensors, with a limited field of view, to track a moving target and classify it as a friend, foe, or neutral. The final chapter builds on the work done in previous chapters by performing a field test of the algorithms for hard and soft sensor data fusion. The test utilizes an unmanned aerial vehicle, an unmanned ground vehicle, and a human observer with a laptop. The test is designed to mimic a collaborative human and robot search and rescue problem. This test makes some of the most important practical contributions of the thesis by showing that the algorithms that have been developed for hard and soft sensor data fusion are capable of running in real time on relatively simple hardware.
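The distributed random-set filter described above is well beyond a short sketch, but the particle-filter cycle at its core (propagate, weight by the measurement likelihood, resample) fits in a few lines. The target location, noise levels and Gaussian "soft report" likelihood below are invented for illustration.

```python
import math
import random

def pf_step(particles, move, likelihood):
    # one cycle: propagate each particle, weight it by the measurement
    # likelihood, then resample in proportion to weight
    moved = [move(p) for p in particles]
    weights = [likelihood(p) for p in moved]
    total = sum(weights)
    if total == 0:
        return moved          # measurement carried no information
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
# a soft observer reports the target is "around x = 5"
post = pf_step(particles,
               move=lambda x: x + random.gauss(0.0, 0.1),
               likelihood=lambda x: math.exp(-(x - 5.0) ** 2))
estimate = sum(post) / len(post)
```

Distributing such a filter across robots and observers' laptops is then a matter of exchanging particles or sufficient statistics between platforms, which is how the thesis avoids a centralized failure point.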

  5. Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    PubMed Central

    Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando

    2009-01-01

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146

  6. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially since mobility (i.e., locomotion control), routing (i.e., communications) and sensing are characteristics of mobile robots that can each be exploited for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method.
Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming negligible amount of energy for mobility cost. For the second problem, the problem is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel procedure of modified sequential convex approximation that has fast convergence speed. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purpose. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve the optimal moving trajectories for robotic nodes and optimal network links, which are not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications. We investigate the joint design of mobility, data routing, and encoding power to help improving the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
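Why moving relay nodes extends lifetime is easy to see in a toy chain network: transmit power grows with hop distance (distance squared in free space), and lifetime ends when the first transmitting node's battery dies. The model below is an illustrative simplification, not the thesis's optimization formulation.

```python
import math

def chain_lifetime(batteries, positions, alpha=2.0):
    # each node forwards to the next; transmit power ~ distance**alpha;
    # network lifetime = time until the first transmitter dies
    powers = [math.dist(positions[i], positions[i + 1]) ** alpha
              for i in range(len(positions) - 1)]
    return min(b / p for b, p in zip(batteries, powers))

# source at (0,0), sink at (4,0); compare two placements of one relay
off_centre = chain_lifetime([1.0, 1.0], [(0, 0), (3, 0), (4, 0)])
balanced = chain_lifetime([1.0, 1.0], [(0, 0), (2, 0), (4, 0)])
```

Moving the relay to the midpoint equalises the per-hop powers and maximises the minimum battery lifetime, the same balancing effect the distributed algorithms in the thesis pursue at network scale.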

  7. Analysis of Decentralized Variable Structure Control for Collective Search by Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feddema, J.; Goldsmith, S.; Robinett, R.

    1998-11-04

    This paper presents an analysis of a decentralized coordination strategy for organizing and controlling a team of mobile robots performing collective search. The alpha-beta coordination strategy is a family of collective search algorithms that allow teams of communicating robots to implicitly coordinate their search activities through a division of labor based on self-selected roles. In an alpha-beta team, alpha agents are motivated to improve their status by exploring new regions of the search space. Beta agents are conservative, and rely on the alpha agents to provide advanced information on favorable regions of the search space. An agent selects its current role dynamically based on its current status value relative to the current status values of the other team members. Status is determined by some function of the agent's sensor readings, and is generally a measurement of source intensity at the agent's current location. Variations on the decision rules determining alpha and beta behavior produce different versions of the algorithm that lead to different global properties. The alpha-beta strategy is based on a simple finite-state machine that implements a form of Variable Structure Control (VSC). The VSC system changes the dynamics of the collective system by abruptly switching at defined states to alternative control laws. In VSC, Lyapunov's direct method is often used to design control surfaces which guide the system to a given goal. We introduce the alpha-beta algorithm and present an analysis of the equilibrium point and the global stability of the alpha-beta algorithm based on Lyapunov's method.
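    The role-selection rule described above can be sketched in a few lines. This is an illustrative variant, not the authors' exact decision rule: the median split, the 0.5 drift gain, and the agent names are all assumptions.

```python
# Minimal sketch of an alpha-beta style role rule (one of many possible
# decision-rule variants, not the paper's exact one): an agent is "alpha"
# when its status (sensed source intensity) is in the top half of the team,
# and "beta" otherwise. Alphas explore; betas drift toward the best alpha.

import random

def assign_roles(status):
    """status: dict agent -> sensed intensity. Returns dict agent -> role."""
    ranked = sorted(status, key=status.get, reverse=True)
    half = len(ranked) // 2
    return {a: ("alpha" if a in ranked[:half] else "beta") for a in ranked}

def step(positions, status, explore_scale=1.0):
    roles = assign_roles(status)
    best_alpha = max((a for a in roles if roles[a] == "alpha"), key=status.get)
    new_pos = {}
    for a, (x, y) in positions.items():
        if roles[a] == "alpha":   # explore: random walk into new regions
            new_pos[a] = (x + random.uniform(-explore_scale, explore_scale),
                          y + random.uniform(-explore_scale, explore_scale))
        else:                     # conservative: drift toward the best alpha
            bx, by = positions[best_alpha]
            new_pos[a] = (x + 0.5 * (bx - x), y + 0.5 * (by - y))
    return roles, new_pos

roles, _ = step({"r1": (0, 0), "r2": (3, 4), "r3": (6, 1), "r4": (2, 2)},
                {"r1": 0.9, "r2": 0.4, "r3": 0.7, "r4": 0.1})
print(roles)  # r1 and r3 (highest status) become alphas, r2 and r4 betas
```

Roles are re-evaluated every step from relative status, which is the finite-state switching that gives the strategy its variable-structure character.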

  8. Experimental Robot Position Sensor Fault Tolerance Using Accelerometers and Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. The proposed method uses joint torque sensors found in most existing advanced robot designs along with easily locatable, lightweight accelerometers to provide a joint position sensor fault recovery mode. This mode uses the torque sensors along with a virtual passive control law for stability and accelerometers for joint position information. Two methods for conversion from Cartesian acceleration to joint position based on robot kinematics, not integration, are presented. The fault tolerant control method was tested on several joints of a laboratory robot. The controllers performed well with noisy, biased data and a model with uncertain parameters.

  9. Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors.

    PubMed

    Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen

    2016-12-20

    Improved ranging accuracy is obtained through the development of a novel ultrasonic sensor ranging algorithm that, unlike the conventional ranging algorithm, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and for localizing the ultrasonic sensor relative to the inclined plate's reference frame. The scanning method is then leveraged for the omni-directional localization of a mobile robot: ultrasonic sensors installed on the robot follow the spin of the robot, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the front, ±6.21 mm for the lateral and ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method.
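    The plate-recognition step can be illustrated with a hedged sketch, not the paper's exact algorithm: each scan reading (angle, range) gives a point on the plate, and fitting a line to those points yields the plate's inclination and the sensor's perpendicular distance to it.

```python
# Hedged sketch (not the paper's algorithm): recognize a flat plate from a
# rotational range scan. Each reading (scan angle theta, range r) is converted
# to a Cartesian point; a total-least-squares line fit gives the plate's
# direction, and projecting the centroid on the normal gives the distance.

import math

def plate_from_scan(readings):
    """readings: list of (theta_rad, range). Returns (distance, angle_rad)."""
    pts = [(r * math.cos(t), r * math.sin(t)) for t, r in readings]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    syy = sum((p[1] - my) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)   # direction the plate runs
    nx, ny = -math.sin(angle), math.cos(angle)     # plate normal
    distance = abs(mx * nx + my * ny)
    return distance, angle

# Synthetic scan of a plate on the line x = 2 (2 m in front of the sensor):
scan = [(t, 2.0 / math.cos(t)) for t in (-0.4, -0.2, 0.0, 0.2, 0.4)]
d, a = plate_from_scan(scan)
print(round(d, 3), round(abs(math.degrees(a)), 1))  # 2.0 90.0: plate vertical
```

A real implementation would additionally weight each reading by its incidence-angle-dependent uncertainty, which is the refinement the abstract's ranging algorithm addresses.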

  10. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B.

    1991-01-01

    Sensor based robot motion planning research has primarily focused on mobile robots. Consider, however, the case of a robot manipulator expected to operate autonomously in a dynamic environment where unexpected collisions can occur with many parts of the robot. Only a sensor based system capable of generating collision free paths would be acceptable in such situations. Recently, work in this area has been reported in which a deterministic solution for 2-DOF systems has been generated. The arm was sensitized with a 'skin' of infra-red sensors. We have proposed a heuristic (potential field based) methodology for redundant robots with many DOFs. The key concepts are solving the path planning problem by cooperating global and local planning modules, the use of complete information from the sensors and partial (but appropriate) information from a world model, the representation of objects with hyper-ellipsoids in the world model, and the use of variational planning. We intend to sensitize the robot arm with a 'skin' of capacitive proximity sensors. These sensors were developed at NASA, and are exceptionally well suited for the space application. In the first part of the report, we discuss the development and modeling of the capacitive proximity sensor. In the second part we discuss the motion planning algorithm.

  11. Integration of Haptics in Agricultural Robotics

    NASA Astrophysics Data System (ADS)

    Kannan Megalingam, Rajesh; Sreekanth, M. M.; Sivanantham, Vinu; Sai Kumar, K.; Ghanta, Sriharsha; Surya Teja, P.; Reddy, Rajesh G.

    2017-08-01

    Robots can be differentiated into open-loop and closed-loop systems, and many problems arise when there is no feedback from the robot. In this research paper, we discuss all possibilities for achieving a complete closed-loop system for a multiple-DOF robotic arm, used in a coconut tree climbing and cutting robot, by introducing a haptic device. We are working with various sensors, such as tactile, vibration, force and proximity sensors, to obtain feedback. Monitoring of the robotic arm is achieved by graphical user interface software that simulates the working of the robotic arm, relays the feedback of all the real-time analog values produced by the various sensors, and provides real-time graphs for estimating the efficiency of the robot.

  12. An interactive control algorithm used for equilateral triangle formation with robotic sensors.

    PubMed

    Li, Xiang; Chen, Hongcai

    2014-04-22

    This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used for three neighboring robotic sensors, distributed randomly, to self-organize into an equilateral triangle (E) formation. The algorithm is proposed based on triangular geometry and considers the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out to verify the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can self-organize into E formations successfully, regardless of their initial distribution, using the same TFA.
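    The flavor of such a formation law can be sketched with a simple spring-like rule. This is an illustration in the spirit of the TFA, not the paper's control law; the side length and gain are assumptions.

```python
# Illustrative sketch (not the TFA itself): each of the three robots treats
# its two neighbors as virtual springs with rest length L, so every pairwise
# distance is driven toward L, i.e. toward an equilateral triangle of side L.

import math

L = 2.0        # desired side length (assumed parameter)
GAIN = 0.2     # step size of the discrete update (assumed)

def step(positions):
    new = []
    for i, (x, y) in enumerate(positions):
        dx = dy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            # Move along the line to the neighbor by a fraction of the error.
            dx += GAIN * (d - L) * (ox - x) / d
            dy += GAIN * (d - L) * (oy - y) / d
        new.append((x + dx, y + dy))
    return new

pts = [(0.0, 0.0), (1.0, 0.1), (0.3, 1.2)]   # random-ish initial distribution
for _ in range(200):
    pts = step(pts)

sides = [math.dist(pts[i], pts[j]) for i, j in ((0, 1), (1, 2), (0, 2))]
print([round(s, 3) for s in sides])  # all three sides converge to ~2.0
```

As in the paper, convergence of such pairwise-distance rules is typically established with a Lyapunov function (here, the total squared distance error would serve).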

  13. An Interactive Control Algorithm Used for Equilateral Triangle Formation with Robotic Sensors

    PubMed Central

    Li, Xiang; Chen, Hongcai

    2014-01-01

    This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used for three neighboring robotic sensors, distributed randomly, to self-organize into an equilateral triangle (E) formation. The algorithm is proposed based on triangular geometry and considers the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out to verify the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can self-organize into E formations successfully, regardless of their initial distribution, using the same TFA. PMID:24759118

  14. Piezoresistive pressure sensor array for robotic skin

    NASA Astrophysics Data System (ADS)

    Mirza, Fahad; Sahasrabuddhe, Ritvij R.; Baptist, Joshua R.; Wijesundara, Muthu B. J.; Lee, Woo H.; Popa, Dan O.

    2016-05-01

    Robots are starting to transition from the confines of the manufacturing floor to homes, schools, hospitals, and highly dynamic environments. As a result, it is impossible to foresee all the probable operational situations of robots and preprogram the robot behavior for those situations. Among human-robot interaction technologies, haptic communication is an intuitive physical interaction method that can help define operational behaviors for robots cooperating with humans. Multimodal robotic skin with distributed sensors can help robots increase the perception capabilities of their surrounding environments. Electro-Hydro-Dynamic (EHD) printing is a flexible multi-modal sensor fabrication method because of its capability of directly printing a wide range of materials onto substrates with non-uniform topographies. In past work we designed interdigitated comb electrodes as a sensing element and printed piezoresistive strain sensors using customized EHD-printable PEDOT:PSS based inks. We formulated a PEDOT:PSS derivative ink by mixing PEDOT:PSS and DMSO. Bending-induced characterization tests of prototyped sensors showed high sensitivity and sufficient stability. In this paper, we describe SkinCells, robot skin sensor arrays integrated with electronic modules. 4x4 EHD-printed arrays of strain sensors were packaged onto Kapton sheets with silicone encapsulant and interconnected to a custom electronic module that consists of a microcontroller, a Wheatstone bridge with an adjustable digital potentiometer, a multiplexer, and a serial communication unit. Thus, SkinCell's electronics can be used for signal acquisition, conditioning, and networking between sensor modules. Several SkinCells were loaded with controlled pressure, temperature and humidity testing apparatuses, and the testing results are reported in this paper.
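    The readout chain described above (bridge plus multiplexer) can be sketched as follows. This is a hedged illustration, not the SkinCell firmware: the component values, excitation voltage, and the `read_adc` interface are assumptions.

```python
# Sketch of reading a 4x4 piezoresistive array through a quarter Wheatstone
# bridge behind a multiplexer, in the spirit of the SkinCell electronics.
# Component values and the mux/ADC interface are assumed for illustration.

R_FIXED = 10_000.0   # the three fixed bridge resistors, ohms (assumed)
V_EXC = 3.3          # bridge excitation voltage, volts (assumed)

def bridge_to_resistance(v_out):
    """Invert v_out = V_EXC * (Rs/(Rs+R) - 1/2) for the taxel resistance Rs."""
    v = v_out / V_EXC
    return R_FIXED * (0.5 + v) / (0.5 - v)

def scan_array(read_adc, rows=4, cols=4):
    """read_adc(r, c) -> bridge output voltage for the mux-selected taxel."""
    return [[bridge_to_resistance(read_adc(r, c)) for c in range(cols)]
            for r in range(rows)]

# Fake ADC: balanced bridge (0 V) everywhere except the strained taxel (1, 2).
def fake_adc(r, c):
    if (r, c) == (1, 2):
        rs = 11_000.0                      # strained taxel resistance
        return V_EXC * (rs / (rs + R_FIXED) - 0.5)
    return 0.0

grid = scan_array(fake_adc)
print(round(grid[1][2]))   # 11000: the strained taxel stands out
print(round(grid[0][0]))   # 10000: unstrained taxels read the nominal value
```

In the real module the adjustable digital potentiometer would trim the bridge so that unstrained taxels sit near zero output, maximizing ADC dynamic range.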

  15. Algorithms and Sensors for Small Robot Path Following

    NASA Technical Reports Server (NTRS)

    Hogg, Robert W.; Rankin, Arturo L.; Roumeliotis, Stergios I.; McHenry, Michael C.; Helmick, Daniel M.; Bergh, Charles F.; Matthies, Larry

    2002-01-01

    Tracked mobile robots in the 20 kg size class are under development for applications in urban reconnaissance. For efficient deployment, it is desirable for teams of robots to be able to automatically execute path following behaviors, with one or more followers tracking the path taken by a leader. The key challenges to enabling such a capability are (1) to develop sensor packages for such small robots that can accurately determine the path of the leader and (2) to develop path following algorithms for the subsequent robots. To date, we have integrated gyros, accelerometers, compass/inclinometers, odometry, and differential GPS into an effective sensing package. This paper describes the sensor package, sensor processing algorithm, and path tracking algorithm we have developed for the leader/follower problem in small robots and shows the results of a performance characterization of the system. We also document pragmatic lessons learned about design, construction, and electromagnetic interference issues particular to the performance of state sensors on small robots.
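    A leader/follower behavior of this kind can be sketched with a pure-pursuit style rule. This is an illustrative sketch under assumed gains and a unicycle model, not the paper's tracking algorithm: the follower records the leader's pose history and steers toward the point on that path one lookahead distance ahead.

```python
# Hedged sketch of leader/follower path tracking (assumed gains and model):
# the follower keeps a forward-moving index into the leader's recorded path
# and steers toward the first recorded point at least LOOKAHEAD away.

import math

LOOKAHEAD = 1.0   # carrot distance, metres (assumed)
SPEED = 0.8       # follower forward speed, m/s (assumed)
DT = 0.1          # control period, s (assumed)

def advance_carrot(path, pos, idx):
    """Advance (never retreat) to the first path point >= LOOKAHEAD away."""
    while idx < len(path) - 1 and math.dist(path[idx], pos) < LOOKAHEAD:
        idx += 1
    return idx

def pursue(state, carrot):
    x, y, th = state
    desired = math.atan2(carrot[1] - y, carrot[0] - x)
    err = (desired - th + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return (x + SPEED * math.cos(th) * DT,
            y + SPEED * math.sin(th) * DT,
            th + 2.0 * err * DT)            # proportional heading control

leader_path = [(0.1 * i, 0.05 * i) for i in range(220)]  # leader's straight track
state, idx = (0.0, -1.0, 0.0), 0                         # follower starts offset
for _ in range(250):
    idx = advance_carrot(leader_path, state[:2], idx)
    state = pursue(state, leader_path[idx])

cross_track = state[1] - 0.5 * state[0]   # offset from the leader's line
print(round(cross_track, 3))              # small once the follower locks on
```

The forward-only carrot index is the important detail: without it, old path points behind the follower could be re-selected and the follower would double back.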

  16. Multi-layer robot skin with embedded sensors and muscles

    NASA Astrophysics Data System (ADS)

    Tomar, Ankit; Tadesse, Yonas

    2016-04-01

    Soft artificial skin with embedded sensors and actuators is proposed for a crosscutting study of cognitive science on a facial expressive humanoid platform. This paper focuses on artificial muscles suitable for humanoid robots and prosthetic devices for safe human-robot interactions. Novel composite artificial skin consisting of sensors and twisted polymer actuators is proposed. The artificial skin is conformable to intricate geometries and includes protective layers, sensor layers, and actuation layers. Fluidic channels are included in the elastomeric skin to inject fluids in order to control actuator response time. The skin can be used to develop facially expressive humanoid robots or other soft robots. The humanoid robot can be used by computer scientists and other behavioral science personnel to test various algorithms, and to understand and develop more perfect humanoid robots with facial expression capability. The small-scale humanoid robots can also assist ongoing therapeutic treatment research with autistic children. The multilayer skin can be used for many soft robots enabling them to detect both temperature and pressure, while actuating the entire structure.

  17. Scalable fabric tactile sensor arrays for soft bodies

    NASA Astrophysics Data System (ADS)

    Day, Nathan; Penaloza, Jimmy; Santos, Veronica J.; Killpack, Marc D.

    2018-06-01

    Soft robots have the potential to transform the way robots interact with their environment. This is due to their low inertia and inherent ability to more safely interact with the world without damaging themselves or the people around them. However, existing sensing for soft robots has at least partially limited their ability to control interactions with their environment. Tactile sensors could enable soft robots to sense interaction, but most tactile sensors are made from rigid substrates and are not well suited to applications for soft robots which can deform. In addition, the benefit of being able to cheaply manufacture soft robots may be lost if the tactile sensors that cover them are expensive and their resolution does not scale well for manufacturability. This paper discusses the development of a method to make affordable, high-resolution, tactile sensor arrays (manufactured in rows and columns) that can be used for sensorizing soft robots and other soft bodies. However, the construction results in a sensor array that exhibits significant amounts of cross-talk when two taxels in the same row are compressed. Using the same fabric-based tactile sensor array construction design, two different methods for cross-talk compensation are presented. The first uses a mathematical model to calculate a change in resistance of each taxel directly. The second method introduces additional simple circuit components that enable us to isolate each taxel electrically and relate voltage to force directly. Fabric sensor arrays are demonstrated for two different soft-bodied applications: an inflatable single link robot and a human wrist.
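    The first, model-based compensation approach can be illustrated on a 2x2 array. The parasitic-path model and the fixed-point inversion below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of model-based cross-talk compensation for a 2x2 resistive taxel
# array (illustrative model). Measuring taxel (i, j) in a row/column array
# also sees a parasitic series path through the other three taxels, so
# R_meas = R_true || R_parasitic; a fixed-point iteration inverts the model.

def parallel(a, b):
    return a * b / (a + b)

def parasitic_path(r, i, j):
    # series sneak path: same-row neighbor + diagonal + same-column neighbor
    oi, oj = 1 - i, 1 - j
    return r[i][oj] + r[oi][oj] + r[oi][j]

def measure(r_true):
    return [[parallel(r_true[i][j], parasitic_path(r_true, i, j))
             for j in range(2)] for i in range(2)]

def compensate(r_meas, iters=50):
    est = [row[:] for row in r_meas]          # start from the measured values
    for _ in range(iters):
        est = [[1.0 / (1.0 / r_meas[i][j] - 1.0 / parasitic_path(est, i, j))
                for j in range(2)] for i in range(2)]
    return est

truth = [[10e3, 4e3], [10e3, 10e3]]           # taxel (0, 1) is pressed
meas = measure(truth)
est = compensate(meas)
print(round(meas[0][0]))  # well below 10000: cross-talk corrupts the reading
print(round(est[0][0]))   # ~10000: compensation recovers the true value
```

The paper's second method sidesteps this computation entirely by electrically isolating each taxel with extra circuit components, trading parts count for a direct voltage-to-force reading.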

  18. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots

    PubMed Central

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-01-01

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the RGB-D sensor's depth information, onto a grid map generated by using the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed. PMID:29186843
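    The projection step behind a 2.5D elevation map can be sketched in a few lines. Grid resolution and frame conventions are assumed for illustration; a real pipeline would also fuse the laser occupancy grid and scan matching described above.

```python
# Minimal sketch of the 2.5D elevation map idea: depth points from an RGB-D
# sensor are projected onto a 2D grid, keeping the maximum height per cell,
# so each cell stores "how tall is the obstacle here" rather than full 3D.

CELL = 0.05   # grid resolution in metres (assumed)

def elevation_map(points, width=40, depth=40):
    """points: (x, y, z) in metres in the map frame. Returns a height grid."""
    grid = [[0.0] * width for _ in range(depth)]
    for x, y, z in points:
        gx, gy = int(x / CELL), int(y / CELL)
        if 0 <= gx < depth and 0 <= gy < width and z > grid[gx][gy]:
            grid[gx][gy] = z        # keep the tallest return per cell
    return grid

# A 0.3 m-tall box occupying cells around (1.0 m, 1.0 m):
cloud = [(1.0 + dx * 0.01, 1.0 + dy * 0.01, 0.3)
         for dx in range(5) for dy in range(5)]
grid = elevation_map(cloud)
print(grid[20][20])   # 0.3: the obstacle height lands in the right cell
print(grid[0][0])     # 0.0: free cell
```

Keeping only one height per cell is what makes the map "2.5D": it is far cheaper than a full 3D voxel map yet still tells the ground robot which cells it can traverse.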

  19. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots.

    PubMed

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-11-25

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the RGB-D sensor's depth information, onto a grid map generated by using the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed.

  20. Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems

    ERIC Educational Resources Information Center

    Ham, MyungJoo

    2009-01-01

    We propose market-based coordinated task allocation mechanisms, which allocate complex tasks requiring the synchronized, collaborative services of multiple robot agents, and an auditing mechanism, which ensures the proper behavior of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…

  1. SSTAC/ARTS review of the draft Integrated Technology Plan (ITP). Volume 8: Aerothermodynamics Automation and Robotics (A/R) systems sensors, high-temperature superconductivity

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Viewgraphs of briefings presented at the SSTAC/ARTS review of the draft Integrated Technology Plan (ITP) on aerothermodynamics, automation and robotics systems, sensors, and high-temperature superconductivity are included. Topics covered include: aerothermodynamics; aerobraking; aeroassist flight experiment; entry technology for probes and penetrators; automation and robotics; artificial intelligence; NASA telerobotics program; planetary rover program; science sensor technology; direct detector; submillimeter sensors; laser sensors; passive microwave sensing; active microwave sensing; sensor electronics; sensor optics; coolers and cryogenics; and high temperature superconductivity.

  2. SSTAC/ARTS review of the draft Integrated Technology Plan (ITP). Volume 8: Aerothermodynamics Automation and Robotics (A/R) systems sensors, high-temperature superconductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Viewgraphs of briefings presented at the SSTAC/ARTS review of the draft Integrated Technology Plan (ITP) on aerothermodynamics, automation and robotics systems, sensors, and high-temperature superconductivity are included. Topics covered include: aerothermodynamics; aerobraking; aeroassist flight experiment; entry technology for probes and penetrators; automation and robotics; artificial intelligence; NASA telerobotics program; planetary rover program; science sensor technology; direct detector; submillimeter sensors; laser sensors; passive microwave sensing; active microwave sensing; sensor electronics; sensor optics; coolers and cryogenics; and high temperature superconductivity.

  3. Magician Simulator: A Realistic Simulator for Heterogenous Teams of Autonomous Robots. MAGIC 2010 Challenge

    DTIC Science & Technology

    2011-02-07

    Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking ... employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and Computer Vision template-based feature ... tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains

  4. Multiple-Agent Air/Ground Autonomous Exploration Systems

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.

    2007-01-01

    Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.

  5. A review of physical security robotics at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roerig, S.C.

    1990-01-01

    As an outgrowth of research into physical security technologies, Sandia is investigating the role of robotics in security systems. Robotics may allow more effective utilization of guard forces, especially in scenarios where personnel would be exposed to harmful environments. Robots can provide intrusion detection and assessment functions for failed sensors or transient assets, can test existing fixed site sensors, and can gather additional intelligence and dispense delaying elements. The Robotic Security Vehicle (RSV) program for DOE/OSS is developing a fieldable prototype for an exterior physical security robot based upon a commercial four wheel drive vehicle. The RSV will be capable of driving itself, being driven remotely, or being driven by an onboard operator around a site and will utilize its sensors to alert an operator to unusual conditions. The Remote Security Station (RSS) program for the Defense Nuclear Agency is developing a proof-of-principle robotic system which will be used to evaluate the role, and associated cost, of robotic technologies in exterior security systems. The RSS consists of an independent sensor pod, a mobile sensor platform and a control and display console. Sensor data fusion is used to optimize the system's intrusion detection performance. These programs are complementary: the RSV concentrates on developing autonomous mobility, while the RSS thrust is on mobile sensor employment. 3 figs.

  6. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
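    The TDE step used by the hearing robots can be sketched with a plain cross-correlation. The sample rate, microphone spacing, and far-field model below are illustrative assumptions, not the paper's hardware parameters.

```python
# Sketch of time delay estimation (TDE) for a two-microphone array: the delay
# is the lag that maximizes the cross-correlation of the two signals; with
# the microphone spacing it gives the sound's angle of arrival (far-field).

import math

FS = 8000          # sample rate, Hz (assumed)
MIC_SPACING = 0.2  # metres (assumed)
C = 343.0          # speed of sound, m/s

def estimate_delay(sig_a, sig_b, max_lag):
    """Lag (in samples) of sig_b relative to sig_a maximizing correlation."""
    def corr(lag):
        return sum(sig_a[i] * sig_b[i + lag]
                   for i in range(len(sig_a))
                   if 0 <= i + lag < len(sig_b))
    return max(range(-max_lag, max_lag + 1), key=corr)

def bearing(delay_samples):
    """Angle of arrival from the inter-microphone delay, far-field model."""
    tau = delay_samples / FS
    return math.degrees(math.asin(max(-1.0, min(1.0, tau * C / MIC_SPACING))))

# Synthetic source: mic B hears the same chirp 3 samples later than mic A.
chirp = [math.sin(0.3 * n * n) for n in range(100)]
mic_a = chirp + [0.0] * 10
mic_b = [0.0] * 3 + chirp + [0.0] * 7
lag = estimate_delay(mic_a, mic_b, max_lag=5)
print(lag, round(bearing(lag), 1))  # 3 samples and the corresponding angle
```

Real systems typically use a generalized cross-correlation (e.g. GCC-PHAT) for robustness to reverberation, but the lag-maximization principle is the same.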

  7. Self-organized adaptation of a simple neural circuit enables complex robot behaviour

    NASA Astrophysics Data System (ADS)

    Steingrube, Silke; Timme, Marc; Wörgötter, Florentin; Manoonpong, Poramate

    2010-03-01

    Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.

  8. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information, including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.

  9. Controlling the autonomy of a reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David

    2004-09-01

    In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions, such as movement detection, and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, waypoint navigation and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.

  10. Determining robot actions for tasks requiring sensor interaction

    NASA Technical Reports Server (NTRS)

    Budenske, John; Gini, Maria

    1989-01-01

    The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection performs signal and control processing as well as serving as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems for planning the coordination of activities to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and is being developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.

  11. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications †

    PubMed Central

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori

    2017-01-01

    Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors (an on-chip temperature sensor and off-chip capacitive and resistive tactile sensors) and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954

  12. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications.

    PubMed

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Nonomura, Yutaka; Muroyama, Masanori

    2017-08-28

    Robot tactile sensation can enhance human-robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as "sensor platform LSI") as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors (an on-chip temperature sensor and off-chip capacitive and resistive tactile sensors) and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated.

  13. Infrared-Proximity-Sensor Modules For Robot

    NASA Technical Reports Server (NTRS)

    Parton, William; Wegerif, Daniel; Rosinski, Douglas

    1995-01-01

    Collision-avoidance system for articulated robot manipulators uses infrared proximity sensors grouped together in an array of sensor modules. The sensor modules, called "sensorCells," are distributed-processing board-level products for acquiring data from proximity sensors strategically mounted on robot manipulators. Each sensorCell is self-contained and consists of multiple sensing elements, discrete electronics, a microcontroller and communications components. The modules are connected to a central control computer by a redundant serial digital communication subsystem including both serial and multi-drop buses. The system detects objects made of various materials at distances of up to 50 cm. For some materials, such as thermal protection system tiles, the detection range is reduced to approximately 20 cm.

  14. Semi autonomous mine detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, a countermine sensor, and a number of integrated software packages which provide real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude – from an autonomous robotic perspective – the rapid development and deployment of fieldable systems.

  15. Compact Tactile Sensors for Robot Fingers

    NASA Technical Reports Server (NTRS)

    Martin, Toby B.; Lussy, David; Gaudiano, Frank; Hulse, Aaron; Diftler, Myron A.; Rodriguez, Dagoberto; Bielski, Paul; Butzer, Melisa

    2004-01-01

    Compact transducer arrays that measure spatial distributions of force or pressure have been demonstrated as prototypes of tactile sensors to be mounted on fingers and palms of dexterous robot hands. The pressure- or force-distribution feedback provided by these sensors is essential for the further development and implementation of robot-control capabilities for humanlike grasping and manipulation.

  16. The Design of Artificial Intelligence Robot Based on Fuzzy Logic Controller Algorithm

    NASA Astrophysics Data System (ADS)

    Zuhrie, M. S.; Munoto; Hariadi, E.; Muslim, S.

    2018-04-01

    The Artificial Intelligence Robot is a wheeled robot driven by a DC motor that moves along a wall using an ultrasonic sensor as an obstacle detector. This study uses the HC-SR04 ultrasonic sensor to measure the distance between the robot and the wall based on ultrasonic waves. The robot uses a Fuzzy Logic Controller to adjust the speed of the DC motor. When the ultrasonic sensor detects a certain distance, the sensor data is processed on an ATmega8 and then passed to an ATmega16, where it is evaluated against the fuzzy rules to drive the DC motor speed. The program used to adjust the speed of the DC motor was written with CodeVisionAVR (CVAVR). The readable distance of the ultrasonic sensor is 3 cm to 250 cm with a response time of 0.5 s. Testing the robot on walls with a setpoint value of 9 cm to 10 cm produces an average error of -12% on the L-shaped wall, -8% on the T-shaped wall, -8% on the U-shaped wall, and -1% on the square wall.
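The distance-to-speed mapping described in this record can be sketched as a zero-order Sugeno fuzzy controller. The membership breakpoints and rule outputs below are illustrative placeholders, not the paper's tuned values:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_cm):
    """Zero-order Sugeno controller: wall distance (cm) -> motor speed (% PWM).

    Rules: NEAR -> slow (20%), OK -> cruise (50%), FAR -> fast (80%).
    Output is the firing-strength-weighted average of the rule outputs.
    """
    near = tri(distance_cm, 0, 5, 10)
    ok = tri(distance_cm, 5, 10, 15)
    far = tri(distance_cm, 10, 15, 250)
    num = near * 20 + ok * 50 + far * 80
    den = near + ok + far
    return num / den if den else 50.0  # default to cruise if out of range
```

On the real robot the same rule evaluation runs on the ATmega16, with the ultrasonic reading arriving from the ATmega8.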

  17. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment and its architecture, present a novel combined range and vision sensor, and report our recent results in controlling the robot, in the real-time detection of objects using their color, and in the processing of the robot's range and vision sensor data for navigation.

  18. System Wide Joint Position Sensor Fault Tolerance in Robot Systems Using Cartesian Accelerometers

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    Joint position sensors are necessary for most robot control systems. A single position sensor failure in a normal robot system can greatly degrade performance. This paper presents a method to obtain position information from Cartesian accelerometers without integration. Depending on the number and location of the accelerometers, the proposed system can tolerate the loss of multiple position sensors. A solution technique suitable for real-time implementation is presented. Simulations were conducted using 5 triaxial accelerometers to recover from the loss of up to 4 joint position sensors on a 7-degree-of-freedom robot moving in general three-dimensional space. The simulations show good estimation performance using non-ideal accelerometer measurements.
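One way position can be obtained from accelerometers without integration is to read the gravity direction in link coordinates. The planar, quasi-static sketch below illustrates only that core idea; the paper's method handles general 3-D motion with multiple triaxial accelerometers and is considerably more involved:

```python
import math

def joint_angle_from_gravity(ax, ay):
    """Recover a revolute joint's angle from a link-mounted accelerometer.

    In the quasi-static planar case the accelerometer reads the gravity
    vector expressed in link coordinates, so the link inclination follows
    directly from atan2 -- no integration, hence no drift accumulation.
    Sign/axis conventions here are illustrative: angle 0 corresponds to
    gravity along the sensor's +y axis.
    """
    return math.atan2(ax, ay)
```

Because no integration is involved, a momentary sensor dropout does not corrupt later estimates, which is what makes such measurements usable as a fallback when a joint encoder fails.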

  19. FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links.

    PubMed

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the parameters of interest, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
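The signal chain named above (oversampling with averaging decimation, FIR filtering, finite differences) can be sketched in software as follows, bearing in mind that the paper implements these stages as parallel FPGA hardware; tap values and rates are illustrative:

```python
def decimate_average(samples, factor):
    """Averaging decimation: replace each block of `factor` oversampled
    readings with its mean, trading rate for effective resolution."""
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

def fir(signal, taps):
    """Direct-form FIR filter with zero initial state."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, h in enumerate(taps):
            if i - j >= 0:
                acc += h * signal[i - j]
        out.append(acc)
    return out

def finite_difference(x, dt):
    """First-order backward difference, e.g. position -> velocity."""
    return [(x[i] - x[i - 1]) / dt for i in range(1, len(x))]
```

A typical pipeline would decimate the raw accelerometer stream, low-pass it with `fir`, then differentiate the encoder angle with `finite_difference` to obtain velocity.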

  20. FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links

    PubMed Central

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the parameters of interest, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345

  1. A Force-Sensing System on Legs for Biomimetic Hexapod Robots Interacting with Unstructured Terrain

    PubMed Central

    Wu, Rui; Li, Changle; Zang, Xizhe; Zhang, Xuehe; Jin, Hongzhe; Zhao, Jie

    2017-01-01

    The tiger beetle can maintain its stability by controlling the interaction force between its legs and an unstructured terrain while it runs. The biomimetic hexapod robot mimics a tiger beetle, and a comprehensive force sensing system combined with certain algorithms can provide force information that can help the robot understand the unstructured terrain it interacts with. This study introduces a comprehensive leg force sensing system for a hexapod robot, identical for all six legs. First, the layout and configuration of the sensing system are designed according to the structure and sizes of the legs. Second, the joint torque sensors, the 3-DOF foot-end force sensor and the force information processing module are designed, and the force sensor performance parameters are tested by simulations and experiments. Moreover, the force sensing system is implemented within the robot control architecture. Finally, the experimental evaluation of the leg force sensor system on the hexapod robot is discussed and its performance is verified. PMID:28654003

  2. Extensibility in local sensor based planning for hyper-redundant manipulators (robot snakes)

    NASA Technical Reports Server (NTRS)

    Choset, Howie; Burdick, Joel

    1994-01-01

    Partial Shape Modification (PSM) is a local sensor feedback method used for hyper-redundant robot manipulators, in which the redundancy is very large or infinite, as in a robot snake. This degree of redundancy enables local obstacle avoidance and end-effector placement in real time. Due to the large number of joints or actuators in a hyper-redundant manipulator, small displacement errors easily accumulate into large errors in the position of the tip relative to the base. Accuracy can be improved by a local sensor-based planning method in which sensors are distributed along the length of the hyper-redundant robot. This paper extends the local sensor-based planning strategy beyond the limitation of the manipulator's fixed length when its joint limits are met. This is achieved with an algorithm in which the length of the deforming part of the robot is variable. Thus, the robot's local avoidance of obstacles is improved through the enhancement of its extensibility.

  3. Grounding language in action and perception: From cognitive agents to humanoid robots

    NASA Astrophysics Data System (ADS)

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of humanoid robotic platforms, and specifically of the open-source iCub platform, for the study of embodied cognition.

  4. Tier-scalable reconnaissance: the challenge of sensor optimization, sensor deployment, sensor fusion, and sensor interoperability

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; George, Thomas; Tarbell, Mark A.

    2007-04-01

    Robotic reconnaissance operations are called for in extreme environments: not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also potentially hazardous or inaccessible operational areas on Earth, such as mine fields, battlefield environments, enemy-occupied territories, terrorist-infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real-time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering- and safety-constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit, atmosphere, surface/subsurface) and multi-agent (satellite, UAV/blimp, surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed real-time reconnaissance. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability.

  5. The mechanical design of a humanoid robot with flexible skin sensor for use in psychiatric therapy

    NASA Astrophysics Data System (ADS)

    Burns, Alec; Tadesse, Yonas

    2014-03-01

    In this paper, a humanoid robot is presented for ultimate use in the rehabilitation of children with mental disorders, such as autism. Creating affordable and efficient humanoids could assist therapy for psychiatric disabilities by offering multimodal communication between the humanoid and humans. Yet humanoid development needs a seamless integration of artificial muscles, sensors, controllers and structures. We have designed a human-like robot that has 15 DOF, is 580 mm tall, and has a 925 mm arm span, built using a rapid prototyping system. The robot has a human-like appearance and movement. Flexible sensors around the arms and hands for safe human-robot interaction and a two-wheel mobile platform for maneuverability are incorporated in the design. The robot has facial features for illustrating human-friendly behavior. The mechanical design of the robot and the characterization of the flexible sensors are presented. A comprehensive study of the upper body design, mobile base, actuator selection, electronics, and performance evaluation is included in this paper.

  6. Modeling Analysis of a Six-axis Force/Tactile Sensor for Robot Fingers and a Method for Decreasing Error

    NASA Astrophysics Data System (ADS)

    Luo, Minghua; Shimizu, Etsuro; Zhang, Feifei; Ito, Masanori

    This paper describes a six-axis force/tactile sensor for robot fingers. A mathematical model of this sensor is proposed. With this model, the grasping force, its moments, and the touching position of a robot finger holding an object can be calculated. A new sensor was fabricated based on this model, with its elastic sensing unit made of a brazen plate. A new compensation method for decreasing error is also proposed, and the performance of the sensor is examined. The test results show close agreement between the theoretical input and the measured output of the sensor, and the performance of the compensated sensor is clearly better than that of the sensor without compensation.

  7. Hybrid position and orientation tracking for a passive rehabilitation table-top robot.

    PubMed

    Wojewoda, K K; Culmer, P R; Gallagher, J F; Jackson, A E; Levesley, M C

    2017-07-01

    This paper presents a real time hybrid 2D position and orientation tracking system developed for an upper limb rehabilitation robot. Designed to work on a table-top, the robot is to enable home-based upper-limb rehabilitative exercise for stroke patients. Estimates of the robot's position are computed by fusing data from two tracking systems, each utilizing a different sensor type: laser optical sensors and a webcam. Two laser optical sensors are mounted on the underside of the robot and track the relative motion of the robot with respect to the surface on which it is placed. The webcam is positioned directly above the workspace, mounted on a fixed stand, and tracks the robot's position with respect to a fixed coordinate system. The optical sensors sample the position data at a higher frequency than the webcam, and a position and orientation fusion scheme is proposed to fuse the data from the two tracking systems. The proposed fusion scheme is validated through an experimental set-up whereby the rehabilitation robot is moved by a humanoid robotic arm replicating previously recorded movements of a stroke patient. The results prove that the presented hybrid position tracking system can track the position and orientation with greater accuracy than the webcam or optical sensors alone. The results also confirm that the developed system is capable of tracking recovery trends during rehabilitation therapy.
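The core of such a hybrid tracker is combining high-rate relative displacements (from the optical sensors) with sparse absolute fixes (from the webcam). A simple corrective scheme illustrates the idea; the gain, data layout and 2-D-translation-only state are illustrative simplifications of the paper's position-and-orientation fusion:

```python
def fuse(rel_steps, abs_fixes, gain=0.2):
    """Dead-reckon from per-tick relative displacements, pulling the
    estimate toward each absolute fix when one is available.

    rel_steps: list of per-tick (dx, dy) from the high-rate sensor.
    abs_fixes: dict mapping tick index -> absolute (x, y) from the camera.
    gain:      fraction of the innovation applied at each fix (0..1).
    Returns the estimated (x, y) at every tick.
    """
    x, y = 0.0, 0.0
    path = []
    for k, (dx, dy) in enumerate(rel_steps):
        x += dx          # high-rate relative update (drifts over time)
        y += dy
        if k in abs_fixes:  # low-rate absolute correction bounds the drift
            ax, ay = abs_fixes[k]
            x += gain * (ax - x)
            y += gain * (ay - y)
        path.append((x, y))
    return path
```

The relative sensor preserves smooth high-frequency motion between camera frames, while each absolute fix prevents the unbounded drift that pure dead reckoning would accumulate.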

  8. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    PubMed Central

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930

  9. Estimation of visual maps with a robot network equipped with vision sensors.

    PubMed

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
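A particle filter of the kind used above weights pose hypotheses by the likelihood of the current measurements and then resamples. The sketch below shows that weighting step for a single landmark at a known position with range-only observations; the paper's Rao-Blackwellized filter additionally estimates every landmark per particle with an EKF and uses visual descriptors for data association:

```python
import math

def likelihood_weights(particles, landmark, obs_range, sigma=0.5):
    """Normalized weights: Gaussian likelihood of the observed range to a
    known landmark, evaluated at each (x, y) pose particle."""
    lx, ly = landmark
    w = [math.exp(-(math.hypot(lx - x, ly - y) - obs_range) ** 2
                  / (2 * sigma ** 2))
         for (x, y) in particles]
    total = sum(w)
    return [wi / total for wi in w]

def resample(particles, weights, us):
    """Resample by inverting the cumulative weight distribution.

    `us` supplies the uniform draws externally so the function is
    deterministic and testable; a real filter would generate them.
    """
    out = []
    for u in us:
        c = 0.0
        for p, w in zip(particles, weights):
            c += w
            if u <= c:
                out.append(p)
                break
    return out
```

Particles far from any pose consistent with the measurement receive near-zero weight and are eliminated at resampling, concentrating the filter on plausible trajectories.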

  10. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's task of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  11. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor and places two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. We conclude that the single-camera algorithm needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.
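Both approaches ultimately reduce to estimating the rigid transformation that maps known control points onto their measured positions. A 2-D least-squares version of that step can be sketched as follows (the paper works in 3-D with calibrated cameras; this planar analogue only illustrates the computation):

```python
import math

def rigid_transform_2d(src, dst):
    """Least-squares rotation theta and translation (tx, ty) mapping
    control points `src` onto their measured positions `dst` (2-D).

    Centroids are removed, the rotation is recovered from cross/dot
    accumulators, and the translation aligns the rotated src centroid
    with the dst centroid.
    """
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s
        xd -= cx_d; yd -= cy_d
        sxx += xs * xd + ys * yd   # cosine accumulator (dot products)
        sxy += xs * yd - ys * xd   # sine accumulator (cross products)
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty
```

With the transformation in hand, every subsequent sensor reading can be re-expressed in the global coordinate system, which is exactly how the control points compensate the robot's positioning error.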

  12. Fused smart sensor network for multi-axis forward kinematics estimation in industrial robots.

    PubMed

    Rodriguez-Donate, Carlos; Osornio-Rios, Roque Alfredo; Rivera-Guillen, Jesus Rooney; Romero-Troncoso, Rene de Jesus

    2011-01-01

    Flexible manipulator robots have wide industrial application. Robot performance requires adequately sensing the robot's position and orientation, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot's position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents online measurement of each joint's angular position and online forward kinematics estimation. The contribution of this work is a fused smart sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. To evaluate the smart sensor network's performance, three real-operation-oriented paths are executed and monitored on a 6-degree-of-freedom robot.
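The Kalman-filter fusion of encoder and accelerometer data can be illustrated with a toy scalar filter for a single joint angle. The random-walk process model and the noise variances below are illustrative assumptions; the paper runs its filters in parallel FPGA hardware, one per joint:

```python
def kalman_fuse(z_encoder, z_accel, r_enc=1e-4, r_acc=1e-2, q=1e-5):
    """Scalar Kalman filter fusing two measurement streams of one angle.

    z_encoder, z_accel: per-tick angle estimates from each sensor.
    r_enc, r_acc:       measurement noise variances (encoder is trusted more).
    q:                  process noise variance (random-walk model).
    Returns the fused estimate at every tick.
    """
    x, p = 0.0, 1.0  # state estimate and its variance
    out = []
    for ze, za in zip(z_encoder, z_accel):
        p += q  # predict: random walk inflates uncertainty
        for z, r in ((ze, r_enc), (za, r_acc)):
            k = p / (p + r)      # Kalman gain
            x += k * (z - x)     # sequential measurement update
            p *= (1 - k)
        out.append(x)
    return out
```

Because the two updates are applied sequentially each tick, the noisier accelerometer only nudges the estimate, while the encoder dominates; the accelerometer still contributes when the encoder misses deformation-induced motion.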

  13. GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force

    PubMed Central

    Yuan, Wenzhen; Dong, Siyuan; Adelson, Edward H.

    2017-01-01

    Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how they can use it, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike traditional tactile sensors, which measure contact force, GelSight essentially measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force and slip can be inferred from the sensor’s deformation as well. In particular, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with emphasis on the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithms for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show experimental evaluations of GelSight’s performance on geometry and force measurement. With its high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation. PMID:29186053

  14. A plant-inspired robot with soft differential bending capabilities.

    PubMed

    Sadeghi, A; Mondini, A; Del Dottore, E; Mattoli, V; Beccai, L; Taccola, S; Lucarotti, C; Totaro, M; Mazzolai, B

    2016-12-20

    We present the design and development of a plant-inspired robot, named Plantoid, with sensorized robotic roots. Natural roots have a multi-sensing capability and show a soft bending behaviour to follow or escape from various environmental parameters (i.e., tropisms). Analogously, we implement soft bending capabilities in our robotic roots by designing and integrating soft spring-based actuation (SSBA) systems using helical springs to transmit the motor power in a compliant manner. Each robotic tip integrates four different sensors, including customised flexible touch and innovative humidity sensors together with commercial gravity and temperature sensors. We show how the embedded sensing capabilities together with a root-inspired control algorithm lead to the implementation of tropic behaviours. Future applications for such plant-inspired technologies include soil monitoring and exploration, useful for agriculture and environmental fields.

  15. Dynamic electronic institutions in agent oriented cloud robotic systems.

    PubMed

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the formation, reformation and dissolution of institutions is automated, leading to run-time adaptation in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  16. 3D Printed Wearable Sensors with Liquid Metals for the Pose Detection of Snakelike Soft Robots.

    PubMed

    Zhou, Luyu; Gao, Qing; Zhan, Jun-Fu; Xie, Chao-Qi; Fu, Jianzhong; He, Yong

    2018-06-18

    Liquid metal-based flexible sensors, which use an advanced liquid conductive material as the sensing element, are emerging as a promising solution for measuring large deformations. One of the biggest challenges for precise control of soft robots is detecting their real-time positions, and existing fabrication methods are unable to produce flexible sensors that match the shape of soft robots. In this report, we first describe a novel 3D-printed multi-function inductance-based flexible and stretchable sensor with liquid metals (LMs), capable of measuring both axial tension and curvature. The sensor is fabricated with a purpose-developed coaxial liquid metal 3D printer by co-printing silicone rubber and LMs. Due to its solenoid shape, the sensor can be easily installed on snakelike soft robots and can accurately distinguish different degrees of tensile and bending deformation. We determined the structural parameters of the sensor and demonstrated its excellent stability and reliability. As a demonstration, we used the sensor to measure the curvature of a finger and to feed back the position of an endoscope, a typical snakelike structure. Because its bending deformation is consistent with the actual working state of a soft robot, and because of its unique shape, this sensor has good prospects for practical pose-detection applications.

  17. Localization of Mobile Robots Using Odometry and an External Vision Sensor

    PubMed Central

    Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina

    2010-01-01

    This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318

  18. Localization of mobile robots using odometry and an external vision sensor.

    PubMed

    Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina

    2010-01-01

    This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields.

  19. Grounding language in action and perception: from cognitive agents to humanoid robots.

    PubMed

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of humanoid robotic platform, and specifically of the open source iCub platform, for the study of embodied cognition. Copyright 2010 Elsevier B.V. All rights reserved.

  20. Application of Kalman filters to robot calibration

    NASA Technical Reports Server (NTRS)

    Whitney, D. E.; Junkel, E. F.

    1983-01-01

    This report explores new uses of Kalman filter theory in manufacturing systems (robots in particular). The Kalman filter allows the robot to read its sensors plus external sensors and learn from its experience. In effect, the robot is given primitive intelligence. The study, which is applicable to any type of powered kinematic linkage, focuses on the calibration of a manipulator.

  1. Application of ultrasonic sensor for measuring distances in robotics

    NASA Astrophysics Data System (ADS)

    Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.

    2018-05-01

    Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects as an alternative to technical vision. Humanoid robots, like robots of other types, are first of all equipped with sensory systems similar to the human senses. However, this approach is not enough. All possible types and kinds of sensors should be used, including those similar to the senses of other animals and creatures (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in the wild. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder based on the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison. A subroutine for working with the sensor is provided.
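    The basic arithmetic behind such a rangefinder can be sketched as follows: the HC-SR04 reports distance via the width of its echo pulse, which covers the round trip of the sound wave. This is a minimal illustration (the speed-of-sound constant is the common room-temperature value; real firmware would also time out on missing echoes):

    ```python
    SPEED_OF_SOUND_CM_PER_US = 0.0343  # approx. speed of sound at ~20 degrees C

    def echo_to_distance_cm(echo_pulse_us):
        """Convert the HC-SR04 echo pulse width (microseconds) to distance (cm).
        The pulse spans the round trip, so the one-way distance is half."""
        return echo_pulse_us * SPEED_OF_SOUND_CM_PER_US / 2.0

    # A 1000 microsecond echo corresponds to roughly 17 cm.
    print(round(echo_to_distance_cm(1000), 2))
    ```

    On the STM32 side the pulse width would come from a timer capturing the echo pin's rising and falling edges; the conversion itself is the same.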

  2. Consensus-based distributed estimation in multi-agent systems with time delay

    NASA Astrophysics Data System (ADS)

    Abdelmawgoud, Ahmed

    In recent years, research in the field of cooperative control of swarms of robots, especially Unmanned Aerial Vehicles (UAVs), has advanced, driven by the growth of UAV applications. The ability to track targets using UAVs has a wide range of applications, not only civilian but also military. For civilian applications, UAVs can perform tasks including, but not limited to: mapping an unknown area, weather forecasting, land surveying, and search and rescue missions. For military personnel, UAVs can track and locate a variety of objects, including the movement of enemy vehicles. Consensus problems arise in a number of applications, including coordination of UAVs, information processing in wireless sensor networks, and distributed multi-agent optimization. We consider widely studied consensus algorithms for processing data sensed by different sensors in wireless sensor networks of dynamic agents. Every agent in the network forms a weighted average of its own estimate of some state with the values received from its neighboring agents. We introduce a novel consensus-based distributed estimation algorithm that reaches a consensus under time-delay constraints. The performance of the proposed algorithm was observed in a scenario where a swarm of UAVs measures the location of a ground maneuvering target. We assume that each UAV computes its state prediction and shares it with its neighbors only; however, the shared information reaches different agents with varying time delays. The entire group of UAVs must reach a consensus on the target state. Different scenarios were also simulated to examine effectiveness and performance in terms of overall estimation error, disagreement between delayed and non-delayed agents, and time to reach a consensus for each parameter contributing to the proposed algorithm.
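    A minimal, delay-free sketch of the weighted-average consensus update described above (the ring topology and step size are illustrative; the paper's contribution is the delayed case, which this baseline does not model):

    ```python
    def consensus_step(states, neighbors, eps=0.2):
        """One synchronous consensus iteration: each agent moves toward
        the average of its neighbors' values by a step of size eps."""
        return [x + eps * sum(states[j] - x for j in neighbors[i])
                for i, x in enumerate(states)]

    # Ring of 4 agents with differing initial estimates of the target state.
    states = [0.0, 2.0, 4.0, 6.0]
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    for _ in range(100):
        states = consensus_step(states, neighbors)
    # On this symmetric graph the agents converge to the initial average, 3.0.
    print([round(x, 3) for x in states])
    ```

    The step size must satisfy eps < 1/(max degree) for stability; here eps = 0.2 with degree 2 is comfortably inside that bound.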

  3. Multi-sensor electrometer

    NASA Technical Reports Server (NTRS)

    Gompf, Raymond (Inventor); Buehler, Martin C. (Inventor)

    2003-01-01

    An array of triboelectric sensors is used for testing the electrostatic properties of a remote environment. The sensors may be mounted in the heel of a robot arm scoop. To determine the triboelectric properties of a planet surface, the robot arm scoop may be rubbed on the soil of the planet and the triboelectrically developed charge measured. By having an array of sensors, different insulating materials may be measured simultaneously. The insulating materials may be selected so their triboelectric properties cover a desired range. By mounting the sensor on a robot arm scoop, the measurements can be obtained during an unmanned mission.

  4. Proposed Methodology for Application of Human-like gradual Multi-Agent Q-Learning (HuMAQ) for Multi-robot Exploration

    NASA Astrophysics Data System (ADS)

    Narayan Ray, Dip; Majumder, Somajyoti

    2014-07-01

    Several attempts have been made by researchers around the world to develop autonomous exploration techniques for robots, but designing such algorithms for unstructured and unknown environments has always been an important issue. Human-like gradual Multi-agent Q-learning (HuMAQ) is a technique developed for autonomous robotic exploration in unknown (and even unimaginable) environments. It has been successfully implemented in a multi-agent single-robot system. HuMAQ uses the concept of Subsumption architecture, a well-known behaviour-based architecture, for prioritizing the agents of the multi-agent system, and executes only the most common action out of all the different actions recommended by different agents. Instead of using a new state-action table (Q-table) each time, HuMAQ reuses the immediate past table for efficient and faster exploration. The proof of learning has been established both theoretically and practically. HuMAQ has the potential to be used in different and difficult situations as well as applications. The same architecture has been modified for multi-robot exploration in an environment. Apart from all the agents used in the single-robot system, agents for inter-robot communication and coordination/cooperation with other similar robots have been introduced in the present research. The current work uses a series of indigenously developed identical autonomous robotic systems, communicating with each other through the ZigBee protocol.
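    The tabular Q-learning update that HuMAQ builds on can be sketched as follows. This is a generic illustration only: the action set, state names, and learning parameters are hypothetical, and HuMAQ's specific contributions (reusing the past Q-table and subsumption-style arbitration among agents) are indicated only in comments:

    ```python
    ACTIONS = ["forward", "left", "right"]  # hypothetical action set

    def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
        """Standard tabular Q-learning update on a dict-backed Q-table.
        HuMAQ warm-starts q from the previous run's table rather than
        starting from an empty one."""
        best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    q = {}  # in HuMAQ this would be the Q-table carried over from the previous run
    q_update(q, "corridor", "forward", 1.0, "corridor")
    print(q[("corridor", "forward")])  # 0.5 after one rewarded step from an empty table
    ```

    Warm-starting simply means seeding `q` with the previous run's entries, so earlier experience immediately biases action selection in the new environment.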

  5. Multi-Axis Force/Torque Sensor Based on Simply-Supported Beam and Optoelectronics.

    PubMed

    Noh, Yohan; Bimbo, Joao; Sareh, Sina; Wurdemann, Helge; Fraś, Jan; Chathuranga, Damith Suresh; Liu, Hongbin; Housden, James; Althoefer, Kaspar; Rhode, Kawal

    2016-11-17

    This paper presents a multi-axis force/torque sensor based on a simply-supported beam and optoelectronic technology. The sensor's main advantages are: (1) low power consumption; (2) low noise level in comparison with conventional methods of force sensing (e.g., using strain gauges); (3) the ability to be embedded into different mechanical structures; (4) miniaturisation; (5) simple manufacture and customisation to fit a wide range of robot systems; and (6) low-cost fabrication and assembly of the sensor structure. For these reasons, the proposed multi-axis force/torque sensor can be used in a wide range of application areas including medical robotics, manufacturing, and areas involving human-robot interaction. This paper shows the application of our force/torque sensor concept to flexible continuum manipulators, namely a cylindrical MIS (Minimally Invasive Surgery) robot, and includes its design, fabrication, and evaluation tests.

  6. Object recognition for autonomous robot utilizing distributed knowledge database

    NASA Astrophysics Data System (ADS)

    Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji

    2003-10-01

    In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects to understand them, including their physical characteristics.

  7. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    PubMed

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  8. Autonomous Shepherding Behaviors of Multiple Target Steering Robots.

    PubMed

    Lee, Wonki; Kim, DaeEun

    2017-11-25

    This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among many interesting tasks that may be performed using swarm robots, we propose a biologically-inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity to each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to the robots' position relative to the flock. We define several independent behavior rules; each agent calculates to what extent it will move by summarizing each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. From simulation results with multiple robots, we show that each robot performs actions for the shepherding behavior, and only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach.
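    The rule-summing behavior described above can be sketched for a single sheep-like agent: each rule contributes a displacement, and the agent moves by their sum. The weights, sensing radius, and positions below are illustrative, not the paper's parameters:

    ```python
    def sheep_step(pos, flock_center, shepherd_pos,
                   w_cohesion=0.1, w_flee=0.5, flee_radius=5.0):
        """Sum of two behavior rules for one 2-D sheep-like agent:
        drift toward the flock centre, flee a shepherd inside sensing range."""
        dx = w_cohesion * (flock_center[0] - pos[0])
        dy = w_cohesion * (flock_center[1] - pos[1])
        sx, sy = pos[0] - shepherd_pos[0], pos[1] - shepherd_pos[1]
        dist = (sx * sx + sy * sy) ** 0.5
        if 0 < dist < flee_radius:          # repulsion only when shepherd is sensed
            dx += w_flee * sx / dist
            dy += w_flee * sy / dist
        return (pos[0] + dx, pos[1] + dy)

    # A sheep at (1, 0) with flock centre (0, 0) and a shepherd at (-2, 0)
    # is pushed away from the shepherd even though cohesion pulls it back.
    print(sheep_step((1.0, 0.0), (0.0, 0.0), (-2.0, 0.0)))
    ```

    A steering agent exploits exactly this flee term: by positioning itself behind the nearest flock member relative to the goal, it drives that member (and, through cohesion, the flock) toward the desired location.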

  9. Robotic tool positioning process using a multi-line off-axis laser triangulation sensor

    NASA Astrophysics Data System (ADS)

    Pinto, T. C.; Matos, G.

    2018-03-01

    Proper positioning of a friction stir welding head for pin insertion, driven by a closed chain robot, is important to ensure quality repair of cracks. A multi-line off-axis laser triangulation sensor was designed to be integrated to the robot, allowing relative measurements of the surface to be repaired. This work describes the sensor characteristics, its evaluation and the measurement process for tool positioning to a surface point of interest. The developed process uses a point of interest image and a measured point cloud to define the translation and rotation for tool positioning. Sensor evaluation and tests are described. Keywords: laser triangulation, 3D measurement, tool positioning, robotics.

  10. Touch Sensor for Robots

    NASA Technical Reports Server (NTRS)

    Primus, H. C.

    1986-01-01

    Touch sensor for robot hands provides information about shape of grasped object and force exerted by gripper on object. Pins projecting from sensor create electrical signals when pressed. When grasped object depresses pin, it contacts electrode under it, connecting electrode to common electrode. Sensor indicates where, and how firmly, gripper has touched object.

  11. Developmental and Evolutionary Lexicon Acquisition in Cognitive Agents/Robots with Grounding Principle: A Short Review.

    PubMed

    Rasheed, Nadia; Amin, Shamsudin H M

    2016-01-01

    Grounded language acquisition is an important issue, particularly for facilitating human-robot interaction in an intelligent and effective way. Evolutionary and developmental language acquisition are two innovative and important methodologies for grounding language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agent or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue.

  12. Developmental and Evolutionary Lexicon Acquisition in Cognitive Agents/Robots with Grounding Principle: A Short Review

    PubMed Central

    Rasheed, Nadia; Amin, Shamsudin H. M.

    2016-01-01

    Grounded language acquisition is an important issue, particularly for facilitating human-robot interaction in an intelligent and effective way. Evolutionary and developmental language acquisition are two innovative and important methodologies for grounding language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agent or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue. PMID:27069470

  13. Time response for sensor sensed to actuator response for mobile robotic system

    NASA Astrophysics Data System (ADS)

    Amir, N. S.; Shafie, A. A.

    2017-11-01

    Time and performance of a mobile robot are very important in completing given tasks to achieve its ultimate goal. Tasks may need to be done within a time constraint to ensure smooth operation of a mobile robot, which can result in better performance. The main purpose of this research was to improve the performance of a mobile robot so that it can complete the given tasks within the time constraint. The problem to be solved is minimizing the time interval between sensor detection and actuator response. The research objective is to analyse the real-time operating system performance of sensors and actuators on one microcontroller and on two microcontrollers for a mobile robot. The task for the mobile robot in this research is line following with obstacle avoidance. Three runs were carried out for the task, and the time from sensor detection to actuator response was recorded. Overall, the results show that the two-microcontroller system has a better response time than the one-microcontroller system. For this research, the average difference in response time is very important for improving the internal performance between the occurrence of a task, sensor detection, decision making, and actuator response of a mobile robot. This research helped to develop a mobile robot with better performance that can complete tasks within the time constraint.

  14. A Raman spectroscopy bio-sensor for tissue discrimination in surgical robotics.

    PubMed

    Ashok, Praveen C; Giardini, Mario E; Dholakia, Kishan; Sibbett, Wilson

    2014-01-01

    We report the development of a fiber-based Raman sensor to be used in tumour margin identification during endoluminal robotic surgery. Although this is a generic platform, the sensor we describe was adapted for the ARAKNES (Array of Robots Augmenting the KiNematics of Endoluminal Surgery) robotic platform. On such a platform, the Raman sensor is intended to identify ambiguous tissue margins during robot-assisted surgeries. To maintain sterility of the probe during surgical intervention, a disposable sleeve was specially designed. A straightforward user-compatible interface was implemented where a supervised multivariate classification algorithm was used to classify different tissue types based on specific Raman fingerprints so that it could be used without prior knowledge of spectroscopic data analysis. The protocol avoids inter-patient variability in data and the sensor system is not restricted for use in the classification of a particular tissue type. Representative tissue classification assessments were performed using this system on excised tissue. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Multi-Axis Force Sensor for Human-Robot Interaction Sensing in a Rehabilitation Robotic Device.

    PubMed

    Grosu, Victor; Grosu, Svetlana; Vanderborght, Bram; Lefeber, Dirk; Rodriguez-Guerrero, Carlos

    2017-06-05

    Human-robot interaction sensing is a compulsory feature in modern robotic systems where direct contact or close collaboration is desired. Rehabilitation and assistive robotics are fields where interaction forces are required both for safety and for increased control performance of the device, with a more comfortable experience for the user. In order to provide efficient interaction feedback between the user and the rehabilitation device, high-performance sensing units are demanded. This work introduces a novel design of a multi-axis force sensor dedicated to measuring pelvis interaction forces in a rehabilitation exoskeleton device. The sensor is conceived such that it has different sensitivity characteristics for the three axes of interest, and it includes movable parts to allow free rotations and limit crosstalk errors. Integrated sensor electronics make it easy to acquire and process data for a real-time distributed system architecture. Two of the developed sensors are integrated and tested in a complex gait rehabilitation device for safe and compliant control.

  16. Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies

    DTIC Science & Technology

    2006-07-01

    and the use of lightweight portable robotic sensor platforms. robotics has reached a point where some generalities of HRI transcend specific...displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor ...tasks that do not involve mobility and usually involve camera control or data fusion from sensors Active search: Search tasks that involve mobility

  17. Fused Smart Sensor Network for Multi-Axis Forward Kinematics Estimation in Industrial Robots

    PubMed Central

    Rodriguez-Donate, Carlos; Osornio-Rios, Roque Alfredo; Rivera-Guillen, Jesus Rooney; de Jesus Romero-Troncoso, Rene

    2011-01-01

    Flexible manipulator robots have wide industrial application. Robot performance requires adequate sensing of position and orientation, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot position and orientation. To overcome these problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents online measurement of the joint's angular position and online forward kinematics estimation. The contribution of this work is to propose a fused smart sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. To evaluate the smart sensor network performance, three real-operation-oriented paths are executed and monitored in a 6-degree-of-freedom robot. PMID:22163850

  18. Peg-in-Hole Assembly Based on Two-phase Scheme and F/T Sensor for Dual-arm Robot

    PubMed Central

    Zhang, Xianmin; Zheng, Yanglong; Ota, Jun; Huang, Yanjiang

    2017-01-01

    This paper focuses on peg-in-hole assembly based on a two-phase scheme and force/torque sensor (F/T sensor) for a compliant dual-arm robot, the Baxter robot. The coordinated operations of human beings in assembly applications are applied to the behaviors of the robot. A two-phase assembly scheme is proposed to overcome the inaccurate positioning of the compliant dual-arm robot. The position and orientation of assembly pieces are adjusted respectively in an active compliant manner according to the forces and torques derived by a six degrees-of-freedom (6-DOF) F/T sensor. Experiments are conducted to verify the effectiveness and efficiency of the proposed assembly scheme. The performances of the dual-arm robot are consistent with those of human beings in the peg-in-hole assembly process. The peg and hole with 0.5 mm clearance for round pieces and square pieces can be assembled successfully. PMID:28862691
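    A minimal sketch of the active-compliance idea described above: the forces measured by the F/T sensor nudge the commanded position in the opposite direction, as a spring would. The stiffness value and the pure-translation form are hypothetical simplifications, not the paper's controller (which also adjusts orientation from the measured torques):

    ```python
    def compliant_adjust(pos, force, stiffness=1000.0):
        """Active-compliance position correction: displace each Cartesian axis
        opposite to the measured contact force, by force/stiffness (N / (N/m) = m)."""
        return tuple(p - f / stiffness for p, f in zip(pos, force))

    # A lateral contact force of 5 N in +x nudges the peg 5 mm in -x;
    # -2 N in z nudges it 2 mm in +z. Positions are in metres.
    pos = compliant_adjust((0.100, 0.200, 0.300), (5.0, 0.0, -2.0))
    print([round(p, 4) for p in pos])
    ```

    Iterating this correction while lowering the peg lets lateral contact forces steer it into the hole despite the arm's inaccurate positioning.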

  19. Vision Guided Intelligent Robot Design And Experiments

    NASA Astrophysics Data System (ADS)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert systems approaches in solving real world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  1. Maintaining Limited-Range Connectivity Among Second-Order Agents

    DTIC Science & Technology

    2016-07-07

we consider ad-hoc networks of robotic agents with double integrator dynamics. For such networks, the connectivity maintenance problems are: (i) do...hoc networks of mobile autonomous agents. This loose terminology refers to groups of robotic agents with limited mobility and communication...connectivity can be preserved. 3.1. Networks of robotic agents with second-order dynamics and the connectivity maintenance problem. We begin by

  2. A Novel Tactile Sensor with Electromagnetic Induction and Its Application on Stick-Slip Interaction Detection

    PubMed Central

    Liu, Yanjie; Han, Haijun; Liu, Tao; Yi, Jingang; Li, Qingguo; Inoue, Yoshio

    2016-01-01

Real-time detection of contact states, such as stick-slip interaction between a robot and an object on its end effector, is crucial for the robot to grasp and manipulate the object steadily. This paper presents a novel tactile sensor based on electromagnetic induction and its application to stick-slip interaction detection. An equivalent cantilever-beam model of the tactile sensor was built, capable of relating the sensor output to the friction applied on the sensor. With this tactile sensor, a new method to detect stick-slip interaction on the contact surface between the object and the sensor is proposed, based on the characteristics of friction change. Furthermore, a prototype was developed for a typical application, stable wafer transferring on a wafer transfer robot, by considering the spatial magnetic field distribution and the sensor size according to the requirements of wafer transfer. The experimental results validate the sensing mechanism of the tactile sensor and verify the feasibility of detecting stick-slip on the contact surface between the wafer and the sensor. The sensing mechanism also provides a new approach to detecting the contact state on soft-rigid surfaces in other robot-environment interaction systems. PMID:27023545

  3. Molecular robots with sensors and intelligence.

    PubMed

    Hagiya, Masami; Konagaya, Akihiko; Kobayashi, Satoshi; Saito, Hirohide; Murata, Satoshi

    2014-06-17

CONSPECTUS: What we can call a molecular robot is a set of molecular devices such as sensors, logic gates, and actuators integrated into a consistent system. The molecular robot is supposed to react autonomously to its environment by receiving molecular signals and making decisions by molecular computation. Building such a system has long been a dream of scientists; however, despite extensive efforts, systems having all three functions (sensing, computation, and actuation) have not yet been realized. This Account introduces an ongoing research project that focuses on the development of molecular robotics, funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan). This 5-year project started in July 2012 and is titled "Development of Molecular Robots Equipped with Sensors and Intelligence". The major issues in the field of molecular robotics all correspond to a feedback (i.e., plan-do-see) cycle of a robotic system. More specifically, these issues are (1) developing molecular sensors capable of handling a wide array of signals, (2) developing amplification methods of signals to drive molecular computing devices, (3) accelerating molecular computing, (4) developing actuators that are controllable by molecular computers, and (5) providing bodies of molecular robots encapsulating the above molecular devices, which implement the conformational changes and locomotion of the robots. In this Account, the latest contributions to the project are reported. There are four research teams in the project that specialize in sensing, intelligence, amoeba-like actuation, and slime-like actuation, respectively. The molecular sensor team is focusing on the development of molecular sensors that can handle a variety of signals. This team is also investigating methods to amplify signals from the molecular sensors. 
The molecular intelligence team is developing molecular computers and is currently focusing on a new photochemical technology for accelerating DNA-based computations. It also introduces the novel computational models behind the various kinds of molecular computers that are necessary for designing such computers. The amoeba robot team aims at constructing amoeba-like robots. The team is trying to incorporate motor proteins such as kinesin, together with microtubules (MTs), for use as actuators implemented in a liposomal compartment as a robot body. It is also developing a methodology to link DNA-based computation and molecular motor control. The slime robot team focuses on the development of slime-like robots. The team is evaluating various gels, including DNA gel and BZ gel, for use as actuators, as well as the body material in which to disperse various molecular devices. It is also trying to control the gel actuators with DNA signals coming from molecular computers.

  4. A flexible slip sensor using triboelectric nanogenerator approach

    NASA Astrophysics Data System (ADS)

    Wang, Xudong; Liang, Jiaming; Xiao, Yuxiang; Wu, Yichuan; Deng, Yang; Wang, Xiaohao; Zhang, Min

    2018-03-01

With the rapid development of robotic technology, tactile sensors for robots have gained great attention from academic and industry researchers. Tactile sensors for slip detection are essential for human-like steady control of a dexterous robot hand. In this paper, we propose and demonstrate a flexible slip sensor based on a triboelectric nanogenerator with a seesaw structure. The sensor is composed of two porous PDMS layers separated by an inverted trapezoid structure with a height of 500 μm. In order to customize the sensitivity of the sensor, the porous PDMS was fabricated by mixing PDMS with deionized water thoroughly and then removing the water with heat. Laser-induced porous graphene and aluminium serve as the pair of contact materials. To detect slip from different directions, two sets of electrode pairs were used. Experimental results show that a distinct difference between the static state and the moment a slip happens can be detected. In addition, the output voltage of the sensors increased with slip velocity from 0.25 mm/s to 2.5 mm/s. The flexible slip sensor proposed here shows potential applications in smart robotics and prosthesis.
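
    The slip-detection idea above (a distinct change in sensor output at the moment of slip) can be illustrated with a minimal threshold detector on voltage samples. The threshold value and the sample data are invented for illustration and are not taken from the paper.

```python
def detect_slip(voltages, threshold=0.5):
    """Flag sample indices where the sensor output jumps by more than
    `threshold` volts between consecutive samples (candidate slip onsets)."""
    return [i for i in range(1, len(voltages))
            if abs(voltages[i] - voltages[i - 1]) > threshold]
```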

  5. Neural network-based landmark detection for mobile robot

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Minoru; Okada, Hiroyuki; Watanabe, Nobuo

    1996-03-01

A mobile robot essentially has only relative position data for the real world. However, in many cases the robot has to know where it is located. In those cases, a useful method is to detect landmarks in the real world and adjust the robot's position using the detected landmarks. From this point of view, it is essential to develop a mobile robot that can accomplish its path plan using natural or artificial landmarks. However, artificial landmarks are often difficult to construct, and natural landmarks are very complicated to detect. In this paper, a method of acquiring the landmarks needed for path planning from the sensor data of a mobile robot is described. The landmarks discussed here are natural ones, obtained by compressing the robot's sensor data. The sensor data are compressed and memorized using a five-layered neural network called a sand glass model. The network is trained so that its input and output are identical copies of the robot's sensor data. The intermediate (bottleneck) output of the network then yields a compressed representation that serves as the landmark data. Even if the sensor data are ambiguous or voluminous, landmark detection is easy because the data are compressed and classified by the neural network. Using the last three layers, the compressed landmark data can be expanded back to an approximation of the original data. The trained neural network categorizes newly detected sensor data to a known landmark.
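
    The data flow of such a "sand glass" (autoencoder) network can be sketched as follows. The layer sizes are illustrative assumptions, the training loop is omitted, and random weights stand in for a trained network; only the compress-then-classify structure is shown.

```python
import math
import random

random.seed(0)

# Layer sizes of the "sand glass" network: wide in, narrow bottleneck, wide out.
# An 8-dimensional sensor vector and 3-unit bottleneck are assumed values.
SIZES = [8, 6, 3, 6, 8]

def make_weights(sizes):
    """Random weight matrices (stand-ins for trained identity-mapping weights)."""
    return [[[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
            for m, n in zip(sizes, sizes[1:])]

def layer(x, w):
    """One fully connected tanh layer."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]

def encode(x, weights):
    """Run the first half of the network; the bottleneck activation is the
    compressed landmark code."""
    for w in weights[:2]:
        x = layer(x, w)
    return x

def nearest_landmark(code, stored):
    """Categorize a compressed code by its nearest stored landmark code."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(stored, key=lambda name: dist(code, stored[name]))
```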

  6. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    NASA Astrophysics Data System (ADS)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
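
    One common way to integrate a drift-free but intermittent visual heading fix with a fast-drifting gyroscope, in the spirit of the integration described above, is a complementary filter. This is a generic sketch with assumed rates, gains, and data, not the article's actual algorithm.

```python
def fuse_heading(gyro_rates, visual_fixes, dt=0.1, alpha=0.98):
    """Complementary filter: dead-reckon heading from gyro rate samples and,
    whenever the fish-eye sensor yields an absolute fix (non-None), pull the
    estimate toward it to cancel gyro drift."""
    heading = 0.0
    out = []
    for rate, fix in zip(gyro_rates, visual_fixes):
        heading += rate * dt              # integrate angular rate
        if fix is not None:               # blend in the absolute visual fix
            heading = alpha * heading + (1 - alpha) * fix
        out.append(heading)
    return out
```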

  7. A Cognitive Framework for Resource-Aware Sensor Net Organizations

    DTIC Science & Technology

    2008-07-01

The AFRL/RI team had procured and were experimenting with several Sony AIBO® "dog" platforms as low-cost mobility robots that could potentially...available. a cycle, it keeps its radio turned on for that long to hear if any agent (including console and regional nodes) wants to communicate. If...08), Estoril, Portugal, May 2008. To appear. [10] Daniel D. Corkill and Victor R. Lesser. The use of meta-level control for coordination in a dis

  8. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematic parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
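
    The Kalman-filter fusion step can be illustrated in one dimension: two measurements of the same pose component, each with its own noise variance, are combined into an estimate with lower variance than either input. The variances and readings below are made-up numbers; the paper's actual KF and MOIFA formulations are multivariate.

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: prior estimate x with variance p
    is corrected by measurement z with noise variance r."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1 - k) * p

def fuse_pose_component(z_vision, r_vision, z_angle, r_angle):
    """Fuse one pose component measured by both the visual sensor and the
    angle sensor: treat the vision reading as the prior, then update."""
    return kalman_update(z_vision, r_vision, z_angle, r_angle)
```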

  9. Multi-agent robotic systems and applications for satellite missions

    NASA Astrophysics Data System (ADS)

    Nunes, Miguel A.

    A revolution in the space sector is happening. It is expected that in the next decade there will be more satellites launched than in the previous sixty years of space exploration. Major challenges are associated with this growth of space assets such as the autonomy and management of large groups of satellites, in particular with small satellites. There are two main objectives for this work. First, a flexible and distributed software architecture is presented to expand the possibilities of spacecraft autonomy and in particular autonomous motion in attitude and position. The approach taken is based on the concept of distributed software agents, also referred to as multi-agent robotic system. Agents are defined as software programs that are social, reactive and proactive to autonomously maximize the chances of achieving the set goals. Part of the work is to demonstrate that a multi-agent robotic system is a feasible approach for different problems of autonomy such as satellite attitude determination and control and autonomous rendezvous and docking. The second main objective is to develop a method to optimize multi-satellite configurations in space, also known as satellite constellations. This automated method generates new optimal mega-constellations designs for Earth observations and fast revisit times on large ground areas. The optimal satellite constellation can be used by researchers as the baseline for new missions. The first contribution of this work is the development of a new multi-agent robotic system for distributing the attitude determination and control subsystem for HiakaSat. The multi-agent robotic system is implemented and tested on the satellite hardware-in-the-loop testbed that simulates a representative space environment. The results show that the newly proposed system for this particular case achieves an equivalent control performance when compared to the monolithic implementation. 
In terms of computational efficiency, it is found that the multi-agent robotic system has a consistently lower CPU load of 0.29 +/- 0.03 compared to 0.35 +/- 0.04 for the monolithic implementation, a 17.1 % reduction. The second contribution of this work is the development of a multi-agent robotic system for the autonomous rendezvous and docking of multiple spacecraft. To compute the maneuvers, guidance, navigation and control (GNC) algorithms are implemented as part of the multi-agent robotic system. The navigation and control functions are implemented using existing algorithms, but one important contribution of this section is the introduction of a new six degrees of freedom guidance method which is part of the guidance, navigation and control architecture. This new method is an explicit solution to the guidance problem, and is particularly useful for real-time guidance in attitude and position, as opposed to typical guidance methods which are based on numerical solutions and are therefore computationally intensive. A simulation scenario is run for docking four CubeSats deployed radially from a launch vehicle. Considering fully actuated CubeSats, the simulations show docking maneuvers that are successfully completed within 25 minutes, which is approximately 30% of a full orbital period in low earth orbit. The final section investigates the problem of optimization of satellite constellations for fast revisit time, and introduces a new method to generate different constellation configurations that are evaluated with a genetic algorithm. Two case studies are presented. The first is the optimization of a constellation for rapid coverage of the oceans of the globe in 24 hours or less. Results show that for an 80 km sensor swath width, 50 satellites are required to cover the oceans with a 24 hour revisit time. The second constellation configuration study focuses on the optimization for the rapid coverage of the North Atlantic Tracks for air traffic monitoring in 3 hours or less. 
The results show that for a fixed swath width of 160 km and a 3 hour revisit time, 52 satellites are required.

  10. Integrated multi-sensor package (IMSP) for unmanned vehicle operations

    NASA Astrophysics Data System (ADS)

    Crow, Eddie C.; Reichard, Karl; Rogan, Chris; Callen, Jeff; Seifert, Elwood

    2007-10-01

    This paper describes recent efforts to develop integrated multi-sensor payloads for small robotic platforms for improved operator situational awareness and ultimately for greater robot autonomy. The focus is on enhancements to perception through integration of electro-optic, acoustic, and other sensors for navigation and inspection. The goals are to provide easier control and operation of the robot through fusion of multiple sensor outputs, to improve interoperability of the sensor payload package across multiple platforms through the use of open standards and architectures, and to reduce integration costs by embedded sensor data processing and fusion within the sensor payload package. The solutions investigated in this project to be discussed include: improved capture, processing and display of sensor data from multiple, non-commensurate sensors; an extensible architecture to support plug and play of integrated sensor packages; built-in health, power and system status monitoring using embedded diagnostics/prognostics; sensor payload integration into standard product forms for optimized size, weight and power; and the use of the open Joint Architecture for Unmanned Systems (JAUS)/ Society of Automotive Engineers (SAE) AS-4 interoperability standard. This project is in its first of three years. This paper will discuss the applicability of each of the solutions in terms of its projected impact to reducing operational time for the robot and teleoperator.

  11. Peer-to-peer model for the area coverage and cooperative control of mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Tan, Jindong; Xi, Ning

    2004-09-01

This paper presents a novel model and distributed algorithms for the cooperation and redeployment of mobile sensor networks. A mobile sensor network is composed of a collection of wirelessly connected mobile robots equipped with a variety of sensors. In such a sensor network, each mobile node has sensing, computation, communication, and locomotion capabilities. The locomotion ability enhances the autonomous deployment of the system. The system can be rapidly deployed to hostile environments, inaccessible terrains or disaster relief operations. The mobile sensor network is essentially a cooperative multiple-robot system. This paper first presents a peer-to-peer model to define the relationship between neighboring communicating robots. Delaunay triangulation and Voronoi diagrams are used to define the geometrical relationship between sensor nodes. This distributed model allows formal analysis of the fusion of the spatio-temporal sensory information of the network. Based on the distributed model, this paper discusses a fault-tolerant algorithm for autonomous self-deployment of the mobile robots. The algorithm considers the environment constraints, the presence of obstacles and the nonholonomic constraints of the robots. The distributed algorithm enables the system to reconfigure itself so that the area covered by the system can be enlarged. Simulation results have shown the effectiveness of the distributed model and deployment algorithms.
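
    The Delaunay neighbor relation used in such peer-to-peer models can be sketched with a brute-force construction: a triangle of nodes is Delaunay exactly when its circumcircle contains no other node, and two nodes are neighbors when they share a Delaunay edge. This O(n^4) version and the node coordinates are for illustration only; practical systems use incremental or library implementations.

```python
from itertools import combinations

def circumcircle(a, b, c):
    """Center and squared radius of the circle through a, b, c
    (None for degenerate, collinear triples)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), (ux - ax)**2 + (uy - ay)**2

def delaunay_neighbors(pts):
    """Brute-force Delaunay: keep a triangle iff its circumcircle is empty
    of other nodes; neighbors are nodes sharing a kept edge."""
    nbrs = {i: set() for i in range(len(pts))}
    for i, j, k in combinations(range(len(pts)), 3):
        cc = circumcircle(pts[i], pts[j], pts[k])
        if cc is None:
            continue
        (ux, uy), r2 = cc
        if any((px - ux)**2 + (py - uy)**2 < r2 - 1e-9
               for m, (px, py) in enumerate(pts) if m not in (i, j, k)):
            continue
        for u, v in ((i, j), (i, k), (j, k)):
            nbrs[u].add(v)
            nbrs[v].add(u)
    return nbrs
```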

  12. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.

  13. Autonomous Shepherding Behaviors of Multiple Target Steering Robots

    PubMed Central

    Lee, Wonki; Kim, DaeEun

    2017-01-01

This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among the many interesting tasks that may be performed using swarm robots, we propose a biologically-inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity of each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to their position relative to the flock. We define several independent behavior rules; each agent calculates how far it will move by summing the contributions of each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. From simulation results with multiple robots, we show that each robot performs actions for the shepherding behavior, and only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and the steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach. PMID:29186836
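
    The rule-summation idea above can be sketched for a single flocking agent: each behavior rule produces a displacement vector, and the agent moves by their weighted sum. The two rules, weights, and positions below are illustrative assumptions, not the paper's actual control law.

```python
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def sub(v, w):
    return (v[0] - w[0], v[1] - w[1])

def scale(v, s):
    return (v[0] * s, v[1] * s)

def sheep_step(pos, neighbor_pos, steerer_pos, w_cohesion=0.1, w_avoid=1.0):
    """One flocking ("sheep") agent: its displacement is the sum of
    independent rules -- cohesion toward the centroid of sensed neighbors
    and avoidance away from each sensed steering robot."""
    move = (0.0, 0.0)
    if neighbor_pos:                     # rule 1: cohesion with the flock
        cx = sum(p[0] for p in neighbor_pos) / len(neighbor_pos)
        cy = sum(p[1] for p in neighbor_pos) / len(neighbor_pos)
        move = add(move, scale(sub((cx, cy), pos), w_cohesion))
    for s in steerer_pos:                # rule 2: flee the steering agents
        move = add(move, scale(sub(pos, s), w_avoid))
    return add(pos, move)
```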

  14. The Resurrection of Malthus: space as the final escape from the law of diminishing returns

    NASA Astrophysics Data System (ADS)

    Sommers, J.; Beldavs, V.

    2017-09-01

If there is a self-sustaining space economy, which is the goal of the International Lunar Decade, then it is a subject of economic analysis. The immediate challenge of space economics is to conceptually demonstrate how a space economy could emerge and work where markets do not exist and few human agents may be involved; indeed, human agents may transact with either human or robotic agents, and robotic agents may transact with other robotic agents.

  15. Sensor module design and forward and inverse kinematics analysis of 6-DOF sorting transferring robot

    NASA Astrophysics Data System (ADS)

    Zhou, Huiying; Lin, Jiajian; Liu, Lei; Tao, Meng

    2017-09-01

To meet the demands of high-volume express parcel sorting, it is important to design a robot with multiple degrees of freedom that can sort and transfer packages. This paper uses an infrared sensor, a color sensor and a pressure sensor to receive external information, combines a motion path planned in advance with the feedback from the sensors, and implements the corresponding control program. On this basis, we design a 6-DOF robot that can grasp from multiple angles. To obtain its forward and inverse kinematics characteristics, this paper describes the coordinate frames and pose estimation using the D-H parameter method and a closed-form solution. On the basis of the forward and inverse kinematics solutions, the geometric parameters of the links and the link parameters are optimized according to the application requirements. In this way, the robot can identify its route, sort and transfer.
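
    The D-H method mentioned above chains one homogeneous transform per joint to obtain the end-effector pose. A minimal forward-kinematics sketch (the link parameters in the test are made up for a planar two-link example, not taken from the paper's robot) is:

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (4x4, row-major)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the end-effector pose matrix."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = matmul(T, dh_matrix(*row))
    return T
```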

  16. Multi-Axis Force/Torque Sensor Based on Simply-Supported Beam and Optoelectronics

    PubMed Central

    Noh, Yohan; Bimbo, Joao; Sareh, Sina; Wurdemann, Helge; Fraś, Jan; Chathuranga, Damith Suresh; Liu, Hongbin; Housden, James; Althoefer, Kaspar; Rhode, Kawal

    2016-01-01

    This paper presents a multi-axis force/torque sensor based on simply-supported beam and optoelectronic technology. The sensor’s main advantages are: (1) Low power consumption; (2) low-level noise in comparison with conventional methods of force sensing (e.g., using strain gauges); (3) the ability to be embedded into different mechanical structures; (4) miniaturisation; (5) simple manufacture and customisation to fit a wide-range of robot systems; and (6) low-cost fabrication and assembly of sensor structure. For these reasons, the proposed multi-axis force/torque sensor can be used in a wide range of application areas including medical robotics, manufacturing, and areas involving human–robot interaction. This paper shows the application of our concept of a force/torque sensor to flexible continuum manipulators: A cylindrical MIS (Minimally Invasive Surgery) robot, and includes its design, fabrication, and evaluation tests. PMID:27869689

  17. Toward controlling perturbations in robotic sensor networks

    NASA Astrophysics Data System (ADS)

    Banerjee, Ashis G.; Majumder, Saikat R.

    2014-06-01

Robotic sensor networks (RSNs), which consist of networks of sensors placed on mobile robots, are being increasingly used for environment monitoring applications. In particular, a lot of work has been done on simultaneous localization and mapping of the robots, and optimal sensor placement for environment state estimation [1]. The deployment of RSNs, however, remains challenging in harsh environments where the RSNs have to deal with significant perturbations in the forms of wind gusts, turbulent water flows, sand storms, or blizzards that disrupt inter-robot communication and individual robot stability. Hence, there is a need to be able to control such perturbations and bring the networks to desirable states with stable nodes (robots) and minimal operational performance (environment sensing). Recent work has demonstrated the feasibility of controlling the non-linear dynamics in other communication networks like emergency management systems and power grids by introducing compensatory perturbations to restore network stability and operation [2]. In this paper, we develop a computational framework to investigate the usefulness of this approach for RSNs in marine environments. Preliminary analysis shows promising performance and identifies bounds on the original perturbations within which it is possible to control the networks.

  18. Computer hardware and software for robotic control

    NASA Technical Reports Server (NTRS)

    Davis, Virgil Leon

    1987-01-01

    The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.

  19. Heuristic control of the Utah/MIT dextrous robot hand

    NASA Technical Reports Server (NTRS)

    Bass, Andrew H., Jr.

    1987-01-01

Basic hand grips and sensor interactions that a dextrous robot hand will need as part of the operation of an EVA Retriever are analyzed. What is to be done with a dextrous robot hand is examined, along with how such a complex machine might be controlled. It was assumed throughout that an anthropomorphic robot hand should perform tasks just as a human would; i.e., the most efficient approach to developing control strategies for the hand would be to model actual hand actions and do the same tasks in the same ways. Therefore, the basic hand grips that human hands perform, as well as hand grip actions, were analyzed. It was also important to examine what is termed sensor fusion: the integration of various disparate sensor feedback paths. These feedback paths can be spatially and temporally separated, as well as of different sensor types. Neural networks are seen as a means of integrating these varied sensor inputs and types. Basic heuristics of hand actions and grips were developed. These heuristics offer promise of controlling dextrous robot hands in a more natural and efficient way.

  20. A Robot Equipped with a High-Speed LSPR Gas Sensor Module for Collecting Spatial Odor Information from On-Ground Invisible Odor Sources.

    PubMed

    Yang, Zhongyuan; Sassa, Fumihiro; Hayashi, Kenshi

    2018-06-22

Improving the efficiency of detecting the spatial distribution of gas information with a mobile robot is a great challenge that requires rapid sample collection, which is largely determined by the operating speed of the gas sensors. The present work developed a robot equipped with a high-speed gas sensor module based on localized surface plasmon resonance. The sensor module is designed to sample gases from an on-ground odor source, such as a footprint material or an artificial odor marker, via fine sampling tubing. The tip of the sampling tubing was placed close to the ground to reduce the sampling time and the effect of natural gas diffusion. On-ground ethanol odor sources were detected by the robot at high resolution (i.e., 2.5 cm when the robot moved at 10 cm/s), and the reading of gas information was demonstrated experimentally. This work may help in the development of environmental sensing robots, for example for odor source mapping and multirobot systems with pheromone tracing.
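
    The reported resolution follows directly from speed and sampling: the spatial pitch of gas readings is the ground distance covered between successive sensor samples. The 0.25 s effective sampling period below is inferred from the reported 2.5 cm at 10 cm/s and is an assumption, not a figure stated in the record.

```python
def spatial_resolution(robot_speed_cm_s, sample_interval_s):
    """Spatial pitch of gas readings: ground distance covered between
    successive sensor samples."""
    return robot_speed_cm_s * sample_interval_s

# Inferred (assumption): 2.5 cm resolution at 10 cm/s implies an effective
# sampling period of about 0.25 s.
```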

  1. Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1989-09-01

    The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide understanding of characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.

  2. Robustness of a distributed neural network controller for locomotion in a hexapod robot

    NASA Technical Reports Server (NTRS)

    Chiel, Hillel J.; Beer, Randall D.; Quinn, Roger D.; Espenschied, Kenneth S.

    1992-01-01

    A distributed neural-network controller for locomotion, based on insect neurobiology, has been used to control a hexapod robot. How robust is this controller? Disabling any single sensor, effector, or central component did not prevent the robot from walking. Furthermore, statically stable gaits could be established using either sensor input or central connections. Thus, a complex interplay between central neural elements and sensor inputs is responsible for the robustness of the controller and its ability to generate a continuous range of gaits. These results suggest that biologically inspired neural-network controllers may be a robust method for robotic control.

  3. Simulation-based intelligent robotic agent for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.

    1990-01-01

    A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.

  4. A Concept of the Differentially Driven Three Wheeled Robot

    NASA Astrophysics Data System (ADS)

    Kelemen, M.; Colville, D. J.; Kelemenová, T.; Virgala, I.; Miková, L.

    2013-08-01

    The paper deals with the concept of a differentially driven three-wheeled robot. The main task for the robot is to follow a black navigation line on white ground. The robot also contains anti-collision sensors for avoiding obstacles on the track. Students learn how to deal with signals from sensors and how to control DC motors. Students work with the controller, develop the locomotion algorithm, and can attend a competition.

  5. General visual robot controller networks via artificial evolution

    NASA Astrophysics Data System (ADS)

    Cliff, David; Harvey, Inman; Husbands, Philip

    1993-08-01

    We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.

  6. Teleautonomous guidance for mobile robots

    NASA Technical Reports Server (NTRS)

    Borenstein, J.; Koren, Y.

    1990-01-01

    Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.

  7. A Biologically Inspired Cooperative Multi-Robot Control Architecture

    NASA Technical Reports Server (NTRS)

    Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)

    2002-01-01

    A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.

  8. A Stigmergic Cooperative Multi-Robot Control Architecture

    NASA Technical Reports Server (NTRS)

    Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.

    2004-01-01

    In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.

  9. Characterization of large-area pressure sensitive robot skin

    NASA Astrophysics Data System (ADS)

    Saadatzi, Mohammad Nasser; Baptist, Joshua R.; Wijayasinghe, Indika B.; Popa, Dan O.

    2017-05-01

    Sensorized robot skin has considerable promise to enhance robots' tactile perception of surrounding environments. For physical human-robot interaction (pHRI) or autonomous manipulation, a high spatial sensor density is required, typically driven by the skin location on the robot. In our previous study, a 4x4 flexible array of strain sensors was printed and packaged onto Kapton sheets and silicone encapsulants. In this paper, we are extending the surface area of the patch to larger arrays with up to 128 tactel elements. To address scalability, sensitivity, and calibration challenges, a novel electronic module, free of the traditional signal conditioning circuitry, was created. The electronic design relies on a software-based calibration scheme using high-resolution analog-to-digital converters with internal programmable gain amplifiers. In this paper, we first show the efficacy of the proposed method with a 4x4 skin array using controlled pressure tests, and then perform procedures to evaluate each sensor's characteristics such as dynamic force-to-strain property, repeatability, and signal-to-noise ratio. In order to handle larger sensor surfaces, an automated force-controlled test cycle was carried out. Results demonstrate that our approach leads to reliable and efficient methods for extracting tactile models for use in future interaction with collaborative robots.
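
    A software-based calibration of the kind described could, for instance, fit a per-tactel linear map from raw ADC counts to applied pressure. The sketch below is an illustrative least-squares fit under that assumption; the function and variable names are hypothetical, not the authors' actual scheme.

```python
def fit_linear_calibration(adc_counts, known_pressures):
    """Least-squares fit of pressure = gain * counts + offset for one tactel."""
    n = len(adc_counts)
    mean_x = sum(adc_counts) / n
    mean_y = sum(known_pressures) / n
    sxx = sum((x - mean_x) ** 2 for x in adc_counts)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(adc_counts, known_pressures))
    gain = sxy / sxx                      # slope of the best-fit line
    offset = mean_y - gain * mean_x       # intercept
    return gain, offset

# Example: three controlled-pressure test points for a single tactel.
gain, offset = fit_linear_calibration([100, 200, 300], [1.0, 2.0, 3.0])
```

    Storing one (gain, offset) pair per tactel is what lets such a design drop per-channel analog conditioning in favor of a high-resolution ADC plus software.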

  10. Active Sensing System with In Situ Adjustable Sensor Morphology

    PubMed Central

    Nurzaman, Surya G.; Culha, Utku; Brodbeck, Luzius; Wang, Liyu; Iida, Fumiya

    2013-01-01

    Background Despite the widespread use of sensors in engineering systems like robots and automation systems, the common paradigm is to have fixed sensor morphology tailored to fulfill a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. In order to cope with the challenge, it is worthy of note that biological systems show the importance of suitable sensor morphology and active sensing capability to handle different kinds of sensing tasks with particular requirements. Methodology This paper presents a robotics active sensing system which is able to adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use thermoplastic adhesive material, i.e. Hot Melt Adhesive (HMA). It will be shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach and detach mechanical structures with a variety of shape and size to the robot end effector for sensing purposes. Via active sensing capability, the robotic system utilizes the structure to physically probe an unknown target object with suitable motion and transduce the arising physical stimuli into information usable by a camera as its only built-in sensor. Conclusions/Significance The efficacy of the proposed system is verified based on two results. Firstly, it is confirmed that suitable sensor morphology and active sensing capability enables the system to sense different physical quantities, i.e. softness and temperature, with desirable sensing characteristics. Secondly, given tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to autonomously accomplish them. The way the results motivate new research directions which focus on in situ adjustment of sensor morphology will also be discussed. PMID:24416094

  11. Obstacle-avoiding robot with IR and PIR motion sensors

    NASA Astrophysics Data System (ADS)

    Ismail, R.; Omar, Z.; Suaibun, S.

    2016-10-01

    An obstacle-avoiding robot was designed, constructed and programmed, which may potentially be used for educational and research purposes. The developed robot moves in a particular direction once the infrared (IR) and passive infrared (PIR) sensors sense a signal, while avoiding the obstacles in its path. The robot can also perform desired tasks in unstructured environments without continuous human guidance. The hardware was integrated on one application board as an embedded system design. The software was developed using C++ and compiled by Arduino IDE 1.6.5. The main objective of this project is to provide simple guidelines to polytechnic students and beginners who are interested in this type of research. It is hoped that this robot could benefit students who wish to carry out research on IR and PIR sensors.

  12. Collision recognition and direction changes for small scale fish robots by acceleration sensors

    NASA Astrophysics Data System (ADS)

    Na, Seung Y.; Shin, Daejung; Kim, Jin Y.; Lee, Bae-Ho

    2005-05-01

    Typical obstacles for the group of small-scale fish robots and submersibles constructed in our lab are walls, rocks, water plants and other nearby robots. Sonar sensors are not employed, in order to keep the robot structure simple. All circuits, sensors and processor cards are contained in a box of 9 x 7 x 4 cm, except motors, fins and external covers. Therefore, image processing results are applied to avoid collisions. However, this is useful only when obstacles are located far enough away to give image processing time to detect them. Otherwise, acceleration sensors are used to detect a collision immediately after it happens. Two 2-axis acceleration sensors are employed to measure the three components of collision angles, collision magnitudes, and the angles of robot propulsion. These data are integrated to calculate the required change in propulsion direction. The angle of a collision incident upon an obstacle is the fundamental value for obtaining the direction change needed to design the following path. However, there is a significant amount of noise due to the caudal fin motor. Because the caudal fin provides the main propulsion for a fish robot, there is a periodic swinging noise at the head of the robot, which adds a random acceleration component to the acceleration data measured at the collision. We propose an algorithm which shows that MEMS-type accelerometers are very effective in providing information for direction changes in spite of this intrinsic noise after the small-scale fish robots have collided with an obstacle.
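
    As a rough sketch of such a detector (the smoothing filter, threshold value and all names below are illustrative assumptions, not the authors' algorithm): low-pass filtering can suppress the periodic caudal-fin noise before a magnitude threshold flags a collision, and the collision angle can then be recovered from the two filtered axis readings.

```python
import math

def detect_collision(ax_samples, ay_samples, alpha=0.3, threshold=1.5):
    """Flag a collision from noisy 2-axis accelerometer samples.

    Exponential smoothing attenuates the periodic swinging noise from the
    caudal-fin motor; a magnitude threshold then detects the collision, and
    atan2 of the filtered axes gives the collision angle in degrees.
    """
    fx = fy = 0.0
    for ax, ay in zip(ax_samples, ay_samples):
        fx = alpha * ax + (1 - alpha) * fx   # low-pass filter, x axis
        fy = alpha * ay + (1 - alpha) * fy   # low-pass filter, y axis
        if math.hypot(fx, fy) > threshold:   # sustained, not transient, spike
            return True, math.degrees(math.atan2(fy, fx))
    return False, None
```

    A sustained impact pushes the filtered magnitude over the threshold within a few samples, while brief fin-swing spikes are averaged away.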

  13. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    DTIC Science & Technology

    2016-07-27

    …synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces, providing a test environment where the human control of a robot agent can be experimentally validated. (Final Report: Brain Computer Interfaces for Enhanced Interactions with Mobile Robot Agents, covering 17-Sep-2013 to 16-Sep-2014, dated 27-07-2016.)

  14. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    DTIC Science & Technology

    2015-08-14

    Theoretical research was conducted on the mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems, with independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices and their operational constraints.

  15. Development of a biomimetic roughness sensor for tactile information with an elastomer

    NASA Astrophysics Data System (ADS)

    Choi, Jae-Young; Kim, Sung Joon; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, Ja Choon

    2016-04-01

    Humans use various kinds of sensory information to identify an object. When contacting an unidentified object without vision, tactile sensation provides a variety of information for perception, and it plays an important role in recognizing the shape of a surface through touch. In robotics, tactile sensation is especially meaningful: robots can perform more accurate tasks using comprehensive tactile information, and sensors made of soft materials such as silicone can be used in a variety of situations. We are therefore developing a tactile sensor with soft materials. While conventional robots operate in controlled environments, the sensory systems of living things are a good model for making robots usable in more general circumstances. For example, human skin tissue contains many kinds of mechanoreceptors, each with a different role in detecting stimulation. By mimicking these mechanoreceptors, a sensory system can be realized that is much closer to the human one. It is known that humans obtain roughness information by scanning a surface with their fingertips; during that time, subcutaneous mechanoreceptors detect vibration. In the same way, while a robot scans the surface of an object, the roughness sensor developed here detects the vibrations generated between the two contacting surfaces. In this research, a roughness sensor made of an elastomer was developed and an experiment on the perception of objects was conducted. We describe a means to compare the roughness of objects with the newly developed sensor.

  16. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current generation HSI systems have size, weight, and power limitations that prohibit their use in field-portable and/or real-time applications. Current generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery at or near real-time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors, focusing on sensor design and detection results.

  17. Embedded mobile farm robot for identification of diseased plants

    NASA Astrophysics Data System (ADS)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used in farms for the identification of diseased plants. It puts forth two of the major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, robot mechanical assembly, camera and infrared sensors has been used, built around a Mini2440 board running an embedded Linux operating system.

  18. A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2018-01-10

    In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low luminance and slippery environments is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization; instead, the interior space structure from an image and the robot orientation were assessed. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and was implemented on a consumer robot. Experiments were conducted in a low illumination condition of 0.1 lx and a carpeted environment. The robot traversed a 1.5 × 2.0 m square trajectory 20 times. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error and orientation error were found to be 0.8 m and within 1.0°, respectively.
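
    One simple way such encoder/optical-flow fusion could work, sketched here as an illustrative assumption only (the record does not give the paper's actual fusion rule), is to fall back to the AOFS reading whenever the wheel encoders disagree with it by more than a slip threshold:

```python
def fuse_distance(encoder_dist, aofs_dist, slip_threshold=0.2):
    """Fuse wheel-encoder and optical-flow distance estimates (in meters).

    On slippery floors, wheel encoders over-report distance; if they disagree
    with the afocal optical flow sensor (AOFS) by more than the threshold
    fraction, wheel slip is suspected and the AOFS reading is trusted.
    Otherwise the two estimates are simply averaged.
    """
    if encoder_dist == 0:
        return aofs_dist
    disagreement = abs(encoder_dist - aofs_dist) / abs(encoder_dist)
    if disagreement > slip_threshold:   # slip detected
        return aofs_dist
    return 0.5 * (encoder_dist + aofs_dist)
```

    A production system would more likely weight the two sources by their noise covariances (e.g., in a Kalman filter), but the fallback rule shows why a floor-referenced sensor bounds the drift that pure odometry accumulates over repeated laps.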

  19. A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion

    PubMed Central

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il “Dan”

    2018-01-01

    In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low luminance and slippery environments is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization; instead, the interior space structure from an image and the robot orientation were assessed. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and was implemented on a consumer robot. Experiments were conducted in a low illumination condition of 0.1 lx and a carpeted environment. The robot traversed a 1.5 × 2.0 m square trajectory 20 times. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error and orientation error were found to be 0.8 m and within 1.0°, respectively. PMID:29320414

  20. Sensing and data classification for a robotic meteorite search

    NASA Astrophysics Data System (ADS)

    Pedersen, Liam; Apostolopoulos, Dimi; Whittaker, William L.; Benedix, Gretchen; Rousch, Ted

    1999-01-01

    Upcoming missions to Mars and the Moon call for highly autonomous robots with the capability to perform intra-site exploration, reason about their scientific finds, and perform comprehensive on-board analysis of collected data. An ideal case for testing such technologies and robot capabilities is the robotic search for Antarctic meteorites. The successful identification and classification of meteorites depends on sensing modalities and intelligent evaluation of acquired data. Data from color imagery and spectroscopic measurements are used to identify terrestrial rocks and distinguish them from meteorites. However, because of the large number of rocks and the high cost and delay of using some of the sensors, it is necessary to eliminate as many meteorite candidates as possible using cheap long-range sensors, such as color cameras. More resource-consuming sensors are held in reserve for the more promising samples only. Bayes networks are used as the formalism for incrementally combining data from multiple sources in a statistically rigorous manner. Furthermore, they can be used to infer the utility of further sensor readings given currently known data. This information, along with cost estimates, is necessary for the sensing system to rationally schedule further sensor readings and deployments. This paper addresses issues associated with sensor selection and the implementation of an architecture for the automatic identification of rocks and meteorites from a mobile robot.
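
    The incremental Bayesian combination described can be sketched for a single binary hypothesis ("this sample is a meteorite"): each sensor reading updates the belief via Bayes' rule before the system decides whether a costlier sensor is worth deploying. The likelihood values below are illustrative, not taken from the paper.

```python
def update_posterior(prior, p_reading_given_meteorite, p_reading_given_rock):
    """One Bayesian update: fold a new sensor reading into the belief that
    the sample is a meteorite rather than a terrestrial rock."""
    numerator = prior * p_reading_given_meteorite
    return numerator / (numerator + (1.0 - prior) * p_reading_given_rock)

belief = 0.05                                  # prior: most rocks are terrestrial
belief = update_posterior(belief, 0.8, 0.3)    # cheap long-range color-camera cue
belief = update_posterior(belief, 0.9, 0.1)    # costly spectrometer reading
```

    Notice how the cheap cue alone only nudges the belief, while the spectrometer reading moves it decisively; comparing this expected belief gain against sensor cost is the basis for rational scheduling of readings.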

  1. A close inspection and vibration sensing aerial robot for steel structures with an EPM-based landing device

    NASA Astrophysics Data System (ADS)

    Takeuchi, Kazuya; Masuda, Arata; Akahori, Shunsuke; Higashi, Yoshiyuki; Miura, Nanako

    2017-04-01

    This paper proposes an aerial robot that can land on and cling to a steel structure using electric permanent magnets, to behave as a vibration sensor probe for use in vibration-based structural health monitoring. In the last decade, structural health monitoring techniques have been studied intensively to tackle the serious social issue that much of the infrastructure in advanced countries is deteriorating. In the typical structural health monitoring concept, vibration sensors such as accelerometers are installed in the structure to continuously collect the dynamic response of the operating structure and find symptoms of structural damage. It is unreasonable, however, to permanently deploy sensors on numerous infrastructures, because most of them, except those of primary importance, do not need continuous measurement and evaluation. In this study, the aerial robot plays the role of a mobile, detachable sensor unit. Design guidelines for an aerial robot that performs vibration measurement are derived from an analysis model of the robot. Experiments were carried out to evaluate the frequency response function of the acceleration measured by the robot with respect to the acceleration at the point where the robot adheres. The experimental results show that the prototype robot can measure the acceleration of the host structure accurately up to 150 Hz.

  2. Simultaneous Soft Sensing of Tissue Contact Angle and Force for Millimeter-scale Medical Robots

    PubMed Central

    Arabagi, Veaceslav; Gosline, Andrew; Wood, Robert J.; Dupont, Pierre E.

    2013-01-01

    A novel robotic sensor is proposed to measure both the contact angle and the force acting between the tip of a surgical robot and soft tissue. The sensor is manufactured using a planar lithography process that generates microchannels that are subsequently filled with a conductive liquid. The planar geometry is then molded onto a hemispherical plastic scaffolding in a geometric configuration enabling estimation of the contact angle (angle between robot tip tangent and tissue surface normal) by the rotation of the sensor around its roll axis. Contact force can also be estimated by monitoring the changes in resistance in each microchannel. Bench top experimental results indicate that, on average, the sensor can estimate the angle of contact to within ±2° and the contact force to within ±5.3 g. PMID:24241496

  3. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System

    PubMed Central

    Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware. PMID:28747883

  4. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System.

    PubMed

    Milde, Moritz B; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware.

  5. Fabrication and characterization of bending and pressure sensors for a soft prosthetic hand

    NASA Astrophysics Data System (ADS)

    Rocha, Rui Pedro; Alhais Lopes, Pedro; de Almeida, Anibal T.; Tavakoli, Mahmoud; Majidi, Carmel

    2018-03-01

    We demonstrate fabrication, characterization, and implementation of ‘soft-matter’ pressure and bending sensors for a soft robotic hand. The elastomer-based sensors are embedded in a robot finger composed of a 3D printed endoskeleton and covered by an elastomeric skin. Two types of sensors are evaluated, resistive pressure sensors and capacitive pressure sensors. The sensor is fabricated entirely out of insulating and conductive rubber, the latter composed of polydimethylsiloxane (PDMS) elastomer embedded with a percolating network of structured carbon black (CB). The sensor-integrated fingers have a simple materials architecture, can be fabricated with standard rapid prototyping methods, and are inexpensive to produce. When incorporated into a robotic hand, the CB-PDMS sensors and PDMS carrier medium function as an ‘artificial skin’ for touch and bend detection. Results show improved response with a capacitive sensor architecture, which, unlike a resistive sensor, is robust to electromechanical hysteresis, creep, and drift in the CB-PDMS composite. The sensorized fingers are integrated in an anthropomorphic hand and results for a variety of grasping tasks are presented.

  6. Shape sensing for torsionally compliant concentric-tube robots

    NASA Astrophysics Data System (ADS)

    Xu, Ran; Yurkewich, Aaron; Patel, Rajni V.

    2016-03-01

    Concentric-tube robots (CTRs) consist of a series of pre-curved flexible tubes that make up the robot structure and provide the high dexterity required for performing surgical tasks in constrained environments. This special design introduces new challenges in shape sensing, as large twisting is experienced by the torsionally compliant structure. In the literature, fiber Bragg grating (FBG) sensors are attached to needle-sized continuum robots for curvature sensing, but they are limited to obtaining bending curvatures since a straight sensor layout is utilized. For a CTR, in addition to bending curvatures, the torsion along the robot's shaft should be determined to calculate the shape and pose of the robot accurately. To solve this problem, in our earlier work, we proposed embedding FBG sensors in a helical pattern in the tube wall. The strain readings are converted to bending curvatures and torsion by a strain-curvature model. In this paper, a modified strain-curvature model is proposed that can be used in conjunction with standard shape reconstruction algorithms for shape and pose calculation. This sensing technology is evaluated for its accuracy and resolution using three FBG sensors with 1 mm sensing segments that are bonded into the helical grooves of a pre-curved Nitinol tube. The results show that this sensorized robot can obtain accurate measurements, with a resolution of 0.02 rad/m at a 100 Hz sampling rate. Further, the repeatability of the obtained measurements under loading and unloading conditions is presented and analyzed.
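
    The strain-to-curvature step can be illustrated with a generic three-sensor model (a textbook stand-in, not the paper's helical-layout model): an FBG at angular position φ_i and radial offset r reads ε_i = ε_axial + r(κ_x cos φ_i + κ_y sin φ_i), so three readings give a 3x3 linear system for the two bending curvatures and the axial strain.

```python
import math

def curvature_from_strains(strains, r, phis):
    """Solve eps_i = eps_axial + r*(kx*cos(phi_i) + ky*sin(phi_i)) for
    (eps_axial, kx, ky) by Gaussian elimination with partial pivoting.
    Generic straight-layout model, not the paper's helical model."""
    M = [[1.0, r * math.cos(p), r * math.sin(p), s] for p, s in zip(phis, strains)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[piv] = M[piv], M[i]
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            for j in range(i, 4):
                M[k][j] -= f * M[i][j]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x  # [eps_axial, kx, ky]

# Round-trip check with made-up values: kx = 2.0 1/m, ky = -1.0 1/m.
r, phis = 0.5e-3, [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
true = (1e-4, 2.0, -1.0)
eps = [true[0] + r * (true[1] * math.cos(p) + true[2] * math.sin(p)) for p in phis]
print([round(v, 6) for v in curvature_from_strains(eps, r, phis)])  # -> [0.0001, 2.0, -1.0]
```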

  7. Mapping planetary caves with an autonomous, heterogeneous robot team

    NASA Astrophysics Data System (ADS)

    Husain, Ammar; Jones, Heather; Kannan, Balajee; Wong, Uland; Pimentel, Tiago; Tang, Sarah; Daftry, Shreyansh; Huber, Steven; Whittaker, William L.

    Caves on other planetary bodies offer sheltered habitat for future human explorers and numerous clues to a planet's past for scientists. While recent orbital imagery provides exciting new details about cave entrances on the Moon and Mars, the interiors of these caves are still unknown and not observable from orbit. Multi-robot teams offer unique solutions for exploring and modeling subsurface voids during precursor missions. Robot teams that are diverse in terms of size, mobility, sensing, and capability can provide great advantages, but this diversity, coupled with inherently distinct low-level behavior architectures, makes coordination a challenge. This paper presents a framework that consists of an autonomous frontier and capability-based task generator, a distributed market-based strategy for coordinating and allocating tasks to the different team members, and a communication paradigm for seamless interaction between the different robots in the system. Robots have different sensors (in the representative robot team used for testing: 2D mapping sensors, 3D modeling sensors, or no exteroceptive sensors) and varying levels of mobility. Tasks are generated to explore, model, and take science samples. Based on an individual robot's capability and associated cost for executing a generated task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high-resolution offline modeling. The coordination approach has been field tested at a mock cave site with highly unstructured natural terrain, as well as an outdoor patio area. Initial results are promising for applicability of the proposed multi-robot framework to exploration and modeling of planetary caves.
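
    The market-based allocation step can be sketched as a single-round auction (the robot names, capabilities, and cost functions below are made up for illustration, not taken from the paper): each robot capable of a task bids its estimated cost, and the cheapest bid wins.

```python
# Minimal market-based task allocation sketch: lowest-cost capable robot wins.

def allocate(task, robots):
    """robots: list of (name, capabilities, cost_fn).  Returns winner name
    or None if no robot is capable of the task."""
    bids = [(cost(task), name) for name, caps, cost in robots if task["type"] in caps]
    return min(bids)[1] if bids else None

robots = [
    ("mapper",  {"explore", "map2d"}, lambda t: t["distance"] * 1.0),
    ("modeler", {"map3d"},            lambda t: t["distance"] * 2.0),
    ("scout",   {"explore"},          lambda t: t["distance"] * 0.5),
]
print(allocate({"type": "explore", "distance": 10.0}, robots))  # -> scout
```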

  8. Six component robotic force-torque sensor

    NASA Technical Reports Server (NTRS)

    Grahn, Allen R.; Hutchings, Brad L.; Johnston, David R.; Parsons, David C.; Wyatt, Roland F.

    1987-01-01

    The results of a two-phase contract studying the feasibility of a miniaturized six-component force-torque sensor and developing a working laboratory system are described. The principle of operation is based upon using ultrasonic pulse-echo ranging to determine the position of ultrasonic reflectors attached to a metal or ceramic cover plate. Because of the small size of the sensor, this technology may have application in robotics, to sense forces and torques at the fingertip of a robotic end effector. Descriptions are included of laboratory experiments evaluating materials and techniques for sensor fabrication and of the development of support electronics for data acquisition, computer interfacing, and operator display.

  9. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  10. Design of an auto change mechanism and intelligent gripper for the space station

    NASA Technical Reports Server (NTRS)

    Dehoff, Paul H.; Naik, Dipak P.

    1989-01-01

    Robot gripping of objects in space is inherently demanding and dangerous, and nowhere is this more clearly reflected than in the design of the robot gripper. An object which escapes the gripper in a micro-g environment is launched, not dropped. To prevent this, the gripper must have sensors and signal processing to determine that the object is properly grasped, e.g., grip points and gripping forces, and, if not, to provide information to the robot to enable closed-loop corrections to be made. The sensors and sensor strategies employed in the NASA/GSFC Split-Rail Parallel Gripper are described. Objectives and requirements are given, followed by the design of the sensor suite, sensor fusion techniques and supporting algorithms.

  11. Application requirements for Robotic Nursing Assistants in hospital environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Doelling, Kris; Lundberg, Cody L.; McNair, Mike; Shin, Jeongsik; Popa, Dan

    2016-05-01

    In this paper we report on analysis toward identifying design requirements for an Adaptive Robotic Nursing Assistant (ARNA). Specifically, the paper focuses on application requirements for ARNA, envisioned as a mobile assistive robot that can navigate hospital environments to perform chores in roles such as patient sitter and patient walker. The role of a sitter is primarily related to patient observation from a distance, and fetching objects at the patient's request, while a walker provides physical assistance for ambulation and rehabilitation. The robot will be expected to not only understand nurse and patient intent but also close the decision loop by automating several routine tasks. As a result, the robot will be equipped with sensors such as distributed pressure sensitive skins, 3D range sensors, and so on. Modular sensor and actuator hardware configured in the form of several multi-degree-of-freedom manipulators, and a mobile base are expected to be deployed in reconfigurable platforms for physical assistance tasks. Furthermore, adaptive human-machine interfaces are expected to play a key role, as they directly impact the ability of robots to assist nurses in a dynamic and unstructured environment. This paper discusses required tasks for the ARNA robot, as well as sensors and software infrastructure to carry out those tasks in the aspects of technical resource availability, gaps, and needed experimental studies.

  12. Navigation system for autonomous mapper robots

    NASA Astrophysics Data System (ADS)

    Halbach, Marc; Baudoin, Yvan

    1993-05-01

    This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is treated. The ultrasonic sensors are controlled by the path planning module. The third part concerns path planning; a simulation of a wheeled robot is also presented.

  13. Development and validation of a low-cost mobile robotics testbed

    NASA Astrophysics Data System (ADS)

    Johnson, Michael; Hayes, Martin J.

    2012-03-01

    This paper considers the design, construction and validation of a low-cost experimental robotic testbed, which allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of different mobile robotic applications, for validating theoretical as well as practical research work in the field of digital control, mobile robotics, graphical programming and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real-time using the overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and associated tracking system, such as radial lens distortion and the selection of robot identifier templates are clearly addressed. The testbed performance is quantified and several experiments involving LEGO Mindstorm NXT and Merlin System MiaBot robots are discussed.
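
    One of the problems called out above, radial lens distortion in the overhead camera, is conventionally handled with the Brown radial model; the sketch below (coefficients purely illustrative) distorts a normalized image point and then recovers it by fixed-point iteration, the usual practical inverse.

```python
def distort(x, y, k1, k2=0.0):
    """Brown radial model on normalized image coordinates:
    x_d = x*(1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1, k2=0.0, iters=10):
    """Invert the model by fixed-point iteration (common practice)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

xd, yd = distort(0.3, -0.2, k1=-0.15)     # barrel distortion pulls points inward
xu, yu = undistort(xd, yd, k1=-0.15)
print(round(xu, 6), round(yu, 6))  # -> 0.3 -0.2
```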

  14. Durable Tactile Glove for Human or Robot Hand

    NASA Technical Reports Server (NTRS)

    Butzer, Melissa; Diftler, Myron A.; Huber, Eric

    2010-01-01

    A glove containing force sensors has been built as a prototype of tactile sensor arrays to be worn on human hands and anthropomorphic robot hands. The force sensors of this glove are mounted inside, in protective pockets; as a result of this and other design features, the present glove is more durable than earlier models.

  15. Agent independent task planning

    NASA Technical Reports Server (NTRS)

    Davis, William S.

    1990-01-01

    Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.

  16. Path planning in GPS-denied environments via collective intelligence of distributed sensor networks

    NASA Astrophysics Data System (ADS)

    Jha, Devesh K.; Chattopadhyay, Pritthi; Sarkar, Soumik; Ray, Asok

    2016-05-01

    This paper proposes a framework for reactive goal-directed navigation without global positioning facilities in unknown dynamic environments. A mobile sensor network is used for localising regions of interest for path planning of an autonomous mobile robot. The underlying theory is an extension of a generalised gossip algorithm that has been recently developed in a language-measure-theoretic setting. The algorithm has been used to propagate local decisions of target detection over a mobile sensor network and thus, it generates a belief map for the detected target over the network. In this setting, an autonomous mobile robot may communicate only with a few mobile sensing nodes in its own neighbourhood and localise itself relative to the communicating nodes with bounded uncertainties. The robot makes use of the knowledge based on the belief of the mobile sensors to generate a sequence of way-points, leading to a possible goal. The estimated way-points are used by a sampling-based motion planning algorithm to generate feasible trajectories for the robot. The proposed concept has been validated by numerical simulation on a mobile sensor network test-bed and a Dubins car-like robot.
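
    The gossip mechanism can be caricatured in a few lines (topology and values below are invented, and this is plain averaging rather than the paper's language-measure-theoretic formulation): each node repeatedly mixes its target-detection belief with its neighbours' average, so a detection at one node diffuses into a network-wide belief map.

```python
# Toy gossip-style belief propagation over a 4-node line network.

def gossip_step(belief, neighbours, weight=0.5):
    new = {}
    for n, b in belief.items():
        avg = sum(belief[m] for m in neighbours[n]) / len(neighbours[n])
        new[n] = (1 - weight) * b + weight * avg   # convex mix keeps beliefs in [0, 1]
    return new

neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
belief = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}   # only node 0 detected the target
for _ in range(20):
    belief = gossip_step(belief, neighbours)
print(belief[3] > 0.0)  # -> True: the detection has reached the far node
```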

  17. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
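
    The ITD cue underlying the localization above can be illustrated with the generic cross-correlation method (the paper's adaptive, spike-based algorithm is more elaborate); the signals and the 3-sample delay below are synthetic.

```python
import random

def itd_lag(left, right, max_lag):
    """Lag (in samples) at which the right channel best matches the left."""
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left)) if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=corr)

random.seed(0)
sig = [random.uniform(-1, 1) for _ in range(200)]   # white-noise source
delay = 3                                 # right ear hears the source 3 samples late
left, right = sig, [0.0] * delay + sig[:-delay]
print(itd_lag(left, right, max_lag=10))  # -> 3
```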

  18. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  19. Automatic monitoring of vibration welding equipment

    DOEpatents

    Spicer, John Patrick; Chakraborty, Debejyo; Wincek, Michael Anthony; Wang, Hui; Abell, Jeffrey A; Bracey, Jennifer; Cai, Wayne W

    2014-10-14

    A vibration welding system includes vibration welding equipment having a welding horn and anvil, a host device, a check station, and a robot. The robot moves the horn and anvil via an arm to the check station. Sensors, e.g., temperature sensors, are positioned with respect to the welding equipment. Additional sensors are positioned with respect to the check station, including a pressure-sensitive array. The host device, which monitors a condition of the welding equipment, measures signals via the sensors positioned with respect to the welding equipment when the horn is actively forming a weld. The robot moves the horn and anvil to the check station, activates the check station sensors at the check station, and determines a condition of the welding equipment by processing the received signals. Acoustic, force, temperature, displacement, amplitude, and/or attitude/gyroscopic sensors may be used.

  20. Control of a Quadcopter Aerial Robot Using Optic Flow Sensing

    NASA Astrophysics Data System (ADS)

    Hurd, Michael Brandon

    This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can provide a robot the ability to fly in global positioning system (GPS) denied environments, such as indoor environments. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot, where an optic flow algorithm is applied to provide odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm are capable of gathering and processing images at 250 frames/sec, and the sensor package weighs 2.5 g with a footprint of 6 cm2. The odometry value from the optic flow sensor is then used as feedback information in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to different types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
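
    The control loop described above reduces to a textbook PID structure; the sketch below closes the loop on a toy integrator plant at the sensor's 250 Hz frame rate. The gains and the plant model are illustrative, not taken from the thesis.

```python
class PID:
    """Discrete PID controller with a fixed sample time dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: the heading (as reported by optic-flow odometry) integrates the
# control command, one step per camera frame (1/250 s).
pid, heading = PID(kp=2.0, ki=0.1, kd=0.05, dt=1 / 250), 0.0
for _ in range(2000):
    heading += pid.update(10.0, heading) * (1 / 250)
print(round(heading, 2))  # settles near the 10.0 setpoint after 8 s
```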

  1. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots

    PubMed Central

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. Robustness and stability are observed during a long walk test in both indoor and outdoor environments. PMID:24152933
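
    The event-based filtering idea reads, in a one-dimensional caricature (all noise values and the motion model are invented): run the odometry prediction every cycle, but query the expensive global sensor only when the predicted error variance crosses the limit.

```python
import random
random.seed(1)

x_est, p = 0.0, 0.0              # state estimate and its error variance
q, r, p_max = 0.05, 0.01, 0.5    # process noise, measurement noise, event limit
x_true, updates = 0.0, 0

for step in range(100):
    v = 1.0                                       # commanded velocity (odometry)
    x_true += v + random.gauss(0, q ** 0.5)       # true motion with process noise
    x_est += v                                    # predict
    p += q
    if p > p_max:                                 # event: variance limit exceeded
        z = x_true + random.gauss(0, r ** 0.5)    # query the global sensor
        k = p / (p + r)                           # Kalman gain
        x_est += k * (z - x_est)
        p *= (1 - k)
        updates += 1

print(updates, "global-sensor updates out of 100 steps")
```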

  3. Robotic Thumb Assembly

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Goza, S. Michael (Inventor)

    2013-01-01

    An improved robotic thumb for a robotic hand assembly is provided. According to one aspect of the disclosure, improved tendon routing in the robotic thumb provides control of four degrees of freedom with only five tendons. According to another aspect of the disclosure, one of the five degrees of freedom of a human thumb is replaced in the robotic thumb with a permanent twist in the shape of a phalange. According to yet another aspect of the disclosure, a position sensor includes a magnet having two portions shaped as circle segments with different center points. The magnet provides a linearized output from a Hall effect sensor.

  4. Indoor Navigation using Direction Sensor and Beacons

    NASA Technical Reports Server (NTRS)

    Shields, Joel; Jeganathan, Muthu

    2004-01-01

    A system for indoor navigation of a mobile robot includes (1) modulated infrared beacons at known positions on the walls and ceiling of a room and (2) a cameralike sensor, comprising a wide-angle lens with a position-sensitive photodetector at the focal plane, mounted in a known position and orientation on the robot. The system also includes a computer running special-purpose software that processes the sensor readings to obtain the position and orientation of the robot in all six degrees of freedom in a coordinate system embedded in the room.

  5. Magneto-inductive skin sensor for robot collision avoidance: A new development

    NASA Technical Reports Server (NTRS)

    Chauhan, D. S.; Dehoff, Paul H.

    1989-01-01

    Safety is a primary concern for robots operating in space. The tri-mode sensor addresses that concern by employing a collision avoidance/management skin around the robot arms. This rf-based skin sensor is at present dual-mode (proximity and tactile). The third mode, pyroelectric, will complement the other two. The proximity mode permits the robot to sense an intruding object, to range the object, and to detect the edges of the object. The tactile mode permits the robot to sense when it has contacted an object and where on the arm it has made contact, and provides a three-dimensional image of the shape of the contact impression. The pyroelectric mode will be added to permit the robot arm to detect the proximity of a hot object and to add sensing redundancy to the two other modes. The rf modes of the sensing skin are presented. These modes employ a highly efficient magnetic material (amorphous metal) in the sensing technique. The result is a flexible sensor array that uses a primarily inductive configuration to permit both capacitive and magneto-inductive sensing of objects, thus optimizing performance in both proximity and tactile modes with the same sensing skin. The fundamental operating principles, design particulars, and theoretical models are provided to aid in the description and understanding of this sensor. Test results are also given.

  6. Fusing Laser Reflectance and Image Data for Terrain Classification for Small Autonomous Robots

    DTIC Science & Technology

    2014-12-01

    limit us to low power, lightweight sensors, and a maximum range of approximately 5 meters. Contrast these robot characteristics to typical terrain...classification work which uses large autonomous ground vehicles with sensors mounted high above the ground. Terrain classification for small autonomous...into predefined classes [10], [11]. However, wheeled vehicles offer the ability to use non-traditional sensors such as vibration sensors [12] and

  7. Reflexive obstacle avoidance for kinematically-redundant manipulators

    NASA Technical Reports Server (NTRS)

    Karlen, James P.; Thompson, Jack M., Jr.; Farrell, James D.; Vold, Havard I.

    1989-01-01

    Dexterous telerobots incorporating 17 or more degrees of freedom operating under coordinated, sensor-driven computer control will play important roles in future space operations. They will also be used on Earth in assignments like fire fighting, construction and battlefield support. A real time, reflexive obstacle avoidance system, seen as a functional requirement for such massively redundant manipulators, was developed using arm-mounted proximity sensors to control manipulator pose. The project involved a review and analysis of alternative proximity sensor technologies for space applications, the development of a general-purpose algorithm for synthesizing sensor inputs, and the implementation of a prototypical system for demonstration and testing. A 7 degree of freedom Robotics Research K-2107HR manipulator was outfitted with ultrasonic proximity sensors as a testbed, and Robotics Research's standard redundant motion control algorithm was modified such that an object detected by sensor arrays located at the elbow effectively applies a force to the manipulator elbow, normal to the axis. The arm is repelled by objects detected by the sensors, causing the robot to steer around objects in the workspace automatically while continuing to move its tool along the commanded path without interruption. The mathematical approach formulated for synthesizing sensor inputs can be employed for redundant robots of any kinematic configuration.
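
    The repulsion described above can be sketched with the classic potential-field form (a generic stand-in for the Robotics Research algorithm; the influence distance, gain, and sensor geometry are invented): each proximity reading inside an influence radius contributes a velocity along the sensor's outward normal, growing as the range shrinks.

```python
def repulsive_velocity(ranges_and_normals, d0=0.5, gain=0.1):
    """ranges_and_normals: list of (range_m, (nx, ny, nz)) per elbow sensor,
    where the normal points away from the detected object.  Sensors beyond
    the d0 influence distance contribute nothing."""
    v = [0.0, 0.0, 0.0]
    for d, n in ranges_and_normals:
        if d < d0:
            mag = gain * (1.0 / d - 1.0 / d0)   # classic potential-field term
            v = [vi + mag * ni for vi, ni in zip(v, n)]
    return v

# One object 0.2 m away on the +x side of the elbow (push in -x); a second
# object at 0.9 m is outside the influence radius and is ignored.
v = repulsive_velocity([(0.2, (-1.0, 0.0, 0.0)), (0.9, (0.0, 1.0, 0.0))])
print([round(vi, 3) for vi in v])  # -> [-0.3, 0.0, 0.0]
```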

  8. Can robots be responsible moral agents? And why should we care?

    NASA Astrophysics Data System (ADS)

    Sharkey, Amanda

    2017-07-01

    This principle highlights the need for humans to accept responsibility for robot behaviour, and in that respect it is commendable. However, it raises further questions about legal and moral responsibility. The issues considered here are (i) the reasons for assuming that humans, and not robots, are responsible agents; (ii) whether it is sufficient to design robots to comply with existing laws and human rights; and (iii) the implications, for robot deployment, of the assumption that robots are not morally responsible.

  9. Algorithms of walking and stability for an anthropomorphic robot

    NASA Astrophysics Data System (ADS)

    Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.

    2017-09-01

    Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement, so-called patterns, each of which can be considered as an agent of some multi-agent system [1]. To control the AP-601 robot, an information and communication infrastructure has been created that represents a multi-agent system allowing the development of algorithms for individual movement patterns and their execution in the system as a set of independently executed and interacting agents. Algorithms for lateral movement of the anthropomorphic robot AP-601 series, with active stability provided by the stability pattern, are presented.

  10. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace and then allow the sensor, mounted on the robot, to measure the point of intersection of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
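
    The straight-line constraint at the heart of the closed-loop method can be emulated with a total-least-squares line fit: the RMS orthogonal distance of the measured intersection points to their best-fit line scores how well a candidate kinematic model explains the data. This is a generic 2-D stand-in for the paper's formulation, with made-up points.

```python
import math

def line_fit_rms(points):
    """RMS orthogonal distance of 2-D points to their total-least-squares line.
    The smaller eigenvalue of the 2x2 covariance matrix equals the mean
    squared distance to that line."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam_min = tr / 2 - math.sqrt(max(tr * tr / 4 - det, 0.0))
    return math.sqrt(max(lam_min, 0.0))

collinear = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # good model
bent      = [(0.0, 0.0), (1.0, 2.0), (2.0, 3.5), (3.0, 6.5)]   # miscalibrated
print(round(line_fit_rms(collinear), 6), round(line_fit_rms(bent), 3) > 0)  # -> 0.0 True
```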

  11. A magneto-sensitive skin for robots in space

    NASA Technical Reports Server (NTRS)

    Chauhan, D. S.; Dehoff, P. H.

    1991-01-01

    The development of a robot arm proximity-sensing skin that can sense intruding objects is described. The purpose of the sensor is to prevent the robot from colliding with objects in space, including human beings. Eventually a tri-mode system is envisioned, including proximity, tactile, and thermal sensing. To date the primary emphasis has been on the proximity sensor, which evolved from one based on magneto-inductive principles to the current design, which is based on a capacitive-reflector system. The capacitive sensing element, backed by a reflector driven at the same voltage and in phase with the sensor, is used to reflect field lines away from the grounded robot toward the intruding object. This results in an increased sensing range of up to 12 in. with the reflector on, compared with only 1 in. with it off. It is believed that this design advances the state of the art in capacitive sensor performance.

  12. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B. S.; Mahalingam, S.

    1992-01-01

    A capacitance-based proximity sensor, the 'Capaciflector' (Vranish 92), has been developed at the Goddard Space Flight Center of NASA. We previously investigated the use of this sensor for avoiding and maneuvering around unexpected objects (Mahalingam 92). The approach developed there helps in executing collision-free gross motions. Another important aspect of robot motion planning is fine motion planning. Manipulator motion planning can be classified into two groups at the task level: gross motion planning and fine motion planning. We use the term 'gross motion' where the major degrees of freedom of the robot execute large motions, for example, the motion of a robot in a pick-and-place operation. We use the term 'fine motion' for motions in which the major degrees of freedom move little, and far less than the minor degrees of freedom, such as in inserting a peg in a hole. In this report we describe our experiments and experiences in this area.

  13. Synthetic Ion Channels and DNA Logic Gates as Components of Molecular Robots.

    PubMed

    Kawano, Ryuji

    2018-02-19

    A molecular robot is a next-generation biochemical machine that imitates the actions of microorganisms. It is made of biomaterials such as DNA, proteins, and lipids. Three prerequisites have been proposed for the construction of such a robot: sensors, intelligence, and actuators. This Minireview focuses on recent research on synthetic ion channels and DNA computing technologies, which are viewed as potential candidate components of molecular robots. Synthetic ion channels, which are embedded in artificial cell membranes (lipid bilayers), sense ambient ions or chemicals and import them. These artificial sensors are useful components for molecular robots with bodies consisting of a lipid bilayer because they enable the interface between the inside and outside of the molecular robot to function as gates. After the signal molecules arrive inside the molecular robot, they can operate DNA logic gates, which perform computations. These functions will be integrated into the intelligence and sensor sections of molecular robots. Soon, it may become possible to assemble these molecular machines to operate en masse as microrobots and play an active role in environmental monitoring and in vivo diagnosis or therapy. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots.

    PubMed

    Gutiérrez, Marco A; Manso, Luis J; Pandya, Harit; Núñez, Pedro

    2017-02-11

    Object detection and classification have countless applications in human-robot interacting systems. They are necessary skills for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the sensor's performance under conditions of low resolution and high light variation, using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.

  15. Multi-Sensor Person Following in Low-Visibility Scenarios

    PubMed Central

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform person following in low-visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506

  16. Mobile robotic sensors for perimeter detection and tracking.

    PubMed

    Clark, Justin; Fierro, Rafael

    2007-02-01

    Mobile robot/sensor networks have emerged as tools for environmental monitoring, search and rescue, exploration and mapping, evaluation of civil infrastructure, and military operations. These networks consist of many sensors each equipped with embedded processors, wireless communication, and motion capabilities. This paper describes a cooperative mobile robot network capable of detecting and tracking a perimeter defined by a certain substance (e.g., a chemical spill) in the environment. Specifically, the contributions of this paper are twofold: (i) a library of simple reactive motion control algorithms and (ii) a coordination mechanism for effectively carrying out perimeter-sensing missions. The decentralized nature of the methodology implemented could potentially allow the network to scale to many sensors and to reconfigure when adding/deleting sensors. Extensive simulation results and experiments verify the validity of the proposed cooperative control scheme.
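A minimal sketch of the reactive perimeter-tracking behavior described above: a hypothetical bang-bang rule over a simulated circular spill, where the robot turns one way when its substance sensor reads "inside" and the other way when "outside", so it chatters along the boundary. The field model and all parameters are illustrative assumptions, not the authors' control library.

```python
import math

def inside_spill(x, y, radius=1.0):
    """Simulated substance sensor: True when the robot is over the spill."""
    return math.hypot(x, y) < radius

def track_perimeter(steps=1500, speed=0.02, turn=0.2):
    """Bang-bang boundary following: turn left (toward the spill) when
    outside, turn right (away) when inside; the robot converges to a
    chattering limit cycle along the perimeter."""
    x, y, heading = 1.05, 0.0, math.pi / 2  # start just outside, heading north
    for _ in range(steps):
        heading += turn if not inside_spill(x, y) else -turn
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

x, y = track_perimeter()
print(f"final distance from spill centre: {math.hypot(x, y):.2f}")  # hovers near the spill radius
```

The band around the boundary shrinks with the turning radius (speed/turn), which is one reason such simple reactive rules scale to many cheap robots.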

  17. Multi-sensor person following in low-visibility scenarios.

    PubMed

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform person following in low-visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment.

  18. Optimal accelerometer placement on a robot arm for pose estimation

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.

    2017-05-01

    The performance of robots to carry out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with joint angle encoders that are used to estimate the position and orientation (or the pose) of the end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every joint, or at any. In this paper we study the use of inertial sensors, in particular accelerometers, placed on the robot to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement, and signal-to-noise ratio (SNR) are included in our study of their effects on robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated the pose estimation gains resulting from using increasingly large numbers of sensors. Monte Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with an increasing number of accelerometers, whereas for an SNR model that scales inversely with the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.
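The fixed-SNR trend above can be illustrated with a deliberately simplified Monte Carlo sketch: a static link whose angle is estimated from the gravity direction seen by n noisy accelerometers. Averaging the per-sensor estimates makes the RMS error fall roughly as 1/sqrt(n). This hypothetical model echoes the paper's fixed-SNR finding but does not reproduce its arm model or error metric.

```python
import numpy as np

def pose_error(n_sensors, trials=4000, phi=0.7, noise=0.05, rng=None):
    """Monte-Carlo RMS error of a link angle estimated from n accelerometers.
    Each sensor measures the gravity direction [sin(phi), cos(phi)] in the
    link frame, corrupted by i.i.d. Gaussian noise (fixed per-sensor SNR)."""
    rng = rng or np.random.default_rng(0)
    ax = np.sin(phi) + noise * rng.standard_normal((trials, n_sensors))
    ay = np.cos(phi) + noise * rng.standard_normal((trials, n_sensors))
    phi_hat = np.arctan2(ax, ay).mean(axis=1)  # average the per-sensor estimates
    return np.sqrt(np.mean((phi_hat - phi) ** 2))

for n in (1, 4, 16):
    print(f"{n:2d} accelerometers: RMS error {pose_error(n):.4f} rad")
```

With a footprint-dependent SNR (noise growing as sensors shrink to fit more of them), the same averaging argument runs in reverse, which is consistent with the paper's second result.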

  19. Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.

    PubMed

    Felt, Wyatt; Chin, Khai Yi; Remy, C David

    2017-09-01

    This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.

  20. Complete low-cost implementation of a teleoperated control system for a humanoid robot.

    PubMed

    Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena

    2013-01-24

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
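The accelerometer filtering step in this record can be sketched with a minimal scalar Kalman filter: the tilt is modeled as a slowly varying (random-walk) state observed through a noisy accelerometer. The process and measurement variances here are illustrative assumptions, not the authors' tuning.

```python
import random

def kalman_smooth(measurements, q=1e-4, r=0.04):
    """Scalar Kalman filter: state = tilt angle, random-walk process model
    (variance q), noisy accelerometer measurements (variance r)."""
    x, p = 0.0, 1.0          # initial estimate and its variance
    for z in measurements:
        p += q               # predict: tilt assumed roughly constant
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the measurement residual
        p *= (1 - k)         # posterior variance shrinks after each update
    return x

random.seed(1)
true_tilt = 0.30             # rad, simulated back-accelerometer tilt
zs = [true_tilt + random.gauss(0, 0.2) for _ in range(200)]
print(f"filtered tilt: {kalman_smooth(zs):.3f} rad")
```

The filtered value, rather than the raw reading, would then feed the balance controller, trading a little lag for much lower noise.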

  1. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly misaligned from its expected pose, TRAC may give incorrect feedback for the control of the robot. For example, if the operator thinks the camera is right side up but it is actually upside down, the camera feedback will tell the operator to move in the wrong direction. The automatic calibration algorithm requires the operator to translate and rotate the robot by arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.

  2. Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot

    PubMed Central

    Cela, Andrés; Yebes, J. Javier; Arroyo, Roberto; Bergasa, Luis M.; Barea, Rafael; López, Elena

    2013-01-01

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. PMID:23348029

  3. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration

    PubMed Central

    Losada, Diego P.; Fernández, Joaquín L.; Paz, Enrique; Sanz, Rafael

    2017-01-01

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system for communication among hardware elements or subsystems that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results show that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead. PMID:28467381

  4. Application of historical mobility testing to sensor-based robotic performance

    NASA Astrophysics Data System (ADS)

    Willoughby, William E.; Jones, Randolph A.; Mason, George L.; Shoop, Sally A.; Lever, James H.

    2006-05-01

    The USA Engineer Research and Development Center (ERDC) has conducted on-/off-road experimental field testing with full-sized and scale-model military vehicles for more than fifty years. Some 4000 acres of local terrain are available for tailored field evaluations or verification/validation of future robotic designs in a variety of climatic regimes. Field testing and data collection procedures, as well as techniques for quantifying terrain in engineering terms, have been developed and refined into algorithms and models for predicting vehicle-terrain interactions and the resulting forces or speeds of military-sized vehicles. Based on recent experiments with Matilda, Talon, and Pacbot, these predictive capabilities appear to be relevant to most robotic systems currently in development. Utilization of current testing capabilities with sensor-based vehicle drivers, or use of the procedures for terrain quantification from sensor data, would immediately apply some fifty years of historical knowledge to the development, refinement, and implementation of future robotic systems. Additionally, translation of sensor-collected terrain data into engineering terms would allow assessment of robotic performance prior to deployment of the actual system and ensure maximum system performance in the theater of operation.

  5. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration.

    PubMed

    Losada, Diego P; Fernández, Joaquín L; Paz, Enrique; Sanz, Rafael

    2017-05-03

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system for communication among hardware elements or subsystems that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results show that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead.
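A classical CAN 2.0 data field carries at most 8 bytes, so architectures like this one must pack sensor readings into compact binary payloads. The layout below is a hypothetical illustration of that constraint, not the paper's actual protocol; the field names and IDs are assumptions.

```python
import struct

# Hypothetical payload layout for one 8-byte CAN data field:
# uint8 node id, uint8 sequence counter, then 3 x int16 readings
# (e.g. the three axes of an inertial measurement unit), little-endian.
CAN_FMT = "<BBhhh"

def pack_frame(node, seq, readings):
    """Pack one sensor sample into a CAN-sized binary payload."""
    assert len(readings) == 3
    payload = struct.pack(CAN_FMT, node, seq, *readings)
    assert len(payload) <= 8  # CAN 2.0 data field limit
    return payload

def unpack_frame(payload):
    """Inverse of pack_frame: recover node id, counter and readings."""
    node, seq, *readings = struct.unpack(CAN_FMT, payload)
    return node, seq, readings

frame = pack_frame(node=0x12, seq=7, readings=[-1500, 42, 30000])
print(unpack_frame(frame))  # (18, 7, [-1500, 42, 30000])
```

The sequence counter lets a receiver detect dropped frames, which matters when modules can join or leave the bus at runtime as described above.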

  6. Hand Gesture Based Wireless Robotic Arm Control for Agricultural Applications

    NASA Astrophysics Data System (ADS)

    Kannan Megalingam, Rajesh; Bandhyopadhyay, Shiva; Vamsy Vivek, Gedela; Juned Rahi, Muhammad

    2017-08-01

    One of the major challenges in agriculture is harvesting. It is very hard and sometimes even unsafe for workers to go to each plant and pluck fruits. Robotic systems are increasingly combined with new technologies to automate or semi-automate labour-intensive work, such as grape harvesting. In this work we propose a semi-automatic method to aid in harvesting fruits and hence increase productivity per man hour. A robotic arm fixed to a rover roams in the orchard, and the user can control it remotely using a hand glove fitted with various sensors. These sensors can position the robotic arm remotely to harvest the fruits. In this paper we discuss the design of the sensor-fitted hand glove, the design of the 4-DoF robotic arm, and the wireless control interface. In addition, the setup of the system and its testing and evaluation under lab conditions are also presented.

  7. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. We motivate the use of a multi-modal tactile sensory system that combines static and dynamic force sensor arrays with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of the approach. PMID:24743158

  8. Autonomous Mobile Platform for Research in Cooperative Robotics

    NASA Technical Reports Server (NTRS)

    Daemi, Ali; Pena, Edward; Ferguson, Paul

    1998-01-01

    This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.

  9. Adaptive Sampling of Time Series During Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (i.e., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches.
Most common GP models are stationary, i.e., the covariance relationships are time-invariant. In such cases, information gain is independent of previously collected data, and the optimal solution can always be computed in advance. Information-optimal sampling of a stationary GP time series thus reduces to even spacing, and such models are not appropriate for tracking localized anomalies. Additionally, GP model inference can be computationally expensive.
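The claim that information-optimal sampling of a stationary GP reduces to even spacing can be checked numerically: the posterior variance of a GP depends only on the sample locations, not the observed values, so greedy maximum-variance selection repeatedly picks the midpoint of the largest gap. A sketch with an assumed RBF kernel and unit interval (the kernel, length scale, and grid are illustrative choices, not the paper's model):

```python
import numpy as np

def posterior_var(x_obs, x_star, length=0.2, noise=1e-6):
    """Posterior variance of a zero-mean GP with RBF kernel at x_star,
    given observation *locations* x_obs (values never enter the formula)."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length**2))
    K = k(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = k(x_star, x_obs)
    # var(x*) = k(x*, x*) - k_*^T K^{-1} k_*  (k(x*, x*) = 1 for RBF)
    return 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)

x_obs = np.array([0.0, 1.0])        # samples already collected
grid = np.linspace(0, 1, 201)       # candidate sample times
for _ in range(3):                  # greedy max-information-gain sampling
    pick = grid[np.argmax(posterior_var(x_obs, grid))]
    print(f"next sample at t = {pick:.3f}")
    x_obs = np.append(x_obs, pick)
```

The greedy picks bisect the remaining gaps (0.5, then 0.25 and 0.75), illustrating why a nonstationary covariance is needed before adaptive, data-dependent sampling can pay off.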

  10. Fire Extinguisher Robot Using Ultrasonic Camera and Wi-Fi Network Controlled with Android Smartphone

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Purba, H. A.; Efendi, S.; Fahmi, F.

    2017-03-01

    Fire disasters can occur at any time and result in high losses. Fire fighters often cannot access the source of a fire due to building damage and very high temperatures, or due to the presence of explosive materials. Given such constraints and the high risk involved in handling a fire, a technological breakthrough that can help fight fires is necessary. This paper proposes the use of robots, controlled from a distance, to extinguish fires and thereby reduce the risk. A fire extinguisher robot was assembled to extinguish fires using a water pump as the actuator. The robot movement was controlled using Android smartphones via Wi-Fi networks, utilizing the Wi-Fi module contained in the robot. User commands were sent to the microcontroller on the robot and then translated into robot movement. We used an ATmega8 as the main microcontroller in the robot. The robot was equipped with a camera and ultrasonic sensors. The camera provided feedback to the user and helped in finding the source of the fire. Ultrasonic sensors were used to avoid collisions during movement. The feedback provided by the camera on the robot was displayed on the smartphone screen. In the lab testing environment, the robot can move following user commands such as turn right, turn left, forward and backward. The ultrasonic sensors worked well, stopping the robot at a distance of less than 15 cm. In the fire test, the robot performed the task of extinguishing the fire properly.
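The collision-avoidance behavior reported above reduces to a guard between the ultrasonic range reading and the teleoperation command. This is a hypothetical sketch (the function names and the assumption of a single front-facing sensor are ours; only the 15 cm threshold comes from the record):

```python
STOP_DISTANCE_CM = 15  # from the reported test: the robot halts below this range

def drive_command(user_cmd, ultrasonic_cm):
    """Pass the teleoperation command through unless the forward-facing
    ultrasonic sensor reports an obstacle closer than the stop distance."""
    if user_cmd == "forward" and ultrasonic_cm < STOP_DISTANCE_CM:
        return "stop"
    return user_cmd

print(drive_command("forward", 40))   # forward
print(drive_command("forward", 12))   # stop
print(drive_command("backward", 12))  # backward (front sensor does not block reversing)
```

Running this guard on the microcontroller rather than the smartphone keeps the safety behavior intact even if the Wi-Fi link lags or drops.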

  11. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    PubMed Central

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. 
Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles. PMID:29046651

  12. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.

    PubMed

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. 
Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.

  13. Sensor-based fine telemanipulation for space robotics

    NASA Technical Reports Server (NTRS)

    Andrenucci, M.; Bergamasco, M.; Dario, P.

    1989-01-01

    The control of a multifingered slave hand in order to accurately exert arbitrary forces and impart small movements to a grasped object is, at present, a knotty problem in teleoperation. Although a number of articulated robotic hands have been proposed in the recent past for dexterous manipulation in autonomous robots, the possible use of such hands as slaves in teleoperated manipulation is hindered by the present lack of sensors in those hands, and (even if those sensors were available) by the inherent difficulty of transmitting to the master operator the complex sensations elicited by such sensors at the slave level. An analysis of different problems related to sensor-based telemanipulation is presented. The general sensory-system requirements for dexterous slave manipulators are pointed out, and a practical sensory system set-up for the developed robotic system is described. The problem of feeding back to the human master operator stimuli that can be interpreted by the central nervous system as originating during real dexterous manipulation is then considered. Finally, some preliminary work is discussed, aimed at developing an instrumented glove, incorporating Kevlar tendons and tension sensors, designed purposely for commanding the master operation.

  14. A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring

    PubMed Central

    Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio

    2016-01-01

This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State-of-the-art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user's point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regard to the particular requirements of the application, and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505

  15. Finger-Shaped GelForce: Sensor for Measuring Surface Traction Fields for Robotic Hand.

    PubMed

    Sato, K; Kamiyama, K; Kawakami, N; Tachi, S

    2010-01-01

It is believed that the use of haptic sensors to measure the magnitude, direction, and distribution of a force will enable a robotic hand to perform dexterous operations. Therefore, we develop a new type of finger-shaped haptic sensor using GelForce technology. GelForce is a vision-based sensor that can be used to measure the distribution of force vectors, or surface traction fields. The simple structure of the GelForce enables us to develop a compact finger-shaped GelForce for the robotic hand. The GelForce, which is developed on the basis of elastic theory, calculates surface traction fields using a conversion equation. However, this conversion equation cannot be solved analytically when the elastic body of the sensor has a complicated shape, such as that of a finger. Therefore, we propose an observational method and construct a prototype of the finger-shaped GelForce. Using this prototype, we evaluate the basic performance of the finger-shaped GelForce. We then conduct a field test by performing grasping operations with a robotic hand. The results of this test show that, using the observational method, the finger-shaped GelForce can be successfully used in a robotic hand.

  16. Robotic System For Greenhouse Or Nursery

    NASA Technical Reports Server (NTRS)

    Gill, Paul; Montgomery, Jim; Silver, John; Heffelfinger, Neil; Simonton, Ward; Pease, Jim

    1993-01-01

    Report presents additional information about robotic system described in "Robotic Gripper With Force Control And Optical Sensors" (MFS-28537). "Flexible Agricultural Robotics Manipulator System" (FARMS) serves as prototype of robotic systems intended to enhance productivities of agricultural assembly-line-type facilities in large commercial greenhouses and nurseries.

  17. An implementation of sensor-based force feedback in a compact laparoscopic surgery robot.

    PubMed

    Lee, Duk-Hee; Choi, Jaesoon; Park, Jun-Woo; Bach, Du-Jin; Song, Seung-Jun; Kim, Yoon-Ho; Jo, Yungho; Sun, Kyung

    2009-01-01

Despite the rapid progress in the clinical application of laparoscopic surgery robots, many shortcomings have not yet been fully overcome, one of which is the lack of reliable haptic feedback. This study implemented a force-feedback structure in our compact laparoscopic surgery robot. The surgery robot is a master-slave configuration robot with 5 DOF (degrees of freedom) corresponding to laparoscopic surgical motion. The force feedback was implemented with torque sensors and controllers installed in the pitch joints of the master and slave robots. A simple dynamic model of the action-reaction force in the slave robot was used, through which the reflective force was estimated and fed back to the master robot. The results showed that the system model could be identified with significant fidelity and that force feedback at the master robot was feasible. However, qualitative human assessment of the fed-back force showed only a limited level of object discrimination ability. Further developments are underway with this result as a framework.

  18. Planar and finger-shaped optical tactile sensors for robotic applications

    NASA Technical Reports Server (NTRS)

    Begej, Stefan

    1988-01-01

    Progress is described regarding the development of optical tactile sensors specifically designed for application to dexterous robotics. These sensors operate on optical principles involving the frustration of total internal reflection at a waveguide/elastomer interface and produce a grey-scale tactile image that represents the normal (vertical) forces of contact. The first tactile sensor discussed is a compact, 32 x 32 planar sensor array intended for mounting on a parallel-jaw gripper. Optical fibers were employed to convey the tactile image to a CCD camera and microprocessor-based image analysis system. The second sensor had the shape and size of a human fingertip and was designed for a dexterous robotic hand. It contained 256 sensing sites (taxels) distributed in a dual-density pattern that included a tactile fovea near the tip measuring 13 x 13 mm and containing 169 taxels. The design and construction details of these tactile sensors are presented, in addition to photographs of tactile imprints.

  19. A Neural Network Approach for Building An Obstacle Detection Model by Fusion of Proximity Sensors Data

    PubMed Central

    Peralta, Emmanuel; Vargas, Héctor; Hermosilla, Gabriel

    2018-01-01

Proximity sensors are broadly used in mobile robots for obstacle detection. The traditional calibration process for this kind of sensor can be time-consuming because it is usually done by identification in a manual and repetitive way. The resulting obstacle detection models are usually nonlinear functions that can differ for each proximity sensor attached to the robot. In addition, the model is highly dependent on the type of sensor (e.g., ultrasonic or infrared), on changes in light intensity, and on properties of the obstacle such as shape, colour, and surface texture, among others. That is why in some situations it can be useful to gather all the measurements provided by different kinds of sensors in order to build a unique model that estimates the distances to the obstacles around the robot. This paper presents a novel approach for obtaining an obstacle detection model based on the fusion of sensor data and automatic calibration using artificial neural networks. PMID:29495338
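As an illustration of the idea only (not the authors' implementation), a small neural network can be trained to fuse synthetic readings from two dissimilar proximity sensors into a single distance estimate. The sensor models, network size, and training parameters below are all invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_readings(d):
    """Invented sensor models: ultrasonic ~ linear in distance,
    infrared ~ inverse response, both with additive noise."""
    u = d + rng.normal(0.0, 0.02, size=d.shape)                 # ultrasonic (m)
    r = 1.0 / (d + 0.1) + rng.normal(0.0, 0.05, size=d.shape)   # infrared (a.u.)
    return np.stack([u, r], axis=1)

d_train = rng.uniform(0.2, 2.0, size=400)        # true distances (m)
X = simulate_readings(d_train)
y = d_train.reshape(-1, 1)

# Standardize inputs for stable training.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

# One-hidden-layer MLP trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(Xs @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                    # fused distance estimate
    err = pred - y
    gW2 = h.T @ err / len(y); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)        # backprop through tanh
    gW1 = Xs.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mae = np.abs((np.tanh(Xs @ W1 + b1) @ W2 + b2) - y).mean()
print(f"train MAE: {float(mae):.3f} m")
```

The single fused model replaces per-sensor nonlinear calibration curves, which is the benefit the abstract describes.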

  20. The Impacts of Industrial Robots

    DTIC Science & Technology

    1981-11-01

plastics, and strain gauges are used to measure very small forces at a number of points on the robot's end effector. Except for the simplest on-off...devices, tactile sensors are not yet found on commercially available robots. Forces are sensed by using strain gauges or piezoelectric sensors to...tools: deburring, drilling, grinding, milling, routing machines ii. plastic materials forming and injection machines iii. metal die casting machines iv

  1. Development of microsized slip sensors using dielectric elastomer for incipient slippage

    NASA Astrophysics Data System (ADS)

    Hwang, Do-Yeon; Kim, Baek-chul; Cho, Han-Jeong; Li, Zhengyuan; Lee, Youngkwan; Nam, Jae-Do; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, J. C.

    2014-04-01

Humanoid robot hands have received significant attention in various fields of study. For a dexterous robot hand, a slip-detecting tactile sensor is essential to grasping objects safely. Moreover, slip sensors are useful in robotics and prosthetics for improving precise control during manipulation tasks. In this paper, a sensor with a biomimetic structure modeled on human skin is fabricated. We report a resistance-type tactile sensor that can detect slip on the surface of the sensor structure. The newly developed resistance slip sensor uses acrylonitrile-butadiene rubber (NBR) as a dielectric substrate and carbon particles as the electrode material. The presented sensor device has fingerprint-like structures whose role is similar to that of the human fingerprint. Slip can be measured because deformation of the sensor structure changes the resistance by forming a new conductive route. To verify the effectiveness of the proposed slip detection, experiments using a prototype of the resistance slip sensor were conducted with a slip-detection algorithm, and slip was successfully detected. In this paper, we discuss the slip detection properties of the sensor and its detection principle.

  2. Three-dimensional sensor system using multistripe laser and stereo camera for environment recognition of mobile robots

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.

    2002-10-01

In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots for serving humans or as industrial robots replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. Structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
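The optical triangulation step used by such structured-light sensors can be sketched as a ray-plane intersection: the camera ray through a stripe pixel is intersected with the calibrated laser plane. The pinhole intrinsics and plane parameters below are assumed values for illustration, not the paper's calibration:

```python
import numpy as np

# Assumed pinhole camera intrinsics (focal lengths and principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed laser plane n . X = c, expressed in the camera frame.
n = np.array([1.0, 0.0, 0.2])
c = 0.5

def triangulate(u, v):
    """Back-project pixel (u, v) to a viewing ray and intersect it
    with the laser plane: solve n . (t * ray) = c for t."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction
    t = c / (n @ ray)
    return t * ray                                   # 3D point, camera frame

# Sanity check: take a point on the plane, project it, then recover it.
x0, y0 = 0.3, 0.1
X_true = np.array([x0, y0, (c - n[0] * x0 - n[1] * y0) / n[2]])
p = K @ X_true
u, v = p[0] / p[2], p[1] / p[2]
X_rec = triangulate(u, v)
print(X_rec)
```

With both cameras modeled this way, the epipolar constraint mentioned in the abstract narrows the search for corresponding stripe points to a line in the second image.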

  3. Nap environment control considering respiration rate and music tempo by using sensor agent robot

    NASA Astrophysics Data System (ADS)

    Nakaso, Sayaka; Mita, Akira

    2015-03-01

We propose a system that controls a nap environment considering respiration rates and music tempo by using a sensor agent robot. The proposed system consists of two sub-systems. The first sub-system measures respiration rates using optical flow. We conducted preparatory experiments to verify the accuracy of this sub-system. The experimental results showed that it can measure respiration rates accurately across several positional relationships, although the accuracy could be affected by clothes, movements and light. The second sub-system is a music play sub-system that chooses music with a tempo corresponding to the respiration rates measured by the first sub-system. We conducted experiments to verify the effectiveness of this music play sub-system. The experimental results showed the effectiveness of varying music tempo based on the respiration rates during a nap. We also demonstrated this system in a real environment: a subject entered the room followed by ebioNα, and when the subject was considered to be sleeping, ebioNα started measuring respiration rates and controlling music based on them. As a result, we showed that this system could be realized. As a next step, we would like to develop this system into a nap environment control system for use in offices. To realize this, we need to improve the first sub-system by removing disturbances from the respiration measurements, and to upgrade the music play sub-system considering the number of tunes, the kinds of music, and the timing of music changes.

  4. Relative hardness measurement of soft objects by a new fiber optic sensor

    NASA Astrophysics Data System (ADS)

    Ahmadi, Roozbeh; Ashtaputre, Pranav; Abou Ziki, Jana; Dargahi, Javad; Packirisamy, Muthukumaran

    2010-06-01

The measurement of the relative hardness of soft objects enables replication of human finger tactile perception capabilities. This ability has many applications not only in the automation and robotics industries but also in many other areas, such as aerospace and robotic surgery, where a robotic tool interacts with a soft contact object. One practical example of interaction between a solid robotic instrument and a soft contact object occurs during robotically-assisted minimally invasive surgery. Measuring the relative hardness of bio-tissue in contact with the robotic instrument helps surgeons perform this type of surgery more reliably. In the present work, a new optical sensor is proposed to measure the relative hardness of contact objects. To measure the hardness of a contact object, as a human finger does, a tactile sensor must apply a small force/deformation to the object. The applied force and resulting deformation are then recorded at certain points to enable the relative hardness measurement. In this work, force/deformation data for a contact object are recorded at certain points by the proposed optical sensor, and the recorded data are used to measure the relative hardness of soft objects. Based on the proposed design, an experimental setup was developed and experimental tests were performed to measure the relative hardness of elastomeric materials. Experimental results verify the ability of the proposed optical sensor to measure the relative hardness of elastomeric samples.

  5. Using multiple sensors for printed circuit board insertion

    NASA Technical Reports Server (NTRS)

    Sood, Deepak; Repko, Michael C.; Kelley, Robert B.

    1989-01-01

    As more and more activities are performed in space, there will be a greater demand placed on the information handling capacity of people who are to direct and accomplish these tasks. A promising alternative to full-time human involvement is the use of semi-autonomous, intelligent robot systems. To automate tasks such as assembly, disassembly, repair and maintenance, the issues presented by environmental uncertainties need to be addressed. These uncertainties are introduced by variations in the computed position of the robot at different locations in its work envelope, variations in part positioning, and tolerances of part dimensions. As a result, the robot system may not be able to accomplish the desired task without the help of sensor feedback. Measurements on the environment allow real time corrections to be made to the process. A design and implementation of an intelligent robot system which inserts printed circuit boards into a card cage are presented. Intelligent behavior is accomplished by coupling the task execution sequence with information derived from three different sensors: an overhead three-dimensional vision system, a fingertip infrared sensor, and a six degree of freedom wrist-mounted force/torque sensor.

  6. Do infants perceive the social robot Keepon as a communicative partner?

    PubMed

    Peca, Andreea; Simut, Ramona; Cao, Hoang-Long; Vanderborght, Bram

    2016-02-01

This study investigates whether infants perceive an unfamiliar agent, such as the robot Keepon, as a social agent after observing an interaction between the robot and a human adult. 23 infants, aged 9-17 months, were exposed, in a first phase, either to a contingent interaction between the active robot and an active human adult, or to an interaction between an active human adult and the non-active robot; in a second phase, infants were offered the opportunity to initiate a turn-taking interaction with Keepon. The measured variables were: (1) the number of social initiations the infant directed toward the robot, and (2) the number of anticipatory orientations of attention to the agent whose turn follows in the conversation. The results indicate a significantly higher level of initiations in the interactive robot condition compared to the non-active robot condition, while the difference between the frequencies of anticipations of turn-taking behaviors was not significant. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Intelligent Autonomy for Unmanned Surface and Underwater Vehicles

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Woodward, Gail

    2011-01-01

As Autonomous Underwater Vehicle (AUV) and Autonomous Surface Vehicle (ASV) platforms mature in endurance and reliability, a natural evolution will occur towards longer, more remote autonomous missions. This evolution will require the development of key capabilities that allow these robotic systems to perform a high level of on-board decision making, which would otherwise be performed by human operators. With more decision-making capability, less a priori knowledge of the area of operations would be required, as these systems would be able to sense and adapt to changing environmental conditions such as unknown topography, currents, obstructions, bays, harbors, islands, and river channels. Existing vehicle sensors would be dual-use; that is, they would be utilized for the primary mission, which may be mapping or hydrographic reconnaissance, as well as for autonomous hazard avoidance, route planning, and bathymetric-based navigation. This paper describes a tightly integrated instantiation of an autonomous agent called CARACaS (Control Architecture for Robotic Agent Command and Sensing), developed at JPL (Jet Propulsion Laboratory), that was designed to address many of the issues for survivable ASV/AUV control and to provide adaptive mission capabilities. The results of some on-water tests with US Navy technology test platforms are also presented.

  8. A Review of Artificial Lateral Line in Sensor Fabrication and Bionic Applications for Robot Fish.

    PubMed

    Liu, Guijie; Wang, Anyi; Wang, Xinbao; Liu, Peng

    2016-01-01

The lateral line is a system of sense organs that helps fish maneuver in dark environments. The artificial lateral line (ALL) imitates the structure of the lateral line in fish and provides an invaluable means for underwater sensing technology and robot fish control. This paper reviews the ALL, including sensor fabrication and applications to robot fish. The biophysics of the lateral line is first introduced to aid understanding of lateral line structure and function. The design and fabrication of ALL sensors based on various sensing principles are then presented. ALL systems are collections of sensors together with their carrier and control circuitry; their structure and hydrodynamic detection capabilities are reviewed. Finally, further research trends and open problems of the ALL are discussed.

  9. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots.

    PubMed

    Strait, Megan K; Floerke, Victoria A; Ju, Wendy; Maddox, Keith; Remedios, Jessica D; Jung, Malte F; Urry, Heather L

    2017-01-01

Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an "uncanny valley"-a phenomenon in which highly humanlike entities provoke aversion in human observers-has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N agents = 60) to conduct an experimental test (N participants = 72) of the uncanny valley's existence and of the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity and, even more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding, suggesting that positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness.

  10. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    NASA Astrophysics Data System (ADS)

    Zheng, Li; Yi, Ruan

    2009-11-01

Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws; it is designed to observe, grasp, walk, roll, turn, rise, and descend. This paper is oriented toward 100% reliable obstacle detection and identification, and toward sensor fusion to increase the level of autonomy. An embedded computer based on the PC/104 bus is the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit, as high-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed in Visual C++ was developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Experimental results demonstrate that the robot can execute the navigation and inspection tasks.

  11. A small, cheap, and portable reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey

    2005-05-01

    While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.

  12. Learning Probabilistic Features for Robotic Navigation Using Laser Sensors

    PubMed Central

    Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N^2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377

  13. Learning probabilistic features for robotic navigation using laser sensors.

    PubMed

    Aznar, Fidel; Pujol, Francisco A; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N^2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.

  14. Self-evaluation on Motion Adaptation for Service Robots

    NASA Astrophysics Data System (ADS)

    Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru

We propose a self-evaluation method for motion adaptation that allows service robots to adapt to environmental changes. Motions such as walking, dancing, and demonstration are described as time-series patterns, optimized for the architecture of the robot and for a certain surrounding environment; in an unknown operating environment, the robots cannot accomplish their tasks. We propose an autonomous motion generation technique based on heuristic search over histories of internal sensor values, in which new motion patterns are explored in the unknown operating environment based on self-evaluation. The robot has prepared motions that accomplish its tasks under the designed environment, and the internal sensor values observed under the designed environment with these prepared motions capture the results of interaction with the environment. The self-evaluation is based on the difference in internal sensor values between the designed environment and the unknown operating environment. The proposed method modifies the motions so as to synchronize the interaction results in both environments: new motion patterns are generated to maximize the self-evaluation function without external information such as run length, the global position of the robot, or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.

  15. "I spy, with my little sensor": fair data handling practices for robots between privacy, copyright and security

    NASA Astrophysics Data System (ADS)

    Schafer, Burkhard; Edwards, Lilian

    2017-07-01

    The paper suggests an amendment to Principle 4 of ethical robot design, and a demand for "transparency by design". It argues that while misleading vulnerable users as to the nature of a robot is a serious ethical issue, other forms of intentionally deceptive or unintentionally misleading aspects of robotic design pose challenges that are on the one hand more universal and harmful in their application, on the other more difficult to address consistently through design choices. The focus will be on transparent design regarding the sensory capacities of robots. Intuitive, low-tech but highly efficient privacy preserving behaviour is regularly dependent on an accurate understanding of surveillance risks. Design choices that hide, camouflage or misrepresent these capacities can undermine these strategies. However, formulating an ethical principle of "sensor transparency" is not straightforward, as openness can also lead to greater vulnerability and with that security risks. We argue that the discussion on sensor transparency needs to be embedded in a broader discussion of "fair data handling principles" for robots that involve issues of privacy, but also intellectual property rights such as copyright.

  16. PADF RF localization experiments with multi-agent caged-MAV platforms

    NASA Astrophysics Data System (ADS)

    Barber, Christopher; Gates, Miguel; Selmic, Rastko; Al-Issa, Huthaifa; Ordonez, Raul; Mitra, Atindra

    2011-06-01

This paper provides a summary of preliminary RF direction finding results generated within an AFOSR-funded testbed facility recently developed at Louisiana Tech University. This facility, denoted the Louisiana Tech University Micro-Aerial Vehicle/Wireless Sensor Network (MAVSeN) Laboratory, has recently acquired a number of state-of-the-art MAV platforms that enable us to analyze, design, and test some of our recent results in the area of multi-platform position-adaptive direction finding (PADF) [1] [2] for localization of RF emitters in challenging embedded multipath environments. The sections of this paper include a description of the MAVSeN Laboratory and preliminary results from implementing the PADF algorithm on mobile platforms. This novel approach to multi-platform RF direction finding is based on iterative path-loss-based (i.e., path loss exponent) metric estimates measured across multiple platforms, used to develop a control law that robotically/intelligently adapts (i.e., self-adjusts) the location of each distributed, cooperative platform. The body of this paper summarizes our recent results on PADF and discusses state-of-the-art sensor mote technologies as applied to the development of a sensor-integrated caged-MAV platform for PADF applications. We also discuss recent experimental results that incorporate sample approaches to real-time single-platform data pruning, as part of a discussion of potential approaches to refining the basic PADF technique to integrate and perform distributed self-sensitivity and self-consistency analysis with distributed robotic/intelligent features. These techniques are extracted in analytical form from a parallel study denoted "PADF RF Localization Criteria for Multi-Model Scattering Environments". The focus here is on developing and reporting specific approaches to self-sensitivity and self-consistency within this experimental PADF framework via the exploitation of specific single-agent caged-MAV trajectories that are unique to this experiment set.

  17. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    PubMed Central

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-01-01

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986
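A NARX-style classifier regresses the current gait phase on time-delayed sensor inputs and past outputs. As an illustration only (the record does not specify the lag orders or network architecture), a minimal sketch of building such tapped-delay regressor vectors in Python, with hypothetical lag orders `nu` and `ny`:

```python
def narx_regressors(u, y, nu=2, ny=2):
    """Build NARX-style regressor vectors from sequences.

    u  -- exogenous input sequence (e.g. a joint angular-velocity signal)
    y  -- past output sequence (e.g. previously classified gait phases)
    Each regressor is [y[t-1]..y[t-ny], u[t-1]..u[t-nu]]; these vectors
    would then be fed to a neural network (MLP or NARX).
    """
    start = max(nu, ny)
    feats = []
    for t in range(start, len(u)):
        lagged_y = [y[t - k] for k in range(1, ny + 1)]
        lagged_u = [u[t - k] for k in range(1, nu + 1)]
        feats.append(lagged_y + lagged_u)
    return feats
```

The regressor for each time step combines recent sensor history with recent classifier output, which is what distinguishes NARX from a purely feedforward (MLP) formulation.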

  18. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots.

    PubMed

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-10-30

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.

  19. A Decentralized Framework for Multi-Agent Robotic Systems

    PubMed Central

    2018-01-01

    Over the past few years, decentralization of multi-agent robotic systems has become an important research area. These systems do not depend on a central control unit, which enables the control and assignment of distributed, asynchronous and robust tasks. However, in some cases, the network communication process between robotic agents is overlooked, and this creates a dependency for each agent to maintain a permanent link with nearby units to be able to fulfill its goals. This article describes a communication framework, where each agent in the system can leave the network or accept new connections, sending its information based on the transfer history of all nodes in the network. To this end, each agent needs to comply with four processes to participate in the system, plus a fifth process for data transfer to the nearest nodes that is based on Received Signal Strength Indicator (RSSI) and data history. To validate this framework, we use differential robotic agents and a monitoring agent to generate a topological map of an environment with the presence of obstacles. PMID:29389849
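The fifth process described above, choosing which nearby node receives data based on Received Signal Strength Indicator (RSSI) and transfer history, can be sketched as a simple scoring rule. The weights, the RSSI normalization range, and the node record fields below are illustrative assumptions, not the article's actual protocol:

```python
def pick_relay(nodes, w_rssi=0.5, w_hist=0.5):
    """Select the best relay node by combining signal strength and history.

    Each node is a dict with 'id', 'rssi' (dBm), and delivery history
    ('delivered' successes out of 'attempts'). Field names and weights
    are hypothetical, for illustration only.
    """
    def score(n):
        # Map RSSI from a typical -90..-40 dBm range onto [0, 1].
        sig = min(max((n['rssi'] + 90) / 50.0, 0.0), 1.0)
        # Historical delivery rate; unknown history scores neutrally.
        hist = n['delivered'] / n['attempts'] if n['attempts'] else 0.5
        return w_rssi * sig + w_hist * hist
    return max(nodes, key=score)['id']
```

Blending link quality with delivery history lets an agent prefer a slightly weaker link that has proven reliable over a strong link that drops data.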

  20. Exhaustive geographic search with mobile robots along space-filling curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spires, S.V.; Goldsmith, S.Y.

    1998-03-01

    Swarms of mobile robots can be tasked with searching a geographic region for targets of interest, such as buried land mines. The authors assume that the individual robots are equipped with sensors tuned to the targets of interest, that these sensors have limited range, and that the robots can communicate with one another to enable cooperation. How can a swarm of cooperating sensate robots efficiently search a given geographic region for targets in the absence of a priori information about the targets' locations? Many of the obvious approaches are inefficient or lack robustness. One efficient approach is to have the robots traverse a space-filling curve. For many geographic search applications, this method is energy-frugal, highly robust, and provides guaranteed coverage in a finite time that decreases as the reciprocal of the number of robots sharing the search task. Furthermore, it minimizes the amount of robot-to-robot communication needed for the robots to organize their movements. This report presents some preliminary results from applying the Hilbert space-filling curve to geographic search by mobile robots.
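The search strategy assigns each robot a contiguous segment of a Hilbert curve, so consecutive cells are always adjacent and full coverage is guaranteed. A minimal sketch using the standard Hilbert index-to-coordinate conversion on an n-by-n grid (n a power of two); the even split among robots is an illustrative choice, not necessarily the report's allocation scheme:

```python
def hilbert_d2xy(n, d):
    """Convert distance d along the Hilbert curve to (x, y) on an
    n x n grid (n a power of two). Standard bit-manipulation algorithm."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def partition_curve(n, num_robots):
    """Split the curve into near-equal contiguous segments, one per robot."""
    total = n * n
    seg = total // num_robots
    paths = []
    for r in range(num_robots):
        lo = r * seg
        hi = total if r == num_robots - 1 else lo + seg
        paths.append([hilbert_d2xy(n, d) for d in range(lo, hi)])
    return paths
```

Because each step along the curve moves to an adjacent cell, a robot following its segment needs no path planning between cells, and the time to cover the region scales as 1/k for k robots, matching the report's claim.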

  1. Modelling and precision of the localization of the robotic mobile platforms for constructions with laser tracker and SmartTrack sensor

    NASA Astrophysics Data System (ADS)

    Dima, M.; Francu, C.

    2016-08-01

    This paper presents a way to expand the field of use of the laser tracker and SmartTrack sensor, a localization device lately used for localizing the end effector of industrial robots, to the localization of mobile construction robots. The paper presents the equipment along with its characteristics, and determines the relationships for the localization coordinates by comparison with the forward kinematics of an industrial robot's spherical arm (a positioning mechanism in spherical coordinates) and an orientation mechanism with three revolute axes. At the end of the paper, the accuracy of the mobile robot's localization is analysed.

  2. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in application to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI users' intentions are decoded from their brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an Internet of Things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents through symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.

  3. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in application to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI users' intentions are decoded from their brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an Internet of Things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents through symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538

  4. System Design and Locomotion of Superball, an Untethered Tensegrity Robot

    NASA Technical Reports Server (NTRS)

    Sabelhaus, Andrew P.; Bruce, Jonathan; Caluwaerts, Ken; Manovi, Pavlo; Firoozi, Roya Fallah; Dobi, Sarah; Agogino, Alice M.; Sunspiral, Vytas

    2015-01-01

    The Spherical Underactuated Planetary Exploration Robot ball (SUPERball) is an ongoing project within NASA Ames Research Center's Intelligent Robotics Group and the Dynamic Tensegrity Robotics Lab (DTRL). The current SUPERball is the first full prototype of this tensegrity robot platform, eventually destined for space exploration missions. This work, building on prior published discussions of individual components, presents the fully-constructed robot. Various design improvements are discussed, as well as testing results of the sensors and actuators that illustrate system performance. Basic low-level motor position controls are implemented and validated against sensor data, which show SUPERball to be uniquely suited for highly dynamic state trajectory tracking. Finally, SUPERball is shown in a simple example of locomotion. This implementation of a basic motion primitive shows SUPERball in untethered control.

  5. Cooperating mobile robots

    DOEpatents

    Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.; Byrne, Raymond H.

    2004-02-03

    A miniature mobile robot provides a relatively inexpensive mobile robot. A mobile robot for searching an area provides a way for multiple mobile robots to search in cooperating teams. A robotic system with a team of mobile robots communicating information among each other provides a way to locate a source cooperatively. A mobile robot with a sensor, a communication system, and a processor provides a way to execute a strategy for searching an area.

  6. Sensitive and Flexible Polymeric Strain Sensor for Accurate Human Motion Monitoring

    PubMed Central

    Khan, Hassan; Kottapalli, Ajay; Asadnia, Mohsen

    2018-01-01

    Flexible electronic devices offer the capability to integrate and adapt with the human body. These devices are mountable on surfaces with various shapes, which allows us to attach them to clothes or directly onto the body. This paper suggests a facile fabrication strategy via electrospinning to develop a stretchable and sensitive poly(vinylidene fluoride) (PVDF) nanofibrous strain sensor for human motion monitoring. A complete characterization of the single PVDF nanofiber has been performed. The change in charge generated by the electrospun PVDF strain sensor was employed as a parameter to control the finger motion of the robotic arm. As a proof of concept, we developed a smart glove with five integrated sensors to detect finger motion and transfer it to a robotic hand. Our results show that the proposed strain sensors are able to detect tiny finger motions and successfully drive the robotic hand. PMID:29389851
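Using a sensor's charge signal as a control parameter for a finger joint typically requires a per-finger calibration. As an illustrative sketch only (the paper does not specify its mapping), a least-squares linear fit from sensor readout to a bend-angle command:

```python
def fit_linear(xs, ys):
    """Least-squares line ys ~ a*xs + b, mapping raw sensor readout
    (e.g. charge signal) to a joint-angle command. Purely illustrative:
    the real sensor response may need a nonlinear model."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def readout_to_angle(reading, a, b):
    """Apply the calibration to a live sensor reading."""
    return a * reading + b
```

A glove with five such sensors would carry five (a, b) pairs, one per finger, fitted from a short calibration routine of known bend poses.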

  7. Adaptation of sensor morphology: an integrative view of perception from biologically inspired robotics perspective

    PubMed Central

    Nurzaman, Surya G.

    2016-01-01

    Sensor morphology, the morphology of a sensing mechanism which plays the role of shaping the desired response from physical stimuli from the surroundings to generate signals usable as sensory information, is one of the key common aspects of sensing processes. This paper presents a structured review of research on bioinspired sensor morphology implemented in robotic systems, and discusses the fundamental design principles. Based on this literature review, we propose two key arguments: first, owing to its synthetic nature, the biologically inspired robotics approach is a unique and powerful methodology for understanding the role of sensor morphology and how it can evolve and adapt to its task and environment. Second, taking an integrative view of perception by looking into multidisciplinary and overarching mechanisms of sensor morphology adaptation across biology and engineering enables us to extract relevant design principles that are important to extend our understanding of the unfinished concepts in sensing and perception. PMID:27499843

  8. STARR: shortwave-targeted agile Raman robot for the detection and identification of emplaced explosives

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.

    2014-05-01

    In order to combat the threat of emplaced explosives (land mines, etc.), ChemImage Sensor Systems (CISS) has developed a multi-sensor, robot-mounted system capable of identification and confirmation of potential threats. The system, known as STARR (Shortwave-infrared Targeted Agile Raman Robot), utilizes shortwave infrared spectroscopy for the identification of potential threats, combined with a visible short-range standoff Raman hyperspectral imaging (HSI) system for material confirmation. The entire system is mounted on a Talon UGV (Unmanned Ground Vehicle), giving the sensor an increased area search rate and reducing the risk of injury to the operator. The Raman HSI system utilizes a fiber array spectral translator (FAST) for the acquisition of high-quality Raman chemical images, allowing for increased sensitivity and improved specificity. An overview of the design and operation of the system is presented, along with initial detection results from the fusion sensor.

  9. On the Use of a Low-Cost Thermal Sensor to Improve Kinect People Detection in a Mobile Robot

    PubMed Central

    Susperregi, Loreto; Sierra, Basilio; Castrillón, Modesto; Lorenzo, Javier; Martínez-Otzeta, Jose María; Lazkano, Elena

    2013-01-01

    Detecting people is a key capability for robots that operate in populated environments. In this paper, we have adopted a hierarchical approach that combines classifiers created using supervised learning in order to identify whether a person is in the view-scope of the robot or not. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform. The set of sensors is set up by combining the rich data source offered by a Kinect sensor, which provides vision and depth at low cost, with a thermopile array sensor. Experimental results carried out with a mobile platform in a manufacturing shop floor and in a science museum have shown that the false positive rate achieved using any single cue is drastically reduced. The performance of our algorithm improves on other well-known approaches, such as C4 and histograms of oriented gradients (HOG). PMID:24172285

  10. Mapping From an Instrumented Glove to a Robot Hand

    NASA Technical Reports Server (NTRS)

    Goza, Michael

    2005-01-01

    An algorithm has been developed to solve the problem of mapping from (1) a glove instrumented with joint-angle sensors to (2) an anthropomorphic robot hand. Such a mapping is needed to generate control signals to make the robot hand mimic the configuration of the hand of a human attempting to control the robot. The mapping problem is complicated by uncertainties in sensor locations caused by variations in sizes and shapes of hands and variations in the fit of the glove. The present mapping algorithm is robust in the face of these uncertainties, largely because it includes a calibration sub-algorithm that inherently adapts the mapping to the specific hand and glove, without need for measuring the hand and without regard for goodness of fit. The algorithm utilizes a forward-kinematics model of the glove derived from documentation provided by the manufacturer of the glove. In this case, forward-kinematics model signifies a mathematical model of the glove fingertip positions as functions of the sensor readings. More specifically, given the sensor readings, the forward-kinematics model calculates the glove fingertip positions in a Cartesian reference frame nominally attached to the palm. The algorithm also utilizes an inverse-kinematics model of the robot hand. In this case, inverse-kinematics model signifies a mathematical model of the robot finger-joint angles as functions of the robot fingertip positions. Again, more specifically, the inverse-kinematics model calculates the finger-joint commands needed to place the fingertips at specified positions in a Cartesian reference frame that is attached to the palm of the robot hand and that nominally corresponds to the Cartesian reference frame attached to the palm of the glove. 
    Initially, because of the aforementioned uncertainties, the glove fingertip positions calculated by the forward-kinematics model in the glove Cartesian reference frame cannot be expected to match the robot fingertip positions in the robot-hand Cartesian reference frame. A calibration must be performed to make the glove and robot-hand fingertip positions correspond more precisely. The calibration procedure involves a few simple hand poses designed to provide well-defined fingertip positions. One of the poses is a fist. In each of the other poses, a finger touches the thumb. The calibration sub-algorithm uses the sensor readings from these poses to modify the kinematical models to make the two sets of fingertip positions agree more closely.
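The mapping pipeline described above runs glove sensor readings through a forward-kinematics model to obtain fingertip positions, then through the robot hand's inverse-kinematics model to obtain joint commands. A minimal sketch for a single finger modeled as a two-link planar chain; the link lengths and the planar simplification are illustrative assumptions, not the actual glove or robot-hand models:

```python
import math

def fk_2link(q1, q2, l1=0.04, l2=0.03):
    """Forward kinematics: joint angles (rad) -> fingertip (x, y) in the
    palm frame, for a planar two-link finger with link lengths l1, l2 (m)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def ik_2link(x, y, l1=0.04, l2=0.03):
    """Inverse kinematics: fingertip (x, y) -> joint angles, choosing the
    elbow-down branch (q2 >= 0). Clamps for points just outside reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```

In the article's scheme, glove FK produces the fingertip target and robot-hand IK produces the motor commands; the calibration step effectively adjusts the parameters of these two models so the round trip lands the robot fingertip where the human fingertip is.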

  11. Development of an unmanned agricultural robotics system for measuring crop conditions for precision aerial application

    USDA-ARS's Scientific Manuscript database

    An Unmanned Agricultural Robotics System (UARS) is acquired, rebuilt with the desired hardware, and operated in both classrooms and the field. The UARS includes a crop height sensor, crop canopy analyzer, normalized difference vegetative index (NDVI) sensor, multispectral camera, and hyperspectral radiometer...

  12. Development of wrist rehabilitation robot and interface system.

    PubMed

    Yamamoto, Ikuo; Matsui, Miki; Inagawa, Naohiro; Hachisuka, Kenji; Wada, Futoshi; Hachisuka, Akiko; Saeki, Satoru

    2015-01-01

    The authors have developed a practical wrist rehabilitation robot for hemiplegic patients. It consists of a mechanical rotation unit, a sensor, a grip, and a computer system. A myoelectric sensor is used to monitor extensor carpi radialis longus/brevis and flexor carpi radialis muscle activity during training. The robot can initiate training through the myoelectric sensors, using a biological signal detector and processor, so that patients can undergo effective extension and flexion training in a motivated condition. In addition, a both-wrist system has been developed for mirror-effect training, the most effective function of the system, so that autonomous training using both wrists is possible. Furthermore, a user-friendly screen interface with easily recognizable touch panels has been developed to provide effective training for patients. The developed robot is small and easy to carry. The developed interface system is effective in motivating patients during training. The effectiveness of the robot system has been verified in hospital trials.

  13. Conference on Space and Military Applications of Automation and Robotics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.

  14. Pre-shaping of the Fingertip of Robot Hand Covered with Net Structure Proximity Sensor

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Suzuki, Yosuke; Hasegawa, Hiroaki; Ming, Aiguo; Ishikawa, Masatoshi; Shimojo, Makoto

    To achieve skillful tasks with multi-fingered robot hands, many researchers have been working on sensor-based control of such hands. Vision sensors and tactile sensors are indispensable for these tasks; however, the correctness of the information from the vision sensors decreases as a robot hand approaches the object to be grasped, because of occlusion. This research aims to achieve seamless detection for reliable grasping by use of proximity sensors: correcting the positional error of the hand in the vision-based approach, and bringing the fingertip into contact in a posture suited to effective tactile sensing. In this paper, we propose a method for adjusting the posture of the fingertip to the surface of the object. The method applies a “Net-Structure Proximity Sensor” on the fingertip, which can detect the postural error in the roll and pitch axes between the fingertip and the object surface. The experimental results show that the postural error is corrected in both axes even if the object rotates dynamically.

  15. A Review of Artificial Lateral Line in Sensor Fabrication and Bionic Applications for Robot Fish

    PubMed Central

    Wang, Anyi; Wang, Xinbao; Liu, Peng

    2016-01-01

    The lateral line is a system of sense organs that helps fish maneuver in dark environments. The artificial lateral line (ALL) imitates the structure of the lateral line in fish and provides an invaluable means for underwater sensing technology and robot fish control. This paper reviews the ALL, including sensor fabrication and applications to robot fish. The biophysics of the lateral line is first introduced to enhance the understanding of lateral line structure and function. The design and fabrication of ALL sensors on the basis of various sensing principles are then presented. ALL systems are collections of sensors that include a carrier and control circuitry; their structure and hydrodynamic detection are reviewed. Finally, further research trends and existing problems of the ALL are discussed. PMID:28115825

  16. On Navigation Sensor Error Correction

    NASA Astrophysics Data System (ADS)

    Larin, V. B.

    2016-01-01

    The navigation problem for the simplest wheeled robotic vehicle is solved by measuring only kinematical parameters, doing without accelerometers and angular-rate sensors. It is supposed that the steerable-wheel angle sensor has a bias that must be corrected. The navigation parameters are corrected using GPS. The proposed approach regards the wheeled robot as a system with nonholonomic constraints. The performance of such a navigation system is demonstrated by way of an example.
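A sensor bias of the kind described can be corrected by averaging the residual between the onboard reading and a GPS-derived reference for the same quantity. A toy sketch under the assumptions of a constant bias and zero-mean noise; the paper's actual estimator, built on the nonholonomic-constraint formulation, is more elaborate:

```python
def estimate_sensor_bias(sensor, reference):
    """Constant-bias estimate: the mean residual between the
    steerable-wheel angle sensor readings and a GPS-derived reference.
    Assumes bias is constant and remaining noise is zero-mean."""
    residuals = [s - r for s, r in zip(sensor, reference)]
    return sum(residuals) / len(residuals)
```

Once estimated, the bias is simply subtracted from subsequent sensor readings before they enter the dead-reckoning equations; a recursive (running-mean or Kalman) form would allow the correction to be refined online as GPS fixes arrive.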

  17. The Performance Analysis of AN Indoor Mobile Mapping System with Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.

    2015-08-01

    Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber cities, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry; to achieve their integration, multi-sensor integrated mapping technology has been clearly established. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, especially to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor, and a light detection and ranging (LIDAR) sensor mounted on a robot. In summary, this paper designs the payload for an indoor MMS to generate floor plans. The first session concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by the two sensors, the Kinect RGB-D sensor and the LIDAR, on the robot. Moreover, the generated floor plan is compared with the known plan for both validation and verification.

  18. Research of the master-slave robot surgical system with the function of force feedback.

    PubMed

    Shi, Yunyong; Zhou, Chaozheng; Xie, Le; Chen, Yongjun; Jiang, Jun; Zhang, Zhenfeng; Deng, Ze

    2017-12-01

    Surgical robots lack force feedback, which may lead to operation errors. In order to improve surgical outcomes, this research developed a new master-slave surgical robot designed with an integrated force sensor. The new structure designed for the master-slave robot employs a force feedback mechanism. A six-dimensional force sensor was mounted on the tip of the slave robot's actuator. Sliding mode control was adopted to control the slave robot. According to the movement of the master system manipulated by the surgeon, the slave's movement and the force feedback function were validated. The motion was completed, the standard deviation was calculated, and the force data were detected. Hence, force feedback was realized in the experiment. The surgical robot can help surgeons complete trajectory motions with haptic sensation. Copyright © 2017 John Wiley & Sons, Ltd.
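Sliding mode control, used here for the slave robot, drives a sliding variable (such as tracking error) to zero with a switching law whose gain dominates bounded disturbances. A one-dimensional toy sketch; the plant, gain, and disturbance below are illustrative assumptions, not the paper's actual robot dynamics:

```python
import math

def simulate_smc(x0, ref, k=2.0, dt=0.001, steps=4000):
    """First-order sliding mode control of x' = u + d toward ref.

    s = x - ref is the sliding variable; u = -k*sign(s) with k chosen
    larger than the disturbance bound (|d| <= 0.5 < k) guarantees s -> 0
    in finite time, at the cost of chattering of amplitude ~ k*dt.
    """
    x = x0
    for i in range(steps):
        s = x - ref
        u = -k * (1 if s > 0 else -1 if s < 0 else 0)
        d = 0.5 * math.sin(0.01 * i)   # bounded, unknown-to-controller
        x += (u + d) * dt
    return x
```

The discontinuous switching is what makes the scheme robust to the unmodeled forces a surgical slave arm encounters; in practice a boundary layer or higher-order sliding mode would be used to suppress the chattering.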

  19. Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.

    PubMed

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-02-21

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in mobile robotics by dealing with the problems that arise in real-world experiments. The laboratory allows users to work from home, tele-operating a real robot that takes measurements from its sensors in order to build a map of its environment. In addition, the application allows interaction with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), through the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing. Practical examples of the laboratory's application in the inter-University Master of Systems Engineering and Automatic Control are presented.

  20. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complicated function using information acquired from several simple functions the robot is equipped with. A self-learning and recognition system for human facial expressions, operating within a natural relationship between human and robot, is proposed. A robot with this system can understand human facial expressions and behave according to them after the learning process is complete. The system is modelled after the process by which a baby learns its parents’ facial expressions. With an on-board camera the system acquires face images, and with CdS sensors on the robot’s head it acquires information about human actions. Using the information from these sensors, the robot extracts features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot acts according to the recognized facial expression.

  1. Virtual and Remote Robotic Laboratory Using EJS, MATLAB and LabVIEW

    PubMed Central

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-01-01

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in mobile robotics by dealing with the problems that arise in real-world experiments. The laboratory allows users to work from home, tele-operating a real robot that takes measurements from its sensors in order to build a map of its environment. In addition, the application allows interaction with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), through the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing. Practical examples of the laboratory's application in the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578

  2. An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant

    NASA Astrophysics Data System (ADS)

    Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter

    2017-09-01

    One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with greater autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first on the product structure, we inform the agent through a generic approach based on product models. The product model is then transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate the problem of loosening a connection and distributing the work as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs for the fulfillment of the assistance. The aim of this research is to equip robot systems with knowledge and skills that allow them to perform their assistance autonomously, ultimately improving the ergonomics of disassembly workstations.

  3. Coordinated perception by teams of aerial and ground robots

    NASA Astrophysics Data System (ADS)

    Grocholsky, Benjamin P.; Swaminathan, Rahul; Kumar, Vijay; Taylor, Camillo J.; Pappas, George J.

    2004-12-01

    Air and ground vehicles exhibit complementary capabilities and characteristics as robotic sensor platforms. Fixed-wing aircraft offer a broad field of view and rapid coverage of search areas. However, minimum operating airspeed and altitude limits, combined with attitude uncertainty, place a lower limit on their ability to detect and localize ground features. Ground vehicles, on the other hand, offer high-resolution sensing over relatively short ranges, with the disadvantage of slow coverage. This paper presents a decentralized architecture and solution methodology for seamlessly realizing the collaborative potential of air and ground robotic sensor platforms. We provide a framework based on an established approach to the underlying sensor fusion problem, which provides transparent integration of information from heterogeneous sources. An information-theoretic utility measure captures the task objective and robot inter-dependencies. A simple distributed solution mechanism is employed to determine team members' sensing trajectories subject to the constraints of individual vehicle and sensor sub-systems. The architecture is applied to a mission involving searching for and localizing an unknown number of targets in a user-specified search area. Results for a team of two fixed-wing UAVs and two all-terrain UGVs equipped with vision sensors are presented.
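An information-theoretic utility of the kind mentioned above scores candidate sensing actions by their expected reduction in uncertainty about the target state. A minimal sketch computing expected information gain (mutual information) between a discrete target hypothesis and a noisy measurement; the binary hypothesis and the confusion-matrix sensor model are illustrative assumptions, not the paper's specific formulation:

```python
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * log2(q) for q in p if q > 0)

def expected_info_gain(prior, likelihood):
    """Mutual information between hypothesis and measurement.

    prior[h]          -- P(hypothesis h), e.g. target present/absent
    likelihood[z][h]  -- P(measurement z | hypothesis h)
    Returns H(prior) minus the expected posterior entropy; a vehicle
    would choose the sensing action maximizing this utility.
    """
    gain = entropy(prior)
    for z in range(len(likelihood)):
        pz = sum(likelihood[z][h] * prior[h] for h in range(len(prior)))
        if pz == 0:
            continue
        post = [likelihood[z][h] * prior[h] / pz for h in range(len(prior))]
        gain -= pz * entropy(post)
    return gain
```

Under this utility, a high-resolution UGV observation of a nearby cell and a coarse UAV sweep of many cells can be compared on a common scale, which is what allows the heterogeneous team to coordinate without a central planner.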

  4. Handheld and mobile hyperspectral imaging sensors for wide-area standoff detection of explosives and chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.; Nelson, Matthew P.

    2016-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the investigation and analysis of targets in complex backgrounds with a high degree of autonomy. HSI is beneficial for the detection of threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Two HSI techniques that have proven valuable are Raman and shortwave infrared (SWIR) HSI. Unfortunately, current-generation HSI systems have numerous size, weight, and power (SWaP) limitations that make their integration onto a handheld or field-portable platform difficult. Systems that are field-portable achieve this by sacrificing performance, typically by providing an inefficient area search rate, requiring close proximity to the target for screening, and/or eliminating the potential to conduct real-time measurements. To address these shortcomings, ChemImage Sensor Systems (CISS) is developing a variety of wide-field hyperspectral imaging systems. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rates (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide a higher probability of detection and lower false alarm rates. This paper provides background on Raman and SWIR HSI, discusses the applications of these techniques, and gives an overview of novel CISS HSI sensors, focusing on sensor design and detection results.

  5. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    PubMed

    Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
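
    The goal-inference step this abstract describes, a posterior over goals given observed action steps, can be sketched as a toy Bayes computation. The goals, actions and likelihood values below are illustrative assumptions, not the paper's learned probabilistic models.

```python
def goal_posterior(actions, likelihood, prior):
    """P(goal | observed actions) proportional to prior * product of
    P(action | goal); unseen actions get a small floor probability."""
    scores = {}
    for g, p in prior.items():
        s = p
        for a in actions:
            s *= likelihood[g].get(a, 1e-6)
        scores[g] = s
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

# Hypothetical action models, standing in for self-learned ones.
likelihood = {
    "stack": {"reach": 0.8, "grasp": 0.7, "lift": 0.9},
    "push":  {"reach": 0.8, "slide": 0.9},
}
prior = {"stack": 0.5, "push": 0.5}
post = goal_posterior(["reach", "grasp"], likelihood, prior)
```

    Observing a grasp sharply favors the "stack" goal here; a robot could then imitate the inferred goal with its own actions, or ask for help when the posterior stays flat.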

  6. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning

    PubMed Central

    Chung, Michael Jae-Yoon; Friesen, Abram L.; Fox, Dieter; Meltzoff, Andrew N.; Rao, Rajesh P. N.

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration. PMID:26536366

  7. Task directed sensing

    NASA Technical Reports Server (NTRS)

    Firby, R. James

    1990-01-01

    High-level robot control research must confront the limitations imposed by real sensors if robots are to be controlled effectively in the real world. In particular, sensor limitations make it impossible to maintain a complete, detailed world model of the situation surrounding the robot. To address the problems involved in planning with the resulting incomplete and uncertain world models, traditional robot control architectures must be altered significantly. Task-directed sensing and control is suggested as a way of coping with world model limitations by focusing sensing and analysis resources on only those parts of the world relevant to the robot's active goals. The RAP adaptive execution system is used as an example of a control architecture designed to deploy sensing resources in this way to accomplish both action and knowledge goals.

  8. System of launchable mesoscale robots for distributed sensing

    NASA Astrophysics Data System (ADS)

    Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.

    1999-08-01

    A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on total volume and power consumption of the payloads due to the small size of the robot. Emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW; about 5 times less than the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.

  9. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S.

    2014-01-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber-optic force sensor based on the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into the MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render the forces associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid-actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, with the slave robot residing in the scanner bore while the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot validate the tracking performance of the integrated system, with a position tracking error of 0.318 mm and a sine-wave force tracking error of 2.227 N. PMID:25126446

  10. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback.

    PubMed

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S

    2013-01-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber-optic force sensor based on the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into the MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render the forces associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid-actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, with the slave robot residing in the scanner bore while the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot validate the tracking performance of the integrated system, with a position tracking error of 0.318 mm and a sine-wave force tracking error of 2.227 N.

  11. Accurate multi-robot targeting for keyhole neurosurgery based on external sensor monitoring.

    PubMed

    Comparetti, Mirko Daniele; Vaccarella, Alberto; Dyagilev, Ilya; Shoham, Moshe; Ferrigno, Giancarlo; De Momi, Elena

    2012-05-01

    Robotics has recently been introduced in surgery to improve intervention accuracy, to reduce invasiveness and to allow new surgical procedures. In this framework, the ROBOCAST system is an optically surveyed multi-robot chain aimed at enhancing the accuracy of surgical probe insertion during keyhole neurosurgery procedures. The system encompasses three robots, connected as a multiple kinematic chain (serial and parallel) totalling 13 degrees of freedom, and is used to automatically align the probe with a desired planned trajectory. The probe is then inserted into the brain, towards the planned target, by means of a haptic interface. This paper presents a new iterative targeting approach for surgical robotic navigation, in which the multi-robot chain aligns the surgical probe to the planned pose while an external sensor is used to decrease the alignment errors. The iterative targeting was tested in an operating room environment using a skull phantom, with targets selected on magnetic resonance images. The proposed targeting procedure achieves a residual median Euclidean distance of about 0.3 mm between the planned and reached targets, thus satisfying the surgical accuracy requirement (1 mm) imposed by the resolution of the medical images used. The performance proved to be independent of the calibration accuracy of the robots' optical sensor.
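
    The iterative targeting idea, commanding a correction, re-measuring the residual with the external sensor, and repeating until the error is below tolerance, can be sketched in one dimension. The gain, tolerance and poses below are invented for illustration.

```python
def iterative_targeting(target, start, correction_gain=0.8, tol=3e-4, max_iter=50):
    """Drive the probe pose (metres, 1-D for simplicity) toward `target`.
    Each commanded move removes only part of the externally measured error,
    mimicking an imperfectly calibrated robot chain; the external sensor's
    residual measurement drives the next correction."""
    pose, steps = start, 0
    while abs(target - pose) > tol and steps < max_iter:
        error = target - pose            # measured by the external sensor
        pose += correction_gain * error  # partial correction by the robots
        steps += 1
    return pose, steps

pose, steps = iterative_targeting(target=0.100, start=0.095)
```

    Because the external sensor closes the loop, the residual shrinks geometrically even though each individual correction is imperfect, which is why the final accuracy no longer depends on the robot calibration.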

  12. EEG theta and Mu oscillations during perception of human and robot actions

    PubMed Central

    Urgen, Burcu A.; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P.

    2013-01-01

    The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore neural basis of action processing on the one hand, and inform the design of social robots on the other. PMID:24348375

  13. EEG theta and Mu oscillations during perception of human and robot actions.

    PubMed

    Urgen, Burcu A; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P

    2013-01-01

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore neural basis of action processing on the one hand, and inform the design of social robots on the other.

  14. Vision servo of industrial robot: A review

    NASA Astrophysics Data System (ADS)

    Zhang, Yujin

    2018-04-01

    Robot technology has been applied to many areas of production and everyday life. With the continuous development of robot applications, the requirements placed on robots keep rising. To give industrial robots better perception, vision sensors have been widely adopted. In this paper, the application directions of industrial robots are reviewed; the development, classification and application of robot visual servoing technology are discussed; and the development prospects of industrial robot visual servoing technology are outlined.

  15. Acoustic sensors on small robots for the urban environment

    NASA Astrophysics Data System (ADS)

    Young, Stuart H.; Scanlon, Michael V.

    2005-05-01

    As the Army transforms to the Future Force, particular attention must be paid to operations in complex and urban terrain. Because our adversaries realize that we don't have battlefield dominance in the urban environment, and because population growth and migration to urban environments are still on the increase, our adversaries will continue to draw us into operations in the urban environment. The Army Research Laboratory (ARL) is developing technology to equip our soldiers for the urban operations of the future. Sophisticated small robotic platforms with diverse sensor suites will be an integral part of the Future Force, and must be able to collaborate not only amongst themselves but also with their manned partners. The use of acoustic sensors on robotic platforms, as shown in this paper, will greatly aid the soldiers of the future force in performing numerous types of missions, including Reconnaissance, Surveillance, and Target Acquisition (RSTA), by providing situational awareness, particularly to the dismounted soldier operating in the urban environment. The work conducted by the Army Research Laboratory and discussed in this paper will be transitioned to the FCS Small Unattended Ground Vehicle (SUGV) program and FFW; the Army Research Laboratory is already working with these programs to ensure a feasible migration path. This paper focuses on four areas relating to acoustic sensing on robots for the urban environment, as demonstrated at the DoD Horizontal Fusion Portfolio's Warriors Edge (WE) Quantum Leap II (QL II) demonstration at Ft. Benning, GA, in August 2004: small (man-portable) robot detection, mule-sized robot detection, sensor fusion across multiple platforms, and soldier/robot team interaction.

  16. TU-AB-201-03: A Robot for the Automated Delivery of An Electromagnetic Tracking Sensor for the Localization of Brachytherapy Catheters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Don, S; Cormack, R; Viswanathan, A

    Purpose: To present a programmable robotic system for the accurate and fast deployment of an electromagnetic (EM) sensor for brachytherapy catheter localization. Methods: A robotic system for deployment of an EM sensor was designed and built. The system was programmed to increment the sensor position at specified time and space intervals. Sensor delivery accuracy was measured in a phantom using the localization of the EM sensor and tested in different environmental conditions. Accuracy was tested by measuring the distance between the physical locations reached by the sensor (measured by the EM tracker) and the intended programmed locations. Results: The system consisted of a stepper motor connected to drive wheels (that grip the cable to move the sensor) and a series of guides to connect to a brachytherapy transfer tube, all controlled by a programmable Arduino microprocessor. The total cost for parts was <$300. The positional accuracy of the sensor location was within 1 mm of the expected position provided by the motorized guide system. Acquisition speed to localize a brachytherapy catheter with 20 cm of active length was 10 seconds. The current design showed some cable slip and warping depending on environment temperature. Conclusion: The use of EM tracking for the localization of brachytherapy catheters has been previously demonstrated. Efficient data acquisition and artifact reduction require fast and accurate deployment of an EM sensor in consistent, repeatable patterns, which cannot practically be achieved manually. The design of an inexpensive, programmable robot allowing for the precise deployment of stepping patterns was presented, and a prototype was built. Further engineering is necessary to ensure that the device provides efficient independent localization of brachytherapy catheters. This research was funded by the Kaye Family Award.
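
    The stepping pattern the abstract describes, incrementing the sensor at fixed space intervals and dwelling at each stop, can be sketched as follows. The step size and dwell time are invented parameters, not values reported by the authors.

```python
def stepping_pattern(active_length_cm=20.0, step_cm=0.5, dwell_s=0.25):
    """Dwell positions (cm) along a catheter's active length and the total
    acquisition time (s) when the sensor pauses `dwell_s` at each stop.
    Step size and dwell time are illustrative assumptions."""
    n_stops = int(active_length_cm / step_cm) + 1
    positions = [i * step_cm for i in range(n_stops)]
    return positions, n_stops * dwell_s

positions, total_s = stepping_pattern()
```

    A microcontroller driving the stepper would iterate over such a position list, and the EM tracker would record a reading at each dwell.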

  17. Enhanced control & sensing for the REMOTEC ANDROS Mk VI robot. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spelt, P.F.; Harvey, H.W.

    1997-08-01

    This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot, along with a variety of sensors and encoders, to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.

  18. Method for six-legged robot stepping on obstacles by indirect force estimation

    NASA Astrophysics Data System (ADS)

    Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun

    2016-07-01

    Adaptive gaits for legged robots often require force sensors installed on the foot-tips; however, impact, temperature or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct and inverse kinematics models are established, the force Jacobian matrix is derived from the kinematics model, and thus the indirect force estimation model is established. The relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is then described. Furthermore, an adaptive tripod static gait is designed: the robot alters its leg trajectory to step onto obstacles using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. One experiment validates the indirect force estimation model; the adaptive gait is tested in another. Experimental results show that the robot can successfully step onto a 0.2 m high obstacle. The proposed method allows a six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment of the robot's foot tips.
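
    The core of the indirect estimation is the static force relation tau = J^T f: given the three motor torques of one leg and the force Jacobian, the foot-tip force follows by solving the transposed system. The 3x3 Jacobian and the force below are invented numbers, not the UP-2UPS leg's actual model.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def foot_force_from_torques(J, tau):
    """tau = J^T f  =>  recover f by solving the transposed system."""
    Jt = [[J[r][c] for r in range(3)] for c in range(3)]
    return solve(Jt, tau)

# Invented 3x3 leg Jacobian and external foot-tip force (N).
J = [[0.30, 0.00, 0.10],
     [0.00, 0.25, 0.05],
     [0.05, 0.05, 0.20]]
f_true = [10.0, 5.0, 40.0]
tau = [sum(J[r][c] * f_true[r] for r in range(3)) for c in range(3)]  # J^T f
f_est = foot_force_from_torques(J, tau)
```

    In practice the motor torques would come from motor current or commanded effort rather than being computed from a known force, and the Jacobian would be evaluated from the leg's kinematics at the current configuration.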

  19. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    This paper addresses a multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, the user interface, the extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  20. A Tree Based Self-routing Scheme for Mobility Support in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Young-Duk; Yang, Yeon-Mo; Kang, Won-Seok; Kim, Jin-Wook; An, Jinung

    Recently, WSNs (Wireless Sensor Networks) with mobile robots have become a growing technology offering efficient communication services for anytime, anywhere applications. However, tiny sensor nodes have very limited network resources due to low battery power, low data rates, node mobility, and channel interference between neighbors. Thus, in this paper, we propose a tree-based self-routing protocol for autonomous mobile robots based on beacon mode and implement it in real test-bed environments. The proposed scheme offers beacon-based real-time scheduling for a reliable association process between parent and child nodes. In addition, it supports a smooth handover procedure by reducing the flooding overhead of control packets. Through performance evaluation using a real test-bed system and simulation, we show that the proposed scheme demonstrates promising performance for wireless sensor networks with mobile robots.
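
    Self-routing on a tree, as the abstract's title suggests, needs no routing tables: a packet climbs from the source toward the root until it reaches a common ancestor, then descends to the destination. The topology below is a made-up five-node example, not the authors' test-bed.

```python
def tree_route(parent, src, dst):
    """Self-routing on a tree: climb from src to the first common ancestor,
    then descend to dst. `parent` maps each node to its parent (root -> None)."""
    def ancestors(n):
        path = [n]
        while parent[n] is not None:
            n = parent[n]
            path.append(n)
        return path
    up = ancestors(src)
    down = ancestors(dst)
    meet = next(n for n in up if n in down)
    return up[:up.index(meet) + 1] + list(reversed(down[:down.index(meet)]))

# Invented topology: a sink with two branches of one child each.
parent = {"sink": None, "a": "sink", "b": "sink", "a1": "a", "b1": "b"}
route = tree_route(parent, "a1", "b1")
```

    A mobile robot handing over to a new parent only needs to update its `parent` entry, which is what keeps the control-packet overhead low.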

  1. Package analysis of 3D-printed piezoresistive strain gauge sensors

    NASA Astrophysics Data System (ADS)

    Das, Sumit Kumar; Baptist, Joshua R.; Sahasrabuddhe, Ritvij; Lee, Woo H.; Popa, Dan O.

    2016-05-01

    Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate), or PEDOT:PSS, is a flexible polymer that exhibits piezoresistive properties when subjected to structural deformation. PEDOT:PSS has high conductivity and thermal stability, which makes it an ideal candidate for use in pressure sensors. Applications of this technology include whole-body robot skin that can increase the safety and physical collaboration of robots operating in close proximity to humans. In this paper, we present a finite element model of strain-gauge touch sensors that have been 3D-printed onto Kapton and silicone substrates using electro-hydro-dynamic ink-jetting. Simulations of the piezoresistive and structural model of the entire packaged sensor were carried out in COMSOL and compared with experimental results for validation. The model will be useful in designing future robot skin with predictable performance.
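
    The piezoresistive behavior underlying such a sensor is usually summarized by the gauge-factor relation ΔR/R0 = GF·ε. The resistance and gauge-factor values below are assumptions for illustration; printed PEDOT:PSS gauge factors vary widely and are not taken from this paper.

```python
def gauge_resistance(r0_ohm, gauge_factor, strain):
    """Piezoresistive strain-gauge response: R = R0 * (1 + GF * strain).
    All constants passed in are illustrative assumptions."""
    return r0_ohm * (1.0 + gauge_factor * strain)

# 1 kOhm unstrained trace, assumed GF of 2, at 1 % strain.
r = gauge_resistance(r0_ohm=1.0e3, gauge_factor=2.0, strain=0.01)
```

    Reading such a sensor is then a matter of measuring the resistance change, typically through a voltage divider or bridge, and inverting this relation for strain.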

  2. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    PubMed Central

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research, a fully sensorized cooperative robot system for the manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and owing to closed-loop control, the absolute positioning accuracy was reduced to the accuracy of the navigation camera, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  3. Serendipitous Offline Learning in a Neuromorphic Robot.

    PubMed

    Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg

    2016-01-01

    We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.

  4. INL Generic Robot Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).

  5. Estimating Tool–Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool

    PubMed Central

    Zhao, Baoliang; Nelson, Carl A.

    2016-01-01

    Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool–tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool–tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool–tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool–tissue interaction forces in real time, thereby increasing surgical efficiency and safety. PMID:27303591
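The current-based estimation idea can be reduced to a short sketch: motor current maps to joint torque through the torque constant and gear ratio, a friction model is subtracted, and the remaining torque is converted to a tip force. All parameters below (torque constant, gear ratio, friction coefficients, moment arm) are illustrative assumptions, not the values identified in the paper:

```python
# Hedged sketch of sensorless tool-tissue force estimation from motor current.
# The numeric parameters are hypothetical, for illustration only.

def estimate_grip_force(current_a, torque_const=0.0234, gear_ratio=64.0,
                        coulomb_friction=0.015, viscous_coeff=0.002,
                        velocity=0.0, moment_arm=0.01):
    """Estimate tool-tissue force (N) from motor current (A).

    motor torque = Kt * i
    joint torque = gear_ratio * motor torque - friction losses
    tip force    = joint torque / moment arm
    """
    motor_torque = torque_const * current_a
    # simple Coulomb + viscous friction model (an assumption)
    friction = coulomb_friction * (1 if velocity >= 0 else -1) + viscous_coeff * velocity
    joint_torque = gear_ratio * motor_torque - friction
    return joint_torque / moment_arm
```

In practice the friction and dynamic terms dominate the estimation error, which is why the paper validates latency and accuracy experimentally rather than relying on such a model alone.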

  6. Controlling free flight of a robotic fly using an onboard vision sensor inspired by insect ocelli

    PubMed Central

    Fuller, Sawyer B.; Karpelson, Michael; Censi, Andrea; Ma, Kevin Y.; Wood, Robert J.

    2014-01-01

    Scaling a flying robot down to the size of a fly or bee requires advances in manufacturing, sensing and control, and will provide insights into mechanisms used by their biological counterparts. Controlled flight at this scale has previously required external cameras to provide the feedback to regulate the continuous corrective manoeuvres necessary to keep the unstable robot from tumbling. One stabilization mechanism used by flying insects may be to sense the horizon or Sun using the ocelli, a set of three light sensors distinct from the compound eyes. Here, we present an ocelli-inspired visual sensor and use it to stabilize a fly-sized robot. We propose a feedback controller that applies torque in proportion to the angular velocity of the source of light estimated by the ocelli. We demonstrate theoretically and empirically that this is sufficient to stabilize the robot's upright orientation. This constitutes the first known use of onboard sensors at this scale. Dipteran flies use halteres to provide gyroscopic velocity feedback, but it is unknown how other insects such as honeybees stabilize flight without these sensory organs. Our results, using a vehicle of similar size and dynamics to the honeybee, suggest how the ocelli could serve this role. PMID:24942846
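The control law described, torque applied in proportion to the estimated angular velocity of the light source, can be sketched as follows. The four-photodiode layout, the finite-difference velocity estimate, and the gain are assumptions for illustration, not the paper's implementation:

```python
# Illustrative sketch of an ocelli-style damping controller: opposing light
# sensors approximate the source angle about each axis; its rate of change
# drives a corrective torque. Layout and gain are hypothetical.

def ocelli_torque(light_prev, light_curr, dt, k=1e-7):
    """Given two successive 4-tuples of photodiode readings, return
    (roll, pitch) torques opposing the apparent rotation of the light source."""
    # difference of opposing sensors approximates the source angle per axis
    angle_prev = (light_prev[0] - light_prev[1], light_prev[2] - light_prev[3])
    angle_curr = (light_curr[0] - light_curr[1], light_curr[2] - light_curr[3])
    # finite-difference angular velocity estimate
    omega = tuple((c - p) / dt for c, p in zip(angle_curr, angle_prev))
    return tuple(-k * w for w in omega)  # damping torque opposes rotation
```

Because the torque opposes the estimated rotation rather than tracking an absolute attitude, the scheme acts as pure damping, which is the property the paper shows is sufficient to keep the upright orientation stable.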

  7. Estimating Tool-Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool.

    PubMed

    Zhao, Baoliang; Nelson, Carl A

    2016-10-01

    Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool-tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool-tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool-tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool-tissue interaction forces in real time, thereby increasing surgical efficiency and safety.

  8. Afocal optical flow sensor for reducing vertical height sensitivity in indoor robot localization and navigation.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2015-05-13

    This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mice has been adopted for mobile robots because it is not affected by wheel slippage. Vertical height variance is thought to be a dominant factor in the systematic error when estimating moving distances for mobile robots driving on uneven surfaces. We propose an approach that mitigates this error by using an afocal (infinite effective focal length) system. We conducted experiments on a linear guide over carpet and three other materials, with sensor heights varying from 30 to 50 mm and a moving distance of 80 cm. Each experiment was repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on carpet over distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.
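A toy model (with assumed numbers) illustrates why the afocal design helps: in a fixed-focal-length module the count-to-distance scale factor depends on sensor height, whereas an ideal afocal module keeps it constant. The magnification model and all constants below are illustrative assumptions, not the paper's optics:

```python
# Toy contrast between a conventional and an afocal optical flow odometer.
# Scale model and constants are hypothetical, for illustration only.

def distance_estimate(counts, height_mm, nominal_height_mm=40.0,
                      counts_per_mm=20.0, afocal=False):
    """Convert accumulated optical-flow counts to travelled distance (mm)."""
    if afocal:
        scale = counts_per_mm  # magnification independent of surface height
    else:
        # toy assumption: magnification varies inversely with height,
        # so a height offset directly biases the distance estimate
        scale = counts_per_mm * nominal_height_mm / height_mm
    return counts / scale
```

With `afocal=True` the same count total maps to the same distance at any height; with the conventional model, riding 1 mm higher than the calibrated height changes the scale factor and hence the estimated distance.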

  9. Modelling of robotic work cells using agent based-approach

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In modern manufacturing systems the requirements, both in scope and in the characteristics of technical procedures, change dynamically. As a result, the organization of the production system cannot keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate quality of the generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents appropriate domains of procedures and knowledge, so that in a self-organizing agent environment they represent the components of a real system. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many constraints associated with the characteristics of this production unit. It is possible to distinguish several groups of structural components that constitute such a system, which confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should include at least the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, and data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.

  10. SafeNet: a methodology for integrating general-purpose unsafe devices in safe-robot rehabilitation systems.

    PubMed

    Vicentini, Federico; Pedrocchi, Nicola; Malosio, Matteo; Molinari Tosatti, Lorenzo

    2014-09-01

    Robot-assisted neurorehabilitation often involves networked systems of sensors ("sensory rooms") and powerful devices in physical interaction with weak users. Safety is unquestionably a primary concern. Some lightweight robot platforms and purpose-designed devices include safety properties using redundant sensors or intrinsically safe design (e.g. compliance and backdrivability, limited exchange of energy). Nonetheless, the entire "sensory room" is required to be fail-safe and safely monitored as a system at large. Yet the sensor capabilities and control algorithms used in functional therapies generally require frequent updates or re-configurations, making a safety-grade release of such devices hardly sustainable in cost-effectiveness and development time. As a consequence, promising integrated platforms for human-in-the-loop therapies have not found clinical application and manufacturing support, because global fail-safe properties could not be maintained. In the general context of cross-machinery safety standards, the paper presents a methodology called SafeNet for extending the safety level of Human Robot Interaction (HRI) systems that use unsafe components, including sensors and controllers. SafeNet treats the robotic system as a device at large and applies the principles of functional safety (as in ISO 13849-1) through a set of architectural procedures and implementation rules. The enabled capability of monitoring a network of unsafe devices through redundant computational nodes allows the use of custom sensors and algorithms, usually planned and assembled at therapy planning time rather than at platform design time. A case study is presented with an actual implementation of the proposed methodology: a specific architectural solution applied to an example of robot-assisted upper-limb rehabilitation with online motion tracking. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Autonomy in robots and other agents.

    PubMed

    Smithers, T

    1997-06-01

    The word "autonomous" has become widely used in artificial intelligence, robotics, and, more recently, artificial life, and is typically used to qualify types of systems, agents, or robots: we see terms like "autonomous systems," "autonomous agents," and "autonomous robots." Its use in these fields is, however, both weak, with no distinctions being made that are not better and more precisely made with other existing terms, and varied, with no single underlying concept being involved. This ill-disciplined usage contrasts strongly with the use of the same term in other fields such as biology, philosophy, ethics, law, and human rights. In all these quite different areas the concept of autonomy is essentially the same, though the language used and the aspects and issues of concern, of course, differ. In all these cases the underlying notion is one of self-law making and the closely related concept of self-identity. In this paper I argue that the loose and varied use of the term autonomous in artificial intelligence, robotics, and artificial life has effectively robbed these fields of an important concept: a concept essentially the same as we find in biology, philosophy, ethics, and law, and one that is needed to distinguish a particular kind of agent or robot from those developed and built so far. I suggest that robots and other agents will have to be autonomous, i.e., self-law making, not just self-regulating, if they are to deal effectively with the kinds of environments in which we live and work: environments which have significant large-scale spatial and temporal invariant structure, but which also have large amounts of local spatial and temporal dynamic variation and unpredictability, and which lead to the frequent occurrence of previously unexperienced situations for the agents that interact with them.

  12. Fingertip-shaped optical tactile sensor for robotic applications

    NASA Technical Reports Server (NTRS)

    Begej, Stefan

    1988-01-01

    Progress is described regarding the development of a high-density, fiber-optic, fingertip-shaped tactile sensor specifically designed for application to dexterous robotics. The sensor operates on optical principles involving the frustration of total internal reflection at a waveguide/elastomer interface and generates a grey-scale tactile image that represents the normal forces of contact. The sensor contains 256 taxels (sensing sites) distributed in a dual-density pattern that includes a tactile fovea near the tip which measures 13 mm x 13 mm and contains 169 taxels. The details regarding the design and construction of this tactile sensor are presented, in addition to photographs of tactile imprints.

  13. Robot Position Sensor Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors, which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow for continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer based methods are proposed to determine the position of the joint. The joint specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor. The joint specific method is not computationally complex and the position error is bounded. The system wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the position of sets of multiple joints. The system wide method requires fewer accelerometers than the joint specific method to make all joint position sensors fault tolerant, but is more computationally complex and has weaker convergence properties. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.

  14. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

    Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application in this area is the pushing operation, in which the robot uses multiple sensors to move an object by pushing it. Generally, a pushing operation consists of “approaching, touching, and pushing"(1). However, most research in this field deals with how the pushed object follows a predefined trajectory, while the consequences of the robot body or tool-tip hitting the object are neglected. On collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance controller is a control system with trajectory compensation that allows the external force to be followed. In this paper, the first compliance controller is driven by the external force estimated by a reaction torque observer(2), which compensates for contact sensation. The other compensates for non-contact sensation. A contact sensation, acquired from a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before touching the object, realized in this paper with a visual sensor. Here, instead of using visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the object side to show the effectiveness of the proposed method. Both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Finally, experimental results show the validity of the proposed method.
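The dual-compensation idea can be reduced to a one-line sketch: the commanded reference is offset in proportion both to the observed contact force and to the vision-derived virtual force. The gains and the linear blend below are illustrative assumptions, not the paper's controller:

```python
# Hedged sketch of dual compliance compensation: contact force (e.g. from a
# reaction torque observer) and a non-contact "visual force" (e.g. from depth)
# both offset the commanded trajectory. Gains are hypothetical.

def compliant_reference(x_des, f_contact, f_virtual, k_contact=0.002, k_virtual=0.001):
    """Return the compensated position reference given the desired position,
    the sensed contact force, and the vision-derived virtual force."""
    return x_des - k_contact * f_contact - k_virtual * f_virtual
```

Because the virtual force grows as the robot approaches the object, the reference is softened before contact ever occurs, which is the mechanism that makes the pushing operation "soft".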

  15. Neural networks for satellite remote sensing and robotic sensor interpretation

    NASA Astrophysics Data System (ADS)

    Martens, Siegfried

    Remote sensing of forests and robotic sensor fusion can be viewed, in part, as supervised learning problems, mapping from sensory input to perceptual output. This dissertation develops ARTMAP neural networks for real-time category learning, pattern recognition, and prediction tailored to remote sensing and robotics applications. Three studies are presented. The first two use ARTMAP to create maps from remotely sensed data, while the third uses an ARTMAP system for sensor fusion on a mobile robot. The first study uses ARTMAP to predict vegetation mixtures in the Plumas National Forest based on spectral data from the Landsat Thematic Mapper satellite. While most previous ARTMAP systems have predicted discrete output classes, this project develops new capabilities for multi-valued prediction. On the mixture prediction task, the new network is shown to perform better than maximum likelihood and linear mixture models. The second remote sensing study uses an ARTMAP classification system to evaluate the relative importance of spectral and terrain data for map-making. This project has produced a large-scale map of remotely sensed vegetation in the Sierra National Forest. Network predictions are validated with ground truth data, and maps produced using the ARTMAP system are compared to a map produced by human experts. The ARTMAP Sierra map was generated in an afternoon, while the labor-intensive expert method required nearly a year to perform the same task. The robotics research uses an ARTMAP system to integrate visual information and ultrasonic sensory information on a B14 mobile robot. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. ARTMAP effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.

  16. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Han, Baoling

    2016-11-01

    The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance that integrates a binocular stereo vision sensor and a self-built 3D Lidar with modified ant colony optimization path planning to reconstruct the environment map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unsatisfactory when feature points are few and lighting is poor. Therefore, this system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles, and sensor information fusion is used to rebuild the environment map. Obstacles are first detected separately from the Lidar data and the visual data, and the two results are then fused to obtain a more complete and accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyses its advantages, disadvantages, and their underlying causes in depth, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily becoming trapped in local optima, slow search speed, and poor search results. The experiment processes images and drives the motors under Matlab and Visual Studio, and establishes a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a simulation platform under Linux.
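A minimal ant colony planner on an occupancy grid conveys the flavor of the path-planning stage. The grid, ant count, evaporation rate, and deposit rule below are generic textbook choices for illustration, not the paper's modified algorithm:

```python
# Minimal ant colony optimization sketch for grid path planning.
# Parameters and the deposit rule are illustrative assumptions.
import random

def aco_path(grid, start, goal, n_ants=20, n_iter=30, rho=0.5, q=1.0, seed=1):
    """Return a path (list of cells) from start to goal on a 0/1 occupancy grid."""
    random.seed(seed)
    rows, cols = len(grid), len(grid[0])
    tau = {}  # pheromone on directed edges

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                yield (nr, nc)

    best = None
    for _ in range(n_iter):
        found = []
        for _ in range(n_ants):
            cur, path, visited = start, [start], {start}
            while cur != goal and len(path) < rows * cols:
                opts = [n for n in neighbors(cur) if n not in visited]
                if not opts:
                    break  # dead end: ant gives up
                weights = [tau.get((cur, n), 1.0) for n in opts]
                cur = random.choices(opts, weights)[0]
                path.append(cur)
                visited.add(cur)
            if cur == goal:
                found.append(path)
                if best is None or len(path) < len(best):
                    best = path
        for edge in list(tau):  # evaporation
            tau[edge] *= (1.0 - rho)
        for path in found:      # deposit: shorter paths deposit more
            for a, b in zip(path, path[1:]):
                tau[(a, b)] = tau.get((a, b), 1.0) + q / len(path)
    return best
```

The tabu list (`visited`) and pheromone evaporation are the standard levers against the local-optimum and slow-convergence problems the abstract mentions; the paper's improvements go further but are not reproduced here.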

  17. Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1987-01-01

    Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system, including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one picture element in Y were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.

  18. On the Utilization of Social Animals as a Model for Social Robotics

    PubMed Central

    Miklósi, Ádám; Gácsi, Márta

    2012-01-01

    Social robotics is a thriving field in building artificial agents. The possibility to construct agents that can engage in meaningful social interaction with humans presents new challenges for engineers. In general, social robotics has been inspired primarily by psychologists with the aim of building human-like robots. Only a small subcategory of “companion robots” (also referred to as robotic pets) was built to mimic animals. In this opinion essay we argue that all social robots should be seen as companions and more conceptual emphasis should be put on the inter-specific interaction between humans and social robots. This view is underlined by the means of an ethological analysis and critical evaluation of present day companion robots. We suggest that human–animal interaction provides a rich source of knowledge for designing social robots that are able to interact with humans under a wide range of conditions. PMID:22457658

  19. A technical challenge for robot-assisted minimally invasive surgery: precision surgery on soft tissue.

    PubMed

    Stallkamp, J; Schraft, R D

    2005-01-01

    In minimally invasive surgery, a higher degree of accuracy is required by surgeons both for current and for future applications. This could be achieved using either a manipulator or a robot which would undertake selected tasks during surgery. However, a manually-controlled manipulator cannot fully exploit the maximum accuracy and feasibility of three-dimensional motion sequences. Therefore, apart from being used to perform simple positioning tasks, manipulators will probably be replaced by robot systems more and more in the future. However, in order to use a robot, accurate, up-to-date and extensive data is required which cannot yet be acquired by typical sensors such as CT, MRI, US or common x-ray machines. This paper deals with a new sensor and a concept for its application in robot-assisted minimally invasive surgery on soft tissue which could be a solution for data acquisition in future. Copyright 2005 Robotic Publications Ltd.

  20. SMARBot: a modular miniature mobile robot platform

    NASA Astrophysics Data System (ADS)

    Meng, Yan; Johnson, Kerry; Simms, Brian; Conforth, Matthew

    2008-04-01

    Miniature robots have many advantages over their larger counterparts, such as low cost, low power consumption, and the ease of building a large team for complex tasks. Heterogeneous teams of miniature robots could provide powerful situational awareness owing to their different locomotion capabilities and sensor information. However, it would be expensive and time-consuming to develop a specific embedded system for each type of robot. In this paper, we propose a generic modular embedded system architecture called SMARbot (Stevens Modular Autonomous Robot), which consists of a set of hardware and software modules that can be configured to construct various types of robot systems. These modules include a high-performance microprocessor, a reconfigurable hardware component, wireless communication, and diverse sensor and actuator interfaces. The design of all modules in the electrical subsystem, the selection criteria for module components, and the real-time operating system are described. Some proof-of-concept experimental results are also presented.

  1. Method and apparatus for calibrating multi-axis load cells in a dexterous robot

    NASA Technical Reports Server (NTRS)

    Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)

    2012-01-01

    A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values, and from the set of all calibration matrices that minimize error in force balance equations, selects the set of calibration matrices that is closest in a value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.

  2. Interactive robot control system and method of use

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)

    2012-01-01

    A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays a status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events, and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying status and operation of the robot using the CDL. The parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within the sequence.

  3. Fault tolerant multi-sensor fusion based on the information gain

    NASA Astrophysics Data System (ADS)

    Hage, Joelle Al; El Najjar, Maan E.; Pomorski, Denis

    2017-01-01

    In the last decade, multi-robot systems have been used in many applications: military operations, intervention in areas dangerous to human life, the management of natural disasters, environmental monitoring, exploration, and agriculture. The integrity of the robots' localization must be ensured for them to achieve their mission under the best conditions. The robots are equipped with proprioceptive (encoders, gyroscope) and exteroceptive (Kinect) sensors. However, these sensors can be affected by various fault types, such as erroneous measurements, biases, outliers, and drifts. Without a sensor fault diagnosis step, the integrity and continuity of localization are compromised. In this work, we present a multi-sensor fusion approach with Fault Detection and Exclusion (FDE) based on information theory. In this context, we are interested in the information gain given by an observation, which can be relevant to the fault tolerance aspect. Moreover, threshold optimization based on the quantity of information given by a decision on the true hypothesis is highlighted.
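The information-gain idea can be sketched for a scalar state: score each measurement by the KL divergence between the prior and the would-be posterior, and exclude measurements whose gain is anomalously large. The Gaussian model, the scalar Kalman-style update, and the fixed threshold are illustrative assumptions, not the paper's formulation:

```python
# Hedged sketch of information-gain-based fault detection and exclusion (FDE)
# for scalar Gaussian state estimates. Model and threshold are hypothetical.
import math

def kl_gaussian(mu0, var0, mu1, var1):
    """KL(N(mu1, var1) || N(mu0, var0)) for scalar Gaussians."""
    return 0.5 * (var1 / var0 + (mu1 - mu0) ** 2 / var0 - 1.0 + math.log(var0 / var1))

def fuse_with_fde(prior_mu, prior_var, measurements, meas_var, gain_threshold=5.0):
    """Fuse scalar measurements, excluding any whose information gain is anomalous."""
    mu, var = prior_mu, prior_var
    for z in measurements:
        # posterior if this measurement were accepted
        k = var / (var + meas_var)
        mu_post, var_post = mu + k * (z - mu), (1.0 - k) * var
        if kl_gaussian(mu, var, mu_post, var_post) > gain_threshold:
            continue  # implausibly informative: treat as faulty and exclude
        mu, var = mu_post, var_post
    return mu, var
```

A consistent measurement shifts the estimate only slightly (small gain), while an outlier would move it drastically (large gain), which is what makes the gain usable as a fault indicator.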

  4. Deodorant Characteristics of Breath Odor Occurred from Favorite Foods Using Metal Oxide Gas Sensors

    NASA Astrophysics Data System (ADS)

    Seto, Shuichi; Oyabu, Takashi; Cai, Kuiqian; Katsube, Teruaki

    Three types of metal oxide gas sensors were adopted to detect the degree of breath odor. Various kinds of information are contained in the odor. Each sensor has different sensitivities to gaseous chemical substances, and the sensitivities also differ according to human behaviors, for example taking a meal, brushing teeth, or drinking something. There is also a possibility that the sensors can detect degrees of daily fatigue. Sensor sensitivities were low for the expiration of the elderly when the subject drank green tea. In this study, it is proposed that the odor system be incorporated into a healing robot. The robot can communicate with the elderly using several words and can also connect to the Internet. As a result, the robot can identify basic human behaviors and recognize the living conditions of the resident, and it can also execute a kind of information retrieval through the Internet. It therefore has healing capability for the aged, and can also receive and transmit information.

  5. Certainty grids for mobile robots

    NASA Technical Reports Server (NTRS)

    Moravec, H. P.

    1987-01-01

    A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects.
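The incremental update described above is commonly written today in log-odds form; a minimal per-cell sketch, with an assumed toy inverse sensor model, is:

```python
# Minimal log-odds certainty/occupancy grid cell update. The inverse sensor
# model probabilities (p_hit, p_miss) are toy assumptions for illustration.
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def update_cell(prior_p, hit, p_hit=0.7, p_miss=0.4):
    """Fuse one range reading into a cell: Bayesian update in log-odds form.
    `hit` is True if the sensor reported this cell as occupied."""
    l = logodds(prior_p) + logodds(p_hit if hit else p_miss)
    return 1.0 / (1.0 + math.exp(-l))  # back to probability
```

Repeated hits push a cell's certainty toward occupied and misses toward free, which is how multiple fuzzy readings combine into the sharper map features the abstract describes.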

  6. Knowledge assistant: A sensor fusion framework for robotic environmental characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feddema, J.T.; Rivera, J.J.; Tucker, S.D.

    1996-12-01

    A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of the sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g., estimated dimensions, weight, material composition, etc.) are displayed in the world model. This paper highlights the major components of this system.

  7. Improvement of the Owner Distinction Method for Healing-Type Pet Robots

    NASA Astrophysics Data System (ADS)

    Nambo, Hidetaka; Kimura, Haruhiko; Hara, Mirai; Abe, Koji; Tajima, Takuya

    In order to decrease human stress, Animal-Assisted Therapy, which uses pets to help heal humans, has attracted attention. However, since animals raise sanitation and safety concerns, it is difficult to apply animal pets in hospitals in practice. For this reason, pet robots have attracted attention as a substitute for animal pets. Since pet robots pose no problems in sanitation and safety, they can serve as a substitute for animal pets in the therapy. In our previous study, where pet robots distinguish their owners like an animal pet does, we used a puppet-type pet robot which has pressure-type touch sensors. However, the accuracy of our method was not sufficient for practical use. In this paper, we propose a method to improve the accuracy of the distinction. The proposed method can be applied to capacitive touch sensors, such as those installed in AIBO, in addition to pressure-type touch sensors. Besides, this paper shows the performance of the proposed method from experimental results and confirms that the proposed method improves distinction performance over the conventional method.

  8. A Fabry-Perot Interferometry Based MRI-Compatible Miniature Uniaxial Force Sensor for Percutaneous Needle Placement

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.

    2014-01-01

    Robot-assisted surgical procedures, taking advantage of the high soft tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in a 3T MRI scanner. The opto-electronic sensing system design was miniaturized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism was designed that integrates the FPI sensor fiber for measuring needle insertion force, and finite element analysis was performed to optimize the force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and calibrated. The root mean square (RMS) error of the calibration over the range of 0–10 N was 0.318 N compared to the theoretical model, which has proven sufficient for robot control and teleoperation. PMID:25126153
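    The calibration check reported above, an RMS error over a 0–10 N range, amounts to comparing sensor readings against a reference model. A minimal sketch with synthetic readings (the readings are invented; only the RMS formula follows the text):

    ```python
    import numpy as np

    # Compare hypothetical sensor output against the reference force over
    # the 0-10 N range and report the root-mean-square error.
    applied = np.linspace(0.0, 10.0, 21)             # reference force, newtons
    measured = applied + 0.05 * np.sin(applied)      # synthetic sensor output

    rms_error = np.sqrt(np.mean((measured - applied) ** 2))
    ```

    A calibration of this kind would be judged adequate when the RMS error stays well below the force resolution required for teleoperation (0.318 N in the paper).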

  9. Park Smart

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The Parking Garage Automation System (PGAS) is based on a technology developed by a NASA-sponsored project called Robot sensorSkin(TM). Merritt Systems, Inc., of Orlando, Florida, teamed up with NASA to improve robots working with critical flight hardware at Kennedy Space Center in Florida. The system, containing smart sensor modules and a flexible printed circuit board skin, helps robots steer clear of obstacles using a proximity sensing system. Advancements in the sensor designs are being applied to various commercial applications, including the PGAS. The system includes a smartSensor(TM) network installed around and within public parking garages to autonomously guide motorists to open facilities, and once within, to free parking spaces. The sensors use non-invasive reflective-ultrasonic technology for high accuracy, high reliability, and low maintenance. The system is remotely programmable: it can be tuned to site-specific requirements, has variable range capability, and allows remote configuration, monitoring, and diagnostics. The sensors are immune to interference from metallic construction materials, such as rebar and steel beams. Inside the garage, smart routing signs mounted overhead or on poles in front of each row of parking spots guide the motorist precisely to free spaces.

  10. Localization and Mapping Using Only a Rotating FMCW Radar Sensor

    PubMed Central

    Vivet, Damien; Checchin, Paul; Chapuis, Roland

    2013-01-01

    Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surroundings. The consequence of using such a sensor in high-speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or treated as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and the Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed. PMID:23567523
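    The distance and velocity measurements such a radar delivers come from the beat frequencies of its frequency sweep. The back-of-the-envelope sketch below uses the standard triangular-chirp FMCW relations; the carrier, bandwidth, and ramp-time parameters are illustrative assumptions, not the actual IMPALA specification.

    ```python
    # Standard triangular-chirp FMCW relations: the up- and down-ramp beat
    # frequencies separate the range-induced component from the Doppler shift.
    # All parameters below are assumed for illustration.
    C = 3.0e8          # speed of light, m/s
    F_CARRIER = 24e9   # assumed carrier frequency, Hz
    BANDWIDTH = 250e6  # assumed sweep bandwidth, Hz
    T_RAMP = 1e-3      # assumed ramp duration, s

    def range_and_velocity(f_up, f_down):
        """Recover target range (m) and radial velocity (m/s) from the
        up-ramp and down-ramp beat frequencies (Hz)."""
        f_range = 0.5 * (f_up + f_down)          # range-induced component
        f_doppler = 0.5 * (f_down - f_up)        # Doppler-induced component
        rng = C * T_RAMP * f_range / (2.0 * BANDWIDTH)
        vel = C * f_doppler / (2.0 * F_CARRIER)  # v = lambda * f_d / 2
        return rng, vel
    ```

    It is exactly this Doppler component, usually discarded as noise, that the paper turns into a source of ego-motion information.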

  11. Localization and mapping using only a rotating FMCW radar sensor.

    PubMed

    Vivet, Damien; Checchin, Paul; Chapuis, Roland

    2013-04-08

    Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surroundings. The consequence of using such a sensor in high-speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or treated as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and the Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed.

  12. International Assessment of Unmanned Ground Vehicles

    DTIC Science & Technology

    2008-02-01

    Research relevant to ground robotics includes • multi-sensor data fusion • stereovision • dedicated robots, including legged robots, tracked robots... Technology Laboratory has developed several mobile robots with legged, wheeled, rolling, rowing, and hybrid locomotion. Areas of particular emphasis... 117 UK Department of Trade and Industry (DTI) Global Watch Mission. November 2006. Mechatronics in Russia. 118 CRDI Web Site: http

  13. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
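    The landmark-based class of pose-determination approaches can be illustrated with a simple trilateration sketch: fixing the robot's (x, y) position from ranges to known artificial landmarks, solved as a linear least-squares problem. The landmark map and measured ranges below are invented for illustration.

    ```python
    import numpy as np

    # Three known artificial landmarks (assumed map coordinates, metres).
    landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

    def trilaterate(ranges):
        """Linearize r_i^2 = (x - x_i)^2 + (y - y_i)^2 by subtracting the
        first landmark's equation, leaving a linear system in (x, y)."""
        x0, y0 = landmarks[0]
        A, b = [], []
        for (xi, yi), ri in zip(landmarks[1:], ranges[1:]):
            A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
            b.append(ranges[0] ** 2 - ri ** 2
                     + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
        return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]

    # Ranges measured from a hypothetical true position (3, 4):
    pos = trilaterate([5.0, 65 ** 0.5, 45 ** 0.5])
    ```

    With more than three landmarks the same least-squares formulation averages out range noise, which is why artificial-landmark systems typically deploy redundant beacons.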

  14. Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manges, W.W.; Hamel, W.R.; Weisbin, C.R.

    1988-01-01

    The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) in the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 Vax 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.

  15. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System

    PubMed Central

    Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan

    2017-01-01

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures moving stability and keeps the distances between the four wheels invariant. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964
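    The omni-directional motion of a four-Mecanum-wheel base follows the classic inverse-kinematics mapping from body velocity to wheel speeds. The sketch below uses one common roller-orientation convention; the wheel radius and half length/width values are illustrative, not the paper's dimensions.

    ```python
    import numpy as np

    # Assumed geometry: wheel radius R, half-length L, half-width W (metres).
    R, L, W = 0.05, 0.25, 0.20

    def wheel_speeds(vx, vy, wz):
        """Map body velocity (vx forward, vy left, wz yaw rate) to wheel
        angular velocities [front-left, front-right, rear-left, rear-right],
        under one common Mecanum roller convention."""
        k = L + W
        return np.array([
            vx - vy - k * wz,   # front-left
            vx + vy + k * wz,   # front-right
            vx + vy - k * wz,   # rear-left
            vx - vy + k * wz,   # rear-right
        ]) / R

    # Pure sideways translation produces the +/- wheel pattern that is
    # characteristic of Mecanum omni-directional motion.
    speeds = wheel_speeds(0.0, 0.5, 0.0)
    ```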

  16. A Demonstrator Intelligent Scheduler For Sensor-Based Robots

    NASA Astrophysics Data System (ADS)

    Perrotta, Gabriella; Allen, Charles R.; Shepherd, Andrew J.

    1987-10-01

    The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile sensing gripper system is described. The on-line module is supported by two off-line software modules which provide a procedural assembly-constraints language that allows the assembly task to be defined. This input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, hence allowing a low programming overhead for the designer, who must describe the assembly sequence. Components are selected for pick-and-place robot movement based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is multi-path scheduling as described by Fox. The system is seen to permit robot assembly in a less constrained parts-presentation environment, making full use of the sensory detail available on the robot.

  17. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    PubMed

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures moving stability and keeps the distances between the four wheels invariant. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.

  18. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    PubMed Central

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636

  19. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured in a calibration target. The concentric circle is employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid pinhole model and the MLPNN are used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed novel calibration approach can achieve a highly accurate model of the structured light vision sensor.
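    The hybrid-model idea above, calibrating a parametric camera model first and then fitting a second model to whatever error remains, can be sketched compactly. Here a polynomial least-squares fit stands in for the paper's multilayer perceptron, and all "measurements" are synthetic; only the residual-correction structure follows the text.

    ```python
    import numpy as np

    # Synthetic 1-D stand-in for image coordinates: the parametric model
    # misses a cubic distortion term, and observations carry small noise.
    rng = np.random.default_rng(0)
    u_true = np.linspace(-1.0, 1.0, 50)                 # ideal coordinates
    u_model = u_true + 0.05 * u_true ** 3               # parametric prediction
    u_meas = u_true + 0.002 * rng.standard_normal(50)   # observed coordinates

    residual = u_meas - u_model                          # what the model misses
    coeffs = np.polyfit(u_model, residual, deg=3)        # residual model
    u_corrected = u_model + np.polyval(coeffs, u_model)  # hybrid prediction

    err_before = np.sqrt(np.mean((u_model - u_meas) ** 2))
    err_after = np.sqrt(np.mean((u_corrected - u_meas) ** 2))
    ```

    The design choice is the same as in the paper: keep an interpretable parametric model and delegate only its systematic residuals to a learned corrector.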

  20. Fabrication of strain gauge based sensors for tactile skins

    NASA Astrophysics Data System (ADS)

    Baptist, Joshua R.; Zhang, Ruoshi; Wei, Danming; Saadatzi, Mohammad Nasser; Popa, Dan O.

    2017-05-01

    Fabricating cost-effective, reliable and functional sensors for electronic skins has been a challenging undertaking for the last several decades. Applications of such skins include haptic interfaces, robotic manipulation, and physical human-robot interaction. Much of our recent work has focused on producing compliant sensors that can be easily formed around objects to sense normal, tension, or shear forces. Our past designs have involved the use of flexible sensors and interconnects fabricated on Kapton substrates, and piezoresistive inks that are 3D printed using Electro Hydro Dynamic (EHD) jetting onto interdigitated electrode (IDE) structures. However, EHD print heads require a specialized nozzle and the application of a high-voltage electric field, for which tuning process parameters can be difficult depending on the choice of inks and substrates. Therefore, in this paper we explore sensor fabrication techniques using a novel wet lift-off photolithographic technique for patterning the base polymer piezoresistive material, specifically Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate), or PEDOT:PSS. Fabricated sensors are electrically and thermally characterized, and temperature-compensated designs are proposed and validated. Packaging techniques for sensors in polymer encapsulants are proposed and demonstrated to produce a tactile interface device for a robot.

  1. Integrated High-Speed Torque Control System for a Robotic Joint

    NASA Technical Reports Server (NTRS)

    Davis, Donald R. (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Valvo, Michael C. (Inventor); Askew, R. Scott (Inventor)

    2013-01-01

    A control system for achieving high-speed torque for a joint of a robot includes a printed circuit board assembly (PCBA) having a collocated joint processor and high-speed communication bus. The PCBA may also include a power inverter module (PIM) and local sensor conditioning electronics (SCE) for processing sensor data from one or more motor position sensors. Torque control of a motor of the joint is provided via the PCBA as a high-speed torque loop. Each joint processor may be embedded within or collocated with the robotic joint being controlled. Collocation of the joint processor, PIM, and high-speed bus may increase noise immunity of the control system, and the localized processing of sensor data from the joint motor at the joint level may minimize bus cabling to and from each control node. The joint processor may include a field programmable gate array (FPGA).
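    The high-speed torque loop the PCBA runs can be pictured as a fast discrete control loop at the joint. The sketch below is a generic PI torque loop against a first-order motor lag; the gains, sample period, and plant model are invented stand-ins, not values from the patent.

    ```python
    # Generic discrete PI torque loop; all numbers are illustrative assumptions.
    KP, KI, DT = 2.0, 40.0, 0.001   # assumed gains and a 1 kHz sample period
    MOTOR_LAG = 0.01                 # assumed 10 ms first-order motor lag, s

    def run_torque_loop(target, steps=1000):
        """Track a torque setpoint with a PI controller driving a simple lag."""
        torque, integ = 0.0, 0.0
        for _ in range(steps):
            err = target - torque
            integ += err * DT
            command = KP * err + KI * integ       # PI control law
            torque += DT * (command - torque) / MOTOR_LAG
        return torque
    ```

    Collocating this loop with the joint, as the patent describes, keeps the sample period short and the sensor wiring local, which is what makes such high loop rates practical.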

  2. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots

    PubMed Central

    Strait, Megan K.; Floerke, Victoria A.; Ju, Wendy; Maddox, Keith; Remedios, Jessica D.; Jung, Malte F.; Urry, Heather L.

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley”—a phenomenon in which highly humanlike entities provoke aversion in human observers—has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N_agents = 60) to conduct an experimental test (N_participants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding—suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness. PMID:28912736

  3. Enhanced control and sensing for the REMOTEC ANDROS Mk VI robot. CRADA final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spelt, P.F.; Harvey, H.W.

    1998-08-01

    This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.

  4. Autonomous caregiver following robotic wheelchair

    NASA Astrophysics Data System (ADS)

    Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary

    2011-12-01

    In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore, we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in the communication aspect. From this point of view, we have proposed a robotic wheelchair that moves side by side with a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver by using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers. Images are captured using a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image processing techniques, converted into voltage levels through a MAX 232 level converter, and passed serially to the microcontroller unit; the ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to detect obstacles, while in manual mode the keypad is used to operate the wheelchair. The microcontroller unit runs predefined C code, according to which the robot connected to it is controlled. The robot's several motors are activated through motor drivers, which are switches that turn the motors on and off according to the control signals given by the microcontroller unit.
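    The mode-selection behavior described above reduces to a small decision rule: in automatic mode an ultrasonic range reading gates motion, in manual mode a keypad command does. The sketch below is a toy illustration; the 30 cm threshold and command names are assumptions, not values from the paper.

    ```python
    # Toy sketch of the automatic/manual mode-switch logic; the threshold
    # and command vocabulary are illustrative assumptions.
    STOP_DISTANCE_CM = 30.0

    def drive_command(mode, ultrasonic_cm=None, keypad=None):
        """Return 'forward' or 'stop' for the motor driver."""
        if mode == "auto":
            if ultrasonic_cm is not None and ultrasonic_cm < STOP_DISTANCE_CM:
                return "stop"            # obstacle detected ahead
            return "forward"
        if mode == "manual":
            return "forward" if keypad == "forward" else "stop"
        return "stop"                    # unknown mode: fail safe
    ```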

  5. Low Cost Multi-Sensor Robot Laser Scanning System and its Accuracy Investigations for Indoor Mapping Application

    NASA Astrophysics Data System (ADS)

    Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.

    2017-11-01

    In order to solve the automation of the 3D indoor mapping task, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The multi-sensor robot laser scanning system includes a panorama camera, a laser scanner, an inertial measurement unit, etc., which are calibrated and synchronized together to achieve simultaneous collection of 3D indoor data. Experiments are undertaken in a typical indoor scene and the data generated by the proposed system are compared with ground truth data collected by a TLS scanner, showing that 99.2% of points deviate by less than 0.25 m, which demonstrates the applicability and precision of the system in indoor mapping applications.
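    The accuracy figure quoted above is the fraction of mapped points whose deviation from the TLS ground truth falls below a tolerance. A minimal sketch of that metric, using synthetic deviation samples rather than the paper's data:

    ```python
    import numpy as np

    # Synthetic point-to-ground-truth deviations (metres), for illustration only.
    rng = np.random.default_rng(1)
    deviations = np.abs(rng.normal(0.0, 0.08, 10_000))

    def fraction_within(devs, tol=0.25):
        """Share of points whose deviation is below `tol` metres."""
        return float(np.mean(devs < tol))

    score = fraction_within(deviations)   # analogous to the paper's 99.2%
    ```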

  6. Application of the HeartLander Crawling Robot for Injection of a Thermally Sensitive Anti-Remodeling Agent for Myocardial Infarction Therapy

    PubMed Central

    Chapman, Michael P.; López González, Jose L.; Goyette, Brina E.; Fujimoto, Kazuro L.; Ma, Zuwei; Wagner, William R.; Zenati, Marco A.; Riviere, Cameron N.

    2011-01-01

    The injection of a mechanical bulking agent into the left ventricular (LV) wall of the heart has shown promise as a therapy for maladaptive remodeling of the myocardium after myocardial infarct (MI). The HeartLander robotic crawler presented itself as an ideal vehicle for minimally invasive, highly accurate epicardial injection of such an agent. Use of the optimal bulking agent, a thermosetting hydrogel developed by our group, presents a number of engineering obstacles, including cooling of the miniaturized injection system while the robot is navigating in the warm environment of a living patient. We present herein a demonstration of an integrated miniature cooling and injection system in the HeartLander crawling robot, which is fully biocompatible and capable of multiple injections of a thermosetting hydrogel into dense animal tissue while the entire system is immersed in a 37°C water bath. PMID:21096276

  7. Mobile robots exploration through cnn-based reinforcement learning.

    PubMed

    Tai, Lei; Liu, Ming

    2016-01-01

    Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional neural network model. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieved exploration and obstacle avoidance abilities in several different simulated environments. This is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
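    The learning rule behind a deep Q-network is the standard Q-learning update; only the function approximator differs. The sketch below applies that same update in tabular form to a tiny one-dimensional corridor, a deliberately simplified stand-in for the paper's CNN-features-plus-DQN setup; all environment details are invented.

    ```python
    import random

    # Tabular Q-learning on a toy 1-D corridor: reaching the far end pays 1.
    random.seed(0)
    N_STATES, ACTIONS = 6, ("right", "left")
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Move one cell; reward 1 on reaching the goal state."""
        nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
        return nxt, 1.0 if nxt == N_STATES - 1 else 0.0

    for _ in range(500):                      # training episodes
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection, as in DQN training.
            a = random.choice(ACTIONS) if random.random() < EPSILON \
                else max(ACTIONS, key=lambda a: Q[(s, a)])
            s2, r = step(s, a)
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    # Greedy policy after training: walk toward the corridor's end.
    policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
    ```

    A DQN replaces the table with a network over CNN features of the depth image, but the temporal-difference target `r + GAMMA * max Q(s', ·)` is the same.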

  8. Advanced wireless mobile collaborative sensing network for tactical and strategic missions

    NASA Astrophysics Data System (ADS)

    Xu, Hao

    2017-05-01

    In this paper, an advanced wireless mobile collaborative sensing network is developed. By properly combining wireless sensor networks, emerging mobile robots, and multi-antenna sensing/communication techniques, we demonstrate the superiority of the developed sensing network. Concretely, heterogeneous mobile robots, including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), are equipped with multi-modal sensors and wireless transceiver antennas. Through real-time collaborative formation control, multiple mobile robots can form the formation that provides the most accurate sensing results. Forming multiple mobile robots into a team can also constitute a multiple-input multiple-output (MIMO) communication system that provides a reliable and high-performance communication network.

  9. Dynamics, control and sensor issues pertinent to robotic hands for the EVA retriever system

    NASA Technical Reports Server (NTRS)

    Mclauchlan, Robert A.

    1987-01-01

    Basic dynamics, sensor, control, and related artificial intelligence issues pertinent to smart robotic hands for the Extra Vehicular Activity (EVA) Retriever system are summarized and discussed. These smart hands are to be used as end effectors on arms attached to manned maneuvering units (MMU). The Retriever robotic systems, comprising an MMU, an arm, and smart hands, are being developed to aid crewmen in the performance of routine EVA tasks, including tool and object retrieval. The ultimate goal is to enhance the effectiveness of EVA crewmen.

  10. A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment

    PubMed Central

    Gao, Junyao; Zhao, Fangzhou; Liu, Yi

    2017-01-01

    This paper introduces a search-and-rescue robot system used for remote sensing of the underground coal mine environment, which is composed of an operating control unit and two mobile robots with explosion-proof and waterproof functions. This robot system is designed to observe and collect information about the coal mine environment through remote control, so the system as a whole can be regarded as a multifunction sensor that realizes remote sensing. When the robot system detects danger, it sends out signals to warn rescuers to keep away. Each robot carries two gas sensors, two cameras, a two-way audio system, a 1 km-long fiber-optic cable for communication, and a mechanical explosion-proof manipulator. Notably, the manipulator is a novel explosion-proof design for clearing obstacles: it has three degrees of freedom but is driven by only two motors. Furthermore, the two robots can communicate in series over 2 km with the operating control unit. The development of the robot system may provide a reference for developing future search-and-rescue systems. PMID:29065560

  11. The Structure, Design, and Closed-Loop Motion Control of a Differential Drive Soft Robot.

    PubMed

    Wu, Pang; Jiangbei, Wang; Yanqiong, Fei

    2018-02-01

    This article presents the structure, design, and motion control of an inchworm-inspired pneumatic soft robot that can perform differential movement. The robot mainly consists of two columns of pneumatic multi-airbag actuators, one sensor, one baseboard, front feet, and rear feet. By varying the inflation times of the left and right actuators, the robot can perform both linear and turning movements. The actuators are composed of multiple airbags, and the design of the airbags is analyzed. To deal with the nonlinear behavior of the soft robot, we use radial basis function neural networks to train its turning ability on three different surfaces and build a mathematical model relating the coefficient of friction, the deflection angle, and the inflation time. We then establish a closed-loop automatic control model using a three-axis electronic compass sensor. Finally, the control model is verified by linear and turning movement experiments, which show that the robot can complete both movements under the closed-loop control system.
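The RBF-network fit described above can be sketched with plain least squares over Gaussian basis functions. The training data below are invented for illustration (the paper's model was trained on measurements from three real surfaces), and the target function, kernel width, and number of centers are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_features(X, centers, gamma=2.0):
    # Gaussian radial basis functions around fixed centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic training data: inputs are (coefficient of friction, desired
# deflection angle); target is inflation time in seconds. The mapping
# below is an invented placeholder, not the paper's measured model.
X = rng.uniform([0.2, 0.0], [0.8, 90.0], size=(200, 2))
X_norm = (X - X.min(0)) / (X.max(0) - X.min(0))
y = 0.5 + 2.0 * X_norm[:, 1] * (1.0 - 0.5 * X_norm[:, 0])

# Fit the output weights of the RBF network by least squares.
centers = rng.uniform(0.0, 1.0, size=(25, 2))
Phi = rbf_features(X_norm, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

The fitted network then answers the control question in the abstract: given a surface's friction coefficient and a desired deflection angle, predict how long to inflate each actuator column.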

  12. Caregivers' requirements for in-home robotic agent for supporting community-living elderly subjects with cognitive impairment.

    PubMed

    Faucounau, Véronique; Wu, Ya-Huei; Boulay, Mélodie; Maestrutti, Marina; Rigaud, Anne-Sophie

    2009-01-01

    Older people are an important and growing sector of the population. This demographic change raises the profile of frailty and disability within the world's population, and many older people need assistance to perform daily activities. Most of this support is given by family members, who are now a new target in the therapeutic approach. With advances in technology, robotics is becoming increasingly important as a means of supporting older people at home. To ensure the technology is appropriate, 30 caregivers filled out a self-administered questionnaire including questions on their needs in supporting their proxy and their requirements concerning the robotic agent's functions and modes of action. This paper points out the functions to be integrated into the robot in order to support caregivers in the care of their proxy. The results also show that caregivers have a positive attitude towards robotic agents.

  13. Robotics research projects report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsia, T.C.

    The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)

  14. Performance Evaluation of Intelligent Systems at the National Institute of Standards and Technology (NIST)

    DTIC Science & Technology

    2011-03-01

    past few years, including performance evaluation of emergency response robots, sensor systems on unmanned ground vehicles, speech-to-speech translation... emergency response robots; intelligent systems; mixed palletizing, testing, simulation; robotic vehicle perception systems; search and rescue robots... ranging from autonomous vehicles to urban search and rescue robots to speech translation and manufacturing systems. The evaluations have occurred in

  15. Aerial Explorers and Robotic Ecosystems

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg

    2004-01-01

    A unique bio-inspired approach to autonomous aerial vehicles, a.k.a. aerial explorer technology, is discussed. The work is focused on defining and studying aerial explorer mission concepts, both as an individual robotic system and as a member of a small robotic "ecosystem." Members of this robotic ecosystem include the aerial explorer, air-deployed sensors and robotic symbiotes, and other assets such as rovers, landers, and orbiters.

  16. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  17. A tesselated probabilistic representation for spatial robot perception and navigation

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto

    1989-01-01

    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
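The incremental Bayesian update at the heart of the Occupancy Grid is commonly implemented in log-odds form. The following sketch applies it along a single 1-D range beam; the sensor-model constants and cell size are illustrative choices, not Elfes' exact values.

```python
import numpy as np

# Log-odds evidence values for the inverse sensor model (illustrative).
L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0

def update_beam(logodds, cell_size, z, z_max=10.0, thickness=1):
    # Cells before the measured range are evidence of "free";
    # the cell(s) at the range reading are evidence of "occupied".
    hit = int(z / cell_size)
    logodds[:hit] += L_FREE
    if z < z_max:
        logodds[hit:hit + thickness] += L_OCC
    return logodds

def to_probability(logodds):
    # Convert accumulated log-odds back to occupancy probability.
    return 1.0 / (1.0 + np.exp(-logodds))

grid = np.full(20, L_PRIOR)        # unknown cells: p = 0.5
for _ in range(3):                 # several consistent range readings
    grid = update_beam(grid, cell_size=0.5, z=5.0)

p = to_probability(grid)
# cells before 5 m trend toward free, the cell at 5 m toward occupied,
# cells beyond the reading stay at the unknown prior of 0.5
```

Because evidence accumulates additively in log-odds space, readings from several sensors and viewpoints can be fused into the same grid exactly as the abstract describes.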

  18. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors in terms of kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration of the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  19. Soft Ultrathin Electronics Innervated Adaptive Fully Soft Robots.

    PubMed

    Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Kim, Hojin; Rao, Zhoulyu; Li, Yuhang; Chen, Weiqiu; Song, Jizhou; Verduzco, Rafael; Yu, Cunjiang

    2018-03-01

    Soft robots outperform conventional rigid robots through significantly enhanced safety, adaptability, and complex motions. The development of fully soft robots, especially ones built entirely from smart soft materials to mimic soft animals, is still nascent. Moreover, existing soft robots cannot yet adapt themselves to the surrounding environment, i.e., sense and respond adaptively, as animals do. Here, fully soft robots innervated with compliant ultrathin sensing and actuating electronics are reported; they can sense the environment and perform soft-bodied crawling adaptively, mimicking an inchworm. The soft robots are constructed with actuators of open-mesh-shaped ultrathin deformable heaters, sensors of single-crystal Si optoelectronic photodetectors, and a thermally responsive artificial muscle of carbon-black-doped liquid-crystal elastomer (LCE-CB) nanocomposite. The results demonstrate that adaptive crawling locomotion can be realized through the conjugation of sensing and actuation: the sensors sense the environment and the actuators respond correspondingly, controlling the locomotion autonomously by regulating the deformation of the LCE-CB bimorphs. The strategy of innervating soft sensing and actuating electronics with artificial muscles paves the way for the development of smart autonomous soft robots. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Development of haptic system for surgical robot

    NASA Astrophysics Data System (ADS)

    Gang, Han Gyeol; Park, Jiong Min; Choi, Seung-Bok; Sohn, Jung Woo

    2017-04-01

    In this paper, a new type of haptic system for surgical robot applications is proposed and its performance is evaluated experimentally. The proposed haptic system consists of an effective master device and a precision slave robot. The master device has 3-DOF rotational motion matching human wrist motion. It has a lightweight structure with a gyro sensor for position measurement and three small-sized MR brakes for repulsive torque generation. The slave robot achieves 3-DOF rotational motion using servomotors and a five-bar linkage, and a torque sensor is used to measure resistive torque. It has been experimentally demonstrated that the proposed haptic system performs well in tracking control of desired position and repulsive torque. It can be concluded that the proposed haptic system can be effectively applied to surgical robot systems in the field.

  1. 2015 Marine Corps Security Environment Forecast: Futures 2030-2045

    DTIC Science & Technology

    2015-01-01

    The technologies that make the iPhone “smart” were publicly funded: the Internet, wireless networks, the global positioning system, microelectronics... Energy Revolution (63 percent); Internet of Things (ubiquitous sensors embedded in interconnected computing devices) (50 percent); “Sci-Fi... Neuroscience & artificial intelligence - Sensors/control systems - Power & energy - Human-robot interaction. Robots/autonomous systems will become part of the

  2. Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)

    NASA Technical Reports Server (NTRS)

    Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey

    1990-01-01

    Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge-base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules that control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.

  3. The problem with multiple robots

    NASA Technical Reports Server (NTRS)

    Huber, Marcus J.; Kenny, Patrick G.

    1994-01-01

    The issues that can arise in research associated with multiple, robotic agents are discussed. Two particular multi-robot projects are presented as examples. This paper was written in the hope that it might ease the transition from single to multiple robot research.

  4. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures

    PubMed Central

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-01-01

    Background The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading the emotions in human agents. Methodology Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased the response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777

  5. Tactile Robotic Topographical Mapping Without Force or Contact Sensors

    NASA Technical Reports Server (NTRS)

    Burke, Kevin; Melko, Joseph; Krajewski, Joel; Cady, Ian

    2008-01-01

    A method of topographical mapping of a local solid surface within the range of motion of a robot arm is based on detection of contact between the surface and the end effector (the fixture or tool at the tip of the robot arm). The method was conceived to enable mapping of local terrain by an exploratory robot on a remote planet, without need to incorporate delicate contact switches, force sensors, a vision system, or other additional, costly hardware. The method could also be used on Earth for determining the size and shape of an unknown surface in the vicinity of a robot, perhaps in an unanticipated situation in which other means of mapping (e.g., stereoscopic imaging or laser scanning with triangulation) are not available. The method uses control software modified to utilize the inherent capability of the robotic control system to measure the joint positions, the rates of change of the joint positions, and the electrical current demanded by the robotic arm joint actuators. The system utilizes these coordinate data and the known robot-arm kinematics to compute the position and velocity of the end effector, move the end effector along a specified trajectory, place the end effector at a specified location, and measure the electrical currents in the joint actuators. Since the joint actuator current is approximately proportional to the actuator forces and torques, a sudden rise in joint current, combined with a slowing of the joint, is a possible indication of actuator stall and surface contact. Hence, even though the robotic arm is not equipped with contact sensors, it is possible to sense contact (albeit with reduced sensitivity) as the end effector becomes stalled against a surface that one seeks to measure.
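The stall-based contact test described above amounts to watching for a sharp current rise coincident with a joint slowdown. A minimal sketch follows; the thresholds and signal traces are made up for illustration, since the abstract does not give the flight software's actual criteria.

```python
def detect_contact(currents, velocities, i_rise=0.5, v_stall=0.02, window=3):
    """Flag surface contact when actuator current rises sharply while the
    joint slows, using only signals the controller already measures.
    Thresholds here are illustrative, not from the NASA method."""
    for k in range(window, len(currents)):
        di = currents[k] - currents[k - window]          # recent current rise
        if di > i_rise and abs(velocities[k]) < v_stall:  # joint nearly stopped
            return k          # sample index where contact is inferred
    return None

# Simulated trace: free motion, then the end effector stalls on a surface.
currents   = [1.0, 1.0, 1.1, 1.1, 1.2, 2.1, 2.4, 2.6]   # actuator amps
velocities = [0.5, 0.5, 0.5, 0.5, 0.4, 0.05, 0.01, 0.0] # joint rad/s
```

Sweeping the end effector over the terrain and recording the arm pose at each inferred contact yields the topographical map without any dedicated contact or force sensor.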

  6. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    PubMed Central

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  7. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    NASA Astrophysics Data System (ADS)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprised of semiautonomous ground and aerial robots for source-localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path-planning algorithms to ensure that obstacles are avoided and that operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view; in addition, a sensor-data-fused AR view helps users pinpoint source information and supports the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source-detection tasks. Results show that the novel augmented reality multi-robot controls (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target-detection missions. Usability tests and operator workload analysis are also reported.
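The "click a target, get a collision-free path" behavior attributed to each robot can be illustrated with breadth-first search over an occupancy grid. The abstract describes the planner only as "advanced path planning", so BFS here is a stand-in baseline, and the grid is invented.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle):
    returns a collision-free list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []                      # walk the predecessor chain back
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# A wall forces the robot to detour around the obstacle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

In the interface described above, the operator's selected target becomes `goal`, and the returned path is handed to the robot while the operator attends to higher-level tasks.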

  8. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  9. Global Coverage Measurement Planning Strategies for Mobile Robots Equipped with a Remote Gas Sensor

    PubMed Central

    Arain, Muhammad Asif; Trincavelli, Marco; Cirillo, Marcello; Schaffernicht, Erik; Lilienthal, Achim J.

    2015-01-01

    The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions. PMID:25803707

  10. Global coverage measurement planning strategies for mobile robots equipped with a remote gas sensor.

    PubMed

    Arain, Muhammad Asif; Trincavelli, Marco; Cirillo, Marcello; Schaffernicht, Erik; Lilienthal, Achim J

    2015-03-20

    The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions.
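For intuition about the sensor-placement subproblem, the sketch below uses the classical greedy set-cover heuristic: repeatedly choose the sensing pose that covers the most still-uncovered cells. This is explicitly not the authors' convex-relaxation method, only a well-known baseline, and the candidate poses and coverage sets are invented.

```python
def greedy_coverage(candidates, cells_needed):
    """Greedy set-cover heuristic for choosing remote-sensing poses:
    repeatedly pick the candidate pose covering the most uncovered cells.
    The paper solves this placement problem via convex relaxation; the
    greedy rule below is a standard baseline, not their algorithm."""
    uncovered = set(cells_needed)
    plan = []
    while uncovered:
        best = max(candidates, key=lambda pose: len(candidates[pose] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break                      # remaining cells are unreachable
        plan.append(best)
        uncovered -= gain
    return plan, uncovered

# Candidate sensing poses and the map cells each one can measure.
candidates = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {5, 6},
}
plan, missed = greedy_coverage(candidates, cells_needed=range(1, 7))
```

Ordering the chosen poses into a short tour then gives the exploration plan for the robot, which is the second half of the problem the paper addresses.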

  11. RoboCup-Rescue: an international cooperative research project of robotics and AI for the disaster mitigation problem

    NASA Astrophysics Data System (ADS)

    Tadokoro, Satoshi; Kitano, Hiroaki; Takahashi, Tomoichi; Noda, Itsuki; Matsubara, Hitoshi; Shinjoh, Atsushi; Koto, Tetsuo; Takeuchi, Ikuo; Takahashi, Hironao; Matsuno, Fumitoshi; Hatayama, Mitsunori; Nobe, Jun; Shimada, Susumu

    2000-07-01

    This paper introduces the RoboCup-Rescue Simulation Project, a contribution to the disaster mitigation, search, and rescue problem. A comprehensive urban disaster simulator is constructed on distributed computers. Heterogeneous intelligent agents such as fire fighters, victims, and volunteers conduct search and rescue activities in this virtual disaster world. A real-world interface integrates the various sensor systems and infrastructure controllers of real cities with the simulator. Real-time simulation is synchronized with actual disasters, computing the complex relationships between various damage factors and agent behaviors. A mission-critical man-machine interface provides portability and robustness for disaster mitigation centers, along with augmented-reality interfaces for rescue in real disasters. It also provides a virtual-reality training function for the public. This diverse spectrum of RoboCup-Rescue activities contributes to the creation of a safer social system.

  12. Design Principles for Rapid Prototyping Force Sensors using 3D Printing.

    PubMed

    Kesner, Samuel B; Howe, Robert D

    2011-07-21

    Force sensors provide critical information for robot manipulators, manufacturing processes, and haptic interfaces. Commercial force sensors, however, are generally not adapted to specific system requirements, resulting in sensors with excess size, cost, and fragility. To overcome these issues, 3D printers can be used to create components for the quick and inexpensive development of force sensors. Limitations of this rapid prototyping technology, however, require specialized design principles. In this paper, we discuss techniques for rapidly developing simple force sensors, including selecting and attaching metal flexures, using inexpensive and simple displacement transducers, and 3D printing features to aid in assembly. These design methods are illustrated through the design and fabrication of a miniature force sensor for the tip of a robotic catheter system. The resulting force sensor prototype can measure forces with an accuracy of as low as 2% of the 10 N measurement range.
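As a worked example of the stated accuracy figure (2% of a 10 N range is a worst-case error of 0.2 N), the snippet below fits a linear calibration from a displacement-transducer reading to applied force and checks the residual. The calibration data are made up for illustration and are not from the paper.

```python
import numpy as np

# Made-up calibration data for a prototype sensor: known applied forces
# and the corresponding raw displacement-transducer readings.
applied_force = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])      # N
raw_reading   = np.array([0.02, 0.41, 0.79, 1.22, 1.60, 2.01])  # V

# Least-squares linear calibration: force = gain * reading + offset.
gain, offset = np.polyfit(raw_reading, applied_force, 1)
predicted = gain * raw_reading + offset

worst_error = np.max(np.abs(predicted - applied_force))         # N
within_spec = worst_error <= 0.02 * 10.0                        # 0.2 N budget
```

The same check, run against real calibration data, is how a printed-flexure prototype would be verified against the 2%-of-range accuracy claim.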

  13. CMMAD Usability Case Study in Support of Countermine and Hazard Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor G. Walker; David I. Gertman

    2010-04-01

    During field trials, operator usability data were collected in support of lane-clearing missions and hazard sensing for two robot platforms with Robot Intelligence Kernel (RIK) software and sensor scanning payloads onboard. The tests featured autonomous and shared robot autonomy levels, where tasking of the robot used a graphical interface featuring mine locations and sensor readings. The goal of this work was to provide insights that could be used to further technology development. The efficacy of countermine systems in terms of mobility, search, path planning, detection, and localization was assessed. Findings from objective and subjective operator interaction measures are reviewed, along with commentary from soldiers who took part in the study and strongly endorse the system.

  14. Insect-Inspired Optical-Flow Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven

    2005-01-01

    Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: the concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
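The optical-flow principle these sensors exploit reduces to the brightness-constancy equation I_x v + I_t = 0. The 1-D sketch below recovers a known pixel shift between two synthetic frames by least squares; it illustrates the textbook gradient method, not NASA's or the mouse chips' actual implementation.

```python
import numpy as np

# Two synthetic 1-D "frames": the second is the first shifted by a few pixels.
x = np.linspace(0, 4 * np.pi, 400)
frame0 = np.sin(x)
true_shift = 3                            # pixels of motion between frames
frame1 = np.roll(frame0, true_shift)

# Brightness constancy: Ix * v + It = 0, solved in the least-squares sense
# over all pixels to estimate the flow v in pixels per frame.
Ix = np.gradient(frame0)                  # spatial intensity derivative
It = frame1 - frame0                      # temporal intensity derivative
v = -np.sum(Ix * It) / np.sum(Ix * Ix)
```

An optical-mouse chip performs essentially this computation in hardware over a 2-D patch; the flying robots then map the recovered flow to speed, altitude, and obstacle cues.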

  15. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    PubMed Central

    Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  16. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents a state of the art of estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot navigate successfully in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.
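The OOS problem surveyed above can be illustrated with a deliberately simple scalar Kalman filter that keeps its measurement history sorted by timestamp and re-filters the whole history whenever a delayed measurement arrives. This brute-force strategy is only a sketch; the paper's point is precisely that cheaper algorithms exist. All parameter values below are assumptions.

```python
import bisect

class OOSFilter:
    """Scalar Kalman filter (random-walk state model) that handles
    out-of-sequence measurements by keeping the measurement history
    sorted by timestamp and re-filtering from scratch on each arrival.
    Simple but memory- and compute-heavy; the paper compares cheaper
    alternatives."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.5):
        self.x0, self.p0, self.q, self.r = x0, p0, q, r
        self.meas = []  # (timestamp, value), kept sorted by timestamp

    def add_measurement(self, t, z):
        bisect.insort(self.meas, (t, z))  # delayed samples slot into order
        return self._refilter()

    def _refilter(self):
        x, p = self.x0, self.p0
        for _, z in self.meas:
            p += self.q                # predict step
            k = p / (p + self.r)       # Kalman gain
            x += k * (z - x)           # update step
            p *= (1 - k)
        return x

f = OOSFilter()
f.add_measurement(1, 1.0)
f.add_measurement(3, 1.2)
est = f.add_measurement(2, 1.1)  # arrives late, processed in correct order
```

The final estimate is identical to what an in-sequence filter would have produced, which is the correctness baseline against which OOS algorithms are judged.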

  17. Modeling and optimal design of an optical MEMS tactile sensor for use in robotically assisted surgery

    NASA Astrophysics Data System (ADS)

    Ahmadi, Roozbeh; Kalantari, Masoud; Packirisamy, Muthukumaran; Dargahi, Javad

    2010-06-01

    Currently, Minimally Invasive Surgery (MIS) is performed through keyhole incisions using commercially available robotic surgery systems, one of the best-known examples of which is the da Vinci surgical system. In current robotic surgery systems like the da Vinci, surgeons face problems such as a lack of tactile feedback during surgery. Therefore, providing real-time tactile feedback from the interaction between surgical instruments and tissue can help surgeons perform MIS more reliably. The present paper proposes an optical tactile sensor to measure the contact force between bio-tissue and the surgical instrument. A model is proposed for simulating the interaction between a flexible membrane and bio-tissue based on the finite element method. The tissue is considered a hyperelastic material with material properties similar to those of heart tissue. The flexible membrane is modeled as a thin layer of silicon which can be microfabricated using Micro Electro Mechanical Systems (MEMS) technology. The simulation results are used to optimize the geometric design parameters of the proposed MEMS tactile sensor for use in robotic surgical systems to perform MIS.

  18. Calculating distance by wireless ethernet signal strength for global positioning method

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Yong; Kim, Jeehong; Lee, Chang-goo

    2005-12-01

    This paper investigates mobile robot localization using wireless Ethernet for global localization and an INS for relative localization. For relative localization, a low-cost, self-contained INS was adopted; low-cost MEMS-based INS units have a short-period response and acceptable performance. A variety of sensors are generally used for mobile robot localization, but even with precise sensor modeling, errors inevitably accumulate. The IEEE 802.11b wireless Ethernet standard has been deployed in office buildings, museums, hospitals, shopping centers and other indoor environments, and many mobile robots already make use of wireless networking for communication, so location sensing with wireless Ethernet can be very useful for a low-cost robot. This research used a wireless Ethernet card to compensate for the accumulation of errors, allowing the mobile robot to perform global localization using the many IEEE 802.11b access points installed in indoor environments. The chief difficulty in localization with wireless Ethernet is predicting signal strength: as a sensor, RF signal strength measured indoors is non-linear with distance. We therefore built signal-strength profiles at known points and derived a function relating the signal-strength profile to the distance from the wireless Ethernet access point.
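A standard textbook way to map signal strength to distance (the paper instead builds empirical profiles, precisely because indoor RSS departs from such models) is the log-distance path-loss model. A sketch with assumed parameters:

```python
import math

# Hypothetical parameters for a log-distance path-loss model, a common
# first approximation for mapping Wi-Fi RSS to distance. All values are
# illustrative assumptions, not the paper's calibration.

P0 = -40.0   # RSS in dBm at the reference distance D0
D0 = 1.0     # reference distance in metres
N = 2.5      # path-loss exponent; indoors typically between 2 and 4

def rss_to_distance(rss_dbm):
    """Invert the model  rss = P0 - 10*N*log10(d/D0)  for distance d."""
    return D0 * 10 ** ((P0 - rss_dbm) / (10 * N))

print(round(rss_to_distance(-65.0), 2))  # 25 dB of loss -> 10.0 m
```

Multipath and obstructions make real indoor measurements scatter widely around this curve, which is why per-location profiling, as in the paper, tends to work better than a single analytic model.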

  19. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands.

    PubMed

    Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-08-19

    Tactile sensors suffer from many types of interference and error, such as crosstalk, non-linearity, drift and hysteresis, so calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prostheses, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. This intensive and complex calibration could nevertheless matter less, or at least be simpler, because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters may be less affected by common errors and interference, or at least their variations may be of the order of those caused by accepted limitations, like reduced spatial resolution. This paper shows results from experiments that support this idea. The experiments are carried out with a high-performance commercial sensor as well as with a low-cost, error-prone sensor built with a procedure common in robotics.

  20. Towards the Verification of Human-Robot Teams

    NASA Technical Reports Server (NTRS)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-Agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices also fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  1. Progress in Insect-Inspired Optical Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Chahl, Javaan; Zometzer, Steve

    2005-01-01

    Progress has been made in continuing efforts to develop optical flight-control and navigation sensors for miniature robotic aircraft. The designs of these sensors are inspired by the designs and functions of the vision systems and brains of insects. Two types of sensors of particular interest are polarization compasses and ocellar horizon sensors. The basic principle of polarization compasses was described (but without using the term "polarization compass") in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate: Bees use sky polarization patterns in ultraviolet (UV) light, caused by Rayleigh scattering of sunlight by atmospheric gas molecules, as direction references relative to the apparent position of the Sun. A robotic direction-finding technique based on this concept would be more robust in comparison with a technique based on the direction to the visible Sun because the UV polarization pattern is distributed across the entire sky and, hence, is redundant and can be extrapolated from a small region of clear sky in an elsewhere cloudy sky that hides the Sun.

  2. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily-life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors assembled under the bottom of the robot in an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation of inertia positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
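The grid-crossing correction can be sketched as follows: when a line-crossing event is detected and classified as vertical or horizontal, the corresponding coordinate of the dead-reckoned pose is snapped to the nearest grid line. The tile pitch and function names below are illustrative assumptions, not the paper's implementation.

```python
GRID = 0.5  # tile pitch in metres (assumed)

def correct_on_crossing(estimate, line_is_vertical):
    """On a detected grid-line crossing, snap one coordinate of the
    dead-reckoned pose to the nearest line: a vertical line constrains x,
    a horizontal line constrains y. The other coordinate is untouched."""
    x, y = estimate
    if line_is_vertical:
        x = round(x / GRID) * GRID
    else:
        y = round(y / GRID) * GRID
    return (x, y)

# Dead reckoning has drifted to (1.46, 0.71); a vertical line is detected.
print(correct_on_crossing((1.46, 0.71), True))  # -> (1.5, 0.71)
```

Because each crossing bounds the error in one axis by half the tile pitch, the cumulative inertial drift is reset regularly rather than growing without bound.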

  3. Investigation of human-robot interface performance in household environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  4. Floor Covering and Surface Identification for Assistive Mobile Robotic Real-Time Room Localization Application

    PubMed Central

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-01-01

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification. PMID:24351647

  5. Floor covering and surface identification for assistive mobile robotic real-time room localization application.

    PubMed

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-12-17

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification.

  6. Virtual Passive Controller for Robot Systems Using Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    This paper presents a control method based on virtual passive dynamic control that will stabilize a robot manipulator using joint torque sensors and a simple joint model. The method does not require joint position or velocity feedback for stabilization. The proposed control method is stable in the sense of Lyapunov. The control method was implemented on several joints of a laboratory robot. The controller showed good stability robustness to system parameter error and to the exclusion of nonlinear dynamic effects on the joints. The controller enhanced position tracking performance and, in the absence of position control, dissipated joint energy.

  7. Improving Mechanical Properties of Molded Silicone Rubber for Soft Robotics Through Fabric Compositing.

    PubMed

    Wang, Yue; Gregory, Cherry; Minor, Mark A

    2018-06-01

    Molded silicone rubbers are common in manufacturing of soft robotic parts, but they are often prone to tears, punctures, and tensile failures when strained. In this article, we present a fabric compositing method for improving the mechanical properties of soft robotic parts by creating a fabric/rubber composite that increases the strength and durability of the molded rubber. Comprehensive ASTM material tests evaluating the strength, tear resistance, and puncture resistance are conducted on multiple composites embedded with different fabrics, including polyester, nylon, silk, cotton, rayon, and several blended fabrics. Results show that strong fabrics increase the strength and durability of the composite, valuable in pneumatic soft robotic applications, while elastic fabrics maintain elasticity and enhance tear strength, suitable for robotic skins or soft strain sensors. Two case studies then validate the proposed benefits of the fabric compositing for soft robotic pressure vessel applications and soft strain sensor applications. Evaluations of the fabric/rubber composite samples and devices indicate that such methods are effective for improving mechanical properties of soft robotic parts, resulting in parts that can have customized stiffness, strength, and vastly improved durability.

  8. Sensor Data Fusion for Body State Estimation in a Bipedal Robot and Its Feedback Control Application for Stable Walking

    PubMed Central

    Chen, Ching-Pei; Chen, Jing-Yi; Huang, Chun-Kai; Lu, Jau-Ching; Lin, Pei-Chun

    2015-01-01

    We report on a sensor data fusion algorithm via an extended Kalman filter for estimating the spatial motion of a bipedal robot. By fusing the sensory information from joint encoders, a 6-axis inertial measurement unit and a 2-axis inclinometer, the robot’s body state at a specific fixed position can be obtained. This position is also equal to the CoM when the robot is in the standing posture suggested by the detailed CAD model of the robot. In addition, this body state is further utilized to provide sensory information for feedback control of a bipedal robot with a walking gait. The overall control strategy includes the proposed body state estimator as well as a damping controller, which regulates the body position state of the robot in real-time based on instant and historical position tracking errors. Moreover, a posture corrector for reducing unwanted torque during motion is addressed. The body state estimator and the feedback control structure are implemented in a child-size bipedal robot and the performance is experimentally evaluated. PMID:25734644
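The drift-correction idea behind fusing a rate gyro with an inclinometer can be sketched with a scalar complementary filter. The paper uses a full extended Kalman filter over joint encoders, a 6-axis IMU and a 2-axis inclinometer, but the high-rate/low-drift trade-off being exploited is the same; the blend factor below is an assumed value.

```python
# Scalar complementary filter: the gyro gives a fast but drifting angle
# via integration, while the inclinometer gives a slow but drift-free
# absolute angle. Blending the two keeps the estimate both responsive
# and bounded. ALPHA is an assumed tuning value, not from the paper.

ALPHA = 0.98  # trust placed in the integrated gyro

def fuse(angle_prev, gyro_rate, incl_angle, dt):
    """Blend short-term gyro integration with the inclinometer reading."""
    return ALPHA * (angle_prev + gyro_rate * dt) + (1 - ALPHA) * incl_angle

angle = 0.0
for _ in range(100):
    # Stationary robot: the gyro reads a small bias, inclinometer reads 0.
    angle = fuse(angle, gyro_rate=0.01, incl_angle=0.0, dt=0.01)
print(abs(angle) < 0.01)  # the inclinometer keeps the bias from accumulating
```

Pure integration of the same biased gyro would have drifted linearly without bound; the absolute reference clamps the error, which is the role the inclinometer plays in the paper's EKF.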

  9. A Flexible Temperature Sensor Based on Reduced Graphene Oxide for Robot Skin Used in Internet of Things.

    PubMed

    Liu, Guanyu; Tan, Qiulin; Kou, Hairong; Zhang, Lei; Wang, Jinqi; Lv, Wen; Dong, Helei; Xiong, Jijun

    2018-05-02

    Flexible electronics, which can be distributed on any surface we need, are in high demand for the development of the Internet of Things (IoT), robot technology and electronic skins. Temperature is a fundamental physical parameter, and it is an important indicator in many applications; therefore, a flexible temperature sensor is required. Here, we report a simple method to fabricate three lightweight, low-cost and flexible temperature sensors, whose sensitive materials are reduced graphene oxide (r-GO), single-walled carbon nanotubes (SWCNTs) and multi-wall carbon nanotubes (MWCNTs). By comparing linearity, sensitivity and repeatability, we found that the r-GO temperature sensor had the most balanced performance. Furthermore, the r-GO temperature sensor showed good mechanical properties and could be bent at different angles with negligible resistance change. In addition, the performance of the r-GO temperature sensor remained stable under different kinds of pressure and was unaffected by surrounding environments, like humidity or other gases, because of the insulating layer on its sensitive layer. The simple fabrication process and low cost, together with the remarkable performance of the r-GO temperature sensor, suggest that it is suitable for use as a robot skin or in IoT environments.

  10. On the development of a reactive sensor-based robotic system

    NASA Technical Reports Server (NTRS)

    Hexmoor, Henry H.; Underwood, William E., Jr.

    1989-01-01

    Flexible robotic systems for space applications need to use local information to guide their action in uncertain environments where the state of the environment and even the goals may change. They have to be tolerant of unexpected events and robust enough to carry their task to completion. Tactical goals should be modified while maintaining strategic goals. Furthermore, reactive robotic systems need to have a broader view of their environments than sensory-based systems. An architecture and a theory of representation extending the basic cycles of action and perception are described. This scheme allows for dynamic description of the environment and determining purposive and timely action. Applications of this scheme for assembly and repair tasks using a Universal Machine Intelligence RTX robot are being explored, but the ideas are extendable to other domains. The nature of reactivity for sensor-based robotic systems and implementation issues encountered in developing a prototype are discussed.

  11. Robotics and local fusion

    NASA Astrophysics Data System (ADS)

    Emmerman, Philip J.

    2005-05-01

    Teams of robots or mixed teams of warfighters and robots on reconnaissance and other missions can benefit greatly from a local fusion station. A local fusion station is defined here as a small mobile processor with interfaces to enable the ingestion of multiple heterogeneous sensor data and information streams, including blue force tracking data. These data streams are fused and integrated with contextual information (terrain features, weather, maps, dynamic background features, etc.), and displayed or processed to provide real time situational awareness to the robot controller or to the robots themselves. These blue and red force fusion applications remove redundancies, lessen ambiguities, correlate, aggregate, and integrate sensor information with context such as high resolution terrain. Applications such as safety, team behavior, asset control, training, pattern analysis, etc. can be generated or enhanced by these fusion stations. This local fusion station should also enable the interaction between these local units and a global information world.

  12. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images from an omnidirectional vision system with measurements from odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection is linearly parameterized by the positions of the robot and of natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the feature points tracked in the image sequence together with the robot's velocity and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
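The core of such an adaptive scheme, an estimate driven by the prediction error of a linearly parameterized measurement model, can be sketched in scalar form. The gradient update below is a generic Slotine-Li-style law with an assumed gain and regressor, not the paper's full algorithm.

```python
# Generic scalar gradient estimator for a measurement model that is
# linear in the unknown parameter:  z(t) = w(t) * theta.
# GAIN and the regressor w(t) are illustrative assumptions.

GAIN = 0.5

def adapt(theta_hat, w, z):
    """Move theta_hat against the prediction error z - w*theta_hat."""
    return theta_hat + GAIN * w * (z - w * theta_hat)

theta, theta_hat = 2.0, 0.0          # true value vs. initial estimate
for t in range(200):
    w = 1.0 + 0.5 * ((-1) ** t)      # time-varying, persistently exciting
    theta_hat = adapt(theta_hat, w, w * theta)
print(round(theta_hat, 3))  # -> 2.0, the true value
```

As in the paper's analysis, convergence hinges on the regressor being persistently exciting; with w held at zero the error would never shrink.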

  13. Structural design and output characteristic analysis of magnetostrictive tactile sensor for robotic applications

    NASA Astrophysics Data System (ADS)

    Zheng, Wendong; Wang, Bowen; Liu, Huaping; Li, Yunkai; Zhao, Ran; Weng, Ling; Zhang, Changgeng

    2018-05-01

    A novel magnetostrictive tactile sensor has been designed according to the transduction mechanism of cilia and the Villari effect of iron-gallium alloy. The tactile sensor consists of a Galfenol beam, a pair of permanent magnets, a Hall sensor and a signal processing system. Compared with conventional tactile sensors, our proposed tactile sensor can not only detect the contact force, but also sense the stiffness of an object. The performance and measurement range of the tactile sensor have been theoretically analyzed and experimentally investigated. The results reveal that the sensitivity of the tactile sensor for force sensing is up to 22.81 mV/N at an applied bias magnetic field of 2.56 kA/m. Moreover, the sensor can effectively discriminate objects with different stiffness. The sensor is characterized by high sensitivity, good linearity, and quick response. It has the potential to be miniaturized and integrated into the finger of a robotic hand to realize force sensing and object recognition in real time.
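Given the reported sensitivity, converting the Hall-sensor output to a force estimate is a one-line calculation, assuming operation within the sensor's linear range:

```python
# Convert a Hall-voltage change to contact force using the reported
# sensitivity of 22.81 mV/N (at 2.56 kA/m bias), assuming the linear
# response described for the sensor.

SENSITIVITY = 22.81e-3  # V/N

def hall_voltage_to_force(delta_v):
    """Estimated contact force in newtons from a voltage change in volts."""
    return delta_v / SENSITIVITY

print(round(hall_voltage_to_force(0.1141), 1))  # ~5.0 N
```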

  14. Study on robot motion control for intelligent welding processes based on the laser tracking sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Wang, Qian; Tang, Chen; Wang, Ju

    2017-06-01

    A robot motion control method is presented for intelligent welding of complex spatial free-form curve seams based on a laser tracking sensor. First, the tip position of the welding torch is calculated according to the velocity of the torch and the seam trajectory detected by the sensor. Then, the optimal pose of the torch is searched for under constraints using genetic algorithms. As a result, the intersection point of the weld seam and the laser plane of the sensor remains within the detectable range of the sensor, while the angle between the axis of the welding torch and the tangent of the weld seam meets the requirements. The feasibility of the control method is demonstrated by simulation.

  15. Mathematical Modeling Of The Terrain Around A Robot

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1992-01-01

    In a conceptual system for modeling the terrain around an autonomous mobile robot, the representation of terrain used for control is separated from the representation provided by sensors. The concept takes the motion-planning system out from under the constraints imposed by the discrete spatial intervals of square terrain grid(s). The separation allows the sensing and motion-controlling systems to operate asynchronously, facilitating the integration of new map and sensor data into the planning of motions.

  16. Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie

    2003-01-01

    A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.

  17. Robotic Design Studio: Exploring the Big Ideas of Engineering in a Liberal Arts Environment.

    ERIC Educational Resources Information Center

    Turbak, Franklyn; Berg, Robbie

    2002-01-01

    Suggests that it is important to introduce liberal arts students to the essence of engineering. Describes Robotic Design Studio, a course in which students learn how to design, assemble, and program robots made out of LEGO parts, sensors, motors, and small embedded computers. Represents an alternative vision of how robot design can be used to…

  18. Comparison of the LEGO Mindstorms NXT and EV3 Robotics Education Platforms

    ERIC Educational Resources Information Center

    Sherrard, Ann; Rhodes, Amy

    2014-01-01

    The release of the latest LEGO Mindstorms EV3 robotics platform in September 2013 has provided a dilemma for many youth robotics leaders. There is a need to understand the differences in the Mindstorms NXT and EV3 in order to make future robotics purchases. In this article the differences are identified regarding software, hardware, sensors, the…

  19. Execution monitoring for a mobile robot system

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1990-01-01

    Due to sensor errors, uncertainty, incomplete knowledge, and a dynamic world, robot plans will not always be executed exactly as planned. This paper describes an implemented robot planning system that enhances the traditional sense-think-act cycle in ways that allow the robot system to monitor its behavior and react to emergencies in real time. A proposal on how robot systems can break away completely from the traditional three-step cycle is also made.
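Such an enhanced sense-think-act cycle can be sketched as a loop in which a monitor compares each action's expected outcome with sensor readings and triggers replanning, or an emergency reflex, when they diverge. All names below are illustrative, not the paper's system.

```python
from collections import namedtuple

# Each plan step carries the outcome the planner expects to observe
# after the action executes; the monitor checks this expectation.
Step = namedtuple("Step", "action expected")

def run(plan, sense, act, replan, is_emergency):
    """Execute a plan with monitoring: reflex on emergencies, replan on
    divergence between expected and observed outcomes."""
    while plan:
        step = plan.pop(0)
        act(step.action)
        observed = sense()
        if is_emergency(observed):
            return "abort"           # real-time reflex bypasses the planner
        if observed != step.expected:
            plan = replan(observed)  # execution diverged from the plan
    return "done"

# Toy world: each action adds to position, and always succeeds,
# so observations match expectations and no replanning occurs.
world = {"pos": 0}
def act(a): world["pos"] += a
def sense(): return world["pos"]
result = run([Step(1, 1), Step(1, 2)], sense, act,
             replan=lambda obs: [], is_emergency=lambda obs: False)
print(result)  # -> done
```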

  20. A reconfigurable computing platform for plume tracking with mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Kim, Byung Hwa; D'Souza, Colin; Voyles, Richard M.; Hesch, Joel; Roumeliotis, Stergios I.

    2006-05-01

    Much work has been undertaken recently toward the development of low-power, high-performance sensor networks. There are many static remote sensing applications for which this is appropriate. The focus of this development effort is applications that require higher performance computation, but still involve severe constraints on power and other resources. Toward that end, we are developing a reconfigurable computing platform for miniature robotic and human-deployed sensor systems composed of several mobile nodes. The system provides static and dynamic reconfigurability for both software and hardware by the combination of CPU (central processing unit) and FPGA (field-programmable gate array) allowing on-the-fly reprogrammability. Static reconfigurability of the hardware manifests itself in the form of a "morphing bus" architecture that permits the modular connection of various sensors with no bus interface logic. Dynamic hardware reconfigurability provides for the reallocation of hardware resources at run-time as the mobile, resource-constrained nodes encounter unknown environmental conditions that render various sensors ineffective. This computing platform will be described in the context of work on chemical/biological/radiological plume tracking using a distributed team of mobile sensors. The objective for a dispersed team of ground and/or aerial autonomous vehicles (or hand-carried sensors) is to acquire measurements of the concentration of the chemical agent from optimal locations and estimate its source and spread. This requires appropriate distribution, coordination and communication within the team members across a potentially unknown environment. The key problem is to determine the parameters of the distribution of the harmful agent so as to use these values for determining its source and predicting its spread. 
The accuracy and convergence rate of this estimation process depend not only on the number and accuracy of the sensor measurements but also on their spatial distribution over time (the sampling strategy). For the safety of a human-deployed distribution of sensors, optimized trajectories to minimize human exposure are also of importance. The systems described in this paper are currently being developed. Parts of the system are already in existence and some results from these are described.
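
    The parameter-estimation step described above can be illustrated with a deliberately simple sketch: fitting a source location to concentration samples gathered at several positions. The isotropic Gaussian plume model and the brute-force candidate search below are illustrative assumptions, not the estimator used by the authors.

```python
import math

def plume_concentration(x, y, sx, sy, strength, spread):
    """Isotropic Gaussian plume model centered on a source at (sx, sy)."""
    d2 = (x - sx) ** 2 + (y - sy) ** 2
    return strength * math.exp(-d2 / (2 * spread ** 2))

def estimate_source(samples, candidates, strength, spread):
    """Pick the candidate source location that minimizes the squared
    residuals against (x, y, concentration) samples from the team."""
    def sse(c):
        return sum(
            (conc - plume_concentration(x, y, c[0], c[1], strength, spread)) ** 2
            for x, y, conc in samples
        )
    return min(candidates, key=sse)
```

    In practice the sampling strategy matters as much as the estimator, which is why the abstract stresses the spatial distribution of measurements over time.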

  1. Model-free learning on robot kinematic chains using a nested multi-agent topology

    NASA Astrophysics Data System (ADS)

    Karigiannis, John N.; Tzafestas, Costas S.

    2016-11-01

    This paper proposes a model-free learning scheme for the developmental acquisition of robot kinematic control and dexterous manipulation skills. The approach is based on a nested-hierarchical multi-agent architecture that intuitively encapsulates the topology of robot kinematic chains, where the activity of each independent degree-of-freedom (DOF) is finally mapped onto a distinct agent. Each one of those agents progressively evolves a local kinematic control strategy in a game-theoretic sense, that is, based on a partial (local) view of the whole system topology, which is incrementally updated through a recursive communication process according to the nested-hierarchical topology. Learning is thus approached not through demonstration and training but through an autonomous self-exploration process. A fuzzy reinforcement learning scheme is employed within each agent to enable efficient exploration in a continuous state-action domain. This paper constitutes in fact a proof of concept, demonstrating that global dexterous manipulation skills can indeed evolve through such a distributed iterative learning of local agent sensorimotor mappings. The main motivation behind the development of such an incremental multi-agent topology is to enhance system modularity, to facilitate extensibility to more complex problem domains and to improve robustness with respect to structural variations including unpredictable internal failures. These attributes of the proposed system are assessed in this paper through numerical experiments in different robot manipulation task scenarios, involving both single and multi-robot kinematic chains. The generalisation capacity of the learning scheme is experimentally assessed and robustness properties of the multi-agent system are also evaluated with respect to unpredictable variations in the kinematic topology. 
Furthermore, these numerical experiments demonstrate the scalability properties of the proposed nested-hierarchical architecture, where new agents can be recursively added in the hierarchy to encapsulate individual active DOFs. The results presented in this paper demonstrate the feasibility of such a distributed multi-agent control framework, showing that the solutions which emerge are plausible and near-optimal. Numerical efficiency and computational cost issues are also discussed.
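
    As a toy illustration of the idea that independent per-DOF agents can jointly acquire a kinematic skill without a model, the sketch below lets each joint of a planar chain explore its own angle locally, keeping only perturbations that reduce the end-effector error. This is a drastic simplification: the paper uses fuzzy reinforcement learning and a nested communication hierarchy, neither of which appears here.

```python
import math
import random

def fk(angles, link=1.0):
    """Forward kinematics of a planar chain with unit-length links."""
    x = y = acc = 0.0
    for a in angles:
        acc += a
        x += link * math.cos(acc)
        y += link * math.sin(acc)
    return x, y

def reach(target, n_joints=3, iters=4000, step=0.05, seed=0):
    """Model-free, per-joint local exploration: one 'agent' (joint) acts
    at a time and keeps its perturbation only if the tip error drops."""
    rng = random.Random(seed)
    angles = [0.0] * n_joints

    def err(ths):
        x, y = fk(ths)
        return math.hypot(x - target[0], y - target[1])

    best = err(angles)
    for _ in range(iters):
        j = rng.randrange(n_joints)            # pick one joint agent
        trial = list(angles)
        trial[j] += rng.uniform(-step, step)   # local exploration
        e = err(trial)
        if e < best:                           # greedy acceptance
            angles, best = trial, e
    return angles, best
```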

  2. New robotics: design principles for intelligent systems.

    PubMed

    Pfeifer, Rolf; Iida, Fumiya; Bongard, Josh

    2005-01-01

    New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only "nice to have" but is in fact a necessary tool for designing embodied agents.

  3. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    PubMed Central

    Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan

    2013-01-01

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer, which supplies the spikes to the robot (using PFM). All the layers perform their tasks in spike-processing mode and communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results demonstrate the viability of this spike-based controller. Two main advantages are low hardware resource usage (2% of a Xilinx Spartan 6) and low power requirements (3.4 W) for controlling a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also show that AER is well suited as a communication protocol between processing and actuation. PMID:24264330
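
    The pulse frequency modulation (PFM) used in the actuation layer can be summarized in a few lines: the drive command is encoded as the rate of fixed-width pulses rather than as a duty cycle, which maps naturally onto spike trains. The function below is a schematic sketch with made-up parameter names, not the FPGA implementation.

```python
def pfm_train(command, duration_s, max_rate_hz=1000.0):
    """Return pulse onset times encoding a normalized command in [0, 1]
    as a pulse frequency over a window of duration_s seconds."""
    rate = max(0.0, min(1.0, command)) * max_rate_hz
    if rate == 0.0:
        return []
    n = int(round(rate * duration_s))  # number of pulses in the window
    return [i / rate for i in range(n)]
```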

  4. Using arm and hand gestures to command robots during stealth operations

    NASA Astrophysics Data System (ADS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-06-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
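
    One common way to decode gestures from a multi-channel EMG array, shown here purely as a hypothetical sketch, is to reduce each channel to an RMS feature and classify with a nearest-centroid rule. The BioSleeve's actual decoder may differ, and the centroids below are invented.

```python
import math

def rms_features(channels):
    """One RMS value per EMG channel (each channel: a list of samples)."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in channels]

def classify(features, centroids):
    """centroids: dict mapping gesture name -> feature vector.
    Returns the gesture whose centroid is closest to the features."""
    def dist(name):
        return sum((f - g) ** 2 for f, g in zip(features, centroids[name]))
    return min(centroids, key=dist)
```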

  5. Neuro-inspired spike-based motion: from dynamic vision sensor to robot motor open-loop control through spike-VITE.

    PubMed

    Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan

    2013-11-20

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer, which supplies the spikes to the robot (using PFM). All the layers perform their tasks in spike-processing mode and communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results demonstrate the viability of this spike-based controller. Two main advantages are low hardware resource usage (2% of a Xilinx Spartan 6) and low power requirements (3.4 W) for controlling a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also show that AER is well suited as a communication protocol between processing and actuation.

  6. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  7. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  8. Online Phase Detection Using Wearable Sensors for Walking with a Robotic Prosthesis

    PubMed Central

    Goršič, Maja; Kamnik, Roman; Ambrožič, Luka; Vitiello, Nicola; Lefeber, Dirk; Pasquini, Guido; Munih, Marko

    2014-01-01

    This paper presents a gait phase detection algorithm for providing feedback in walking with a robotic prosthesis. The algorithm utilizes the output signals of a wearable wireless sensory system incorporating sensorized shoe insoles and inertial measurement units attached to body segments. The principle of detecting transitions between gait phases is based on heuristic threshold rules, dividing a steady-state walking stride into four phases. For the evaluation of the algorithm, experiments with three amputees, walking with the robotic prosthesis and wearable sensors, were performed. Results show a high rate of successful detection for all four phases (the average success rate across all subjects >90%). A comparison of the proposed method to an off-line trained algorithm using hidden Markov models reveals a similar performance achieved without the need for learning dataset acquisition and previous model training. PMID:24521944
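
    The heuristic threshold rules can be pictured as a small state machine cycling through the four phases. The phase names, signals, and thresholds below are assumptions for the sketch; the paper derives its rules from the sensorized insoles and IMUs.

```python
def next_phase(phase, heel_load, toe_load, load_on=0.2, load_off=0.05):
    """Advance the gait cycle (swing -> heel_strike -> stance -> toe_off)
    using normalized insole loads in [0, 1] and simple threshold rules."""
    if phase == "swing" and heel_load > load_on:
        return "heel_strike"
    if phase == "heel_strike" and toe_load > load_on:
        return "stance"
    if phase == "stance" and heel_load < load_off:
        return "toe_off"
    if phase == "toe_off" and toe_load < load_off:
        return "swing"
    return phase  # no transition fired; stay in the current phase
```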

  9. Summary of workshop on the application of VLSI for robotic sensing

    NASA Technical Reports Server (NTRS)

    Brooks, T.; Wilcox, B.

    1984-01-01

    One objective of the workshop was to identify near-, mid-, and far-term applications of VLSI for robotic sensing and sensor data preprocessing. The workshop was also to indicate areas in which VLSI technology can provide immediate and future payoffs. A third objective was to promote dialog and collaborative efforts between research communities, industry, and government. The workshop was held on March 24-25, 1983. Conclusions and recommendations are discussed. Attention is given to the need for a pixel correction chip, an image sensor with 10,000 dynamic range, VLSI enhanced architectures, the need for a high-density serpentine memory, an LSI-tactile sensing program, an analog-signal preprocessor chip, a smart strain gage, a protective proximity envelope, a VLSI-proximity sensor program, a robot-net chip, and aspects of silicon micromechanics.

  10. Technology for robotic surface inspection in space

    NASA Technical Reports Server (NTRS)

    Volpe, Richard; Balaram, J.

    1994-01-01

    This paper presents on-going research in robotic inspection of space platforms. Three main areas of investigation are discussed: machine vision inspection techniques, an integrated sensor end-effector, and an orbital environment laboratory simulation. Machine vision inspection utilizes automatic comparison of new and reference images to detect on-orbit induced damage such as micrometeorite impacts. The cameras and lighting used for this inspection are housed in a multisensor end-effector, which also contains a suite of sensors for detection of temperature, gas leaks, proximity, and forces. To fully test all of these sensors, a realistic space platform mock-up has been created, complete with visual, temperature, and gas anomalies. Further, changing orbital lighting conditions are effectively mimicked by a robotic solar simulator. In the paper, each of these technology components will be discussed, and experimental results are provided.

  11. Stretchable, Flexible, Scalable Smart Skin Sensors for Robotic Position and Force Estimation.

    PubMed

    O'Neill, John; Lu, Jason; Dockter, Rodney; Kowalewski, Timothy

    2018-03-23

    The design and validation of a continuously stretchable and flexible skin sensor for collaborative robotic applications is outlined. The skin consists of a PDMS layer doped with carbon nanotubes, with the addition of conductive fabric, connected by only five wires to a simple microcontroller. Its accuracy is characterized in position as well as force, and the skin is also tested under uniaxial stretch. Two practical implementations in collaborative robotic applications are also demonstrated. The stationary position estimate has an RMSE of 7.02 mm, and the sensor error stays within 2.5 ± 1.5 mm even under stretch. The skin consistently provides an emergency stop command at only 0.5 N of force and is shown to maintain a collaboration force of 10 N in a collaborative control experiment.

  12. Plume-tracking robots: a new application of chemical sensors.

    PubMed

    Ishida, H; Nakamoto, T; Moriizumi, T; Kikas, T; Janata, J

    2001-04-01

    Many animals have the ability to search for odor sources by tracking their plumes. Some of the key features of this search behavior have been successfully transferred to robot platforms, although the capabilities of animals are still beyond the current level of sensor technologies. The examples described in this paper are (1) incorporating into a wheeled robot the upwind surges and casting used by moths in tracking pheromone plumes, (2) extracting useful information from the response patterns of a chemical sensor array patterned after the spatially distributed chemoreceptors of some animals, and (3) mimicking the fanning behavior of silkworm moths to enhance the reception of chemical signals by drawing molecules from one direction. The achievements so far and current efforts are reviewed to illustrate the steps to be taken toward future development of this technology.
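
    The moth-inspired strategy in (1) is often summarized as "surge and cast": move upwind while the odor is detected, and sweep crosswind with widening excursions after losing it. A schematic controller follows; the speeds and timing constants are illustrative assumptions, not values from the paper.

```python
def track_step(odor_detected, time_since_hit, surge_speed=1.0, cast_speed=0.5):
    """Return (upwind_velocity, crosswind_velocity) for one control step."""
    if odor_detected:
        return surge_speed, 0.0  # surge straight upwind
    # Cast: alternate crosswind direction, widening as the plume stays lost.
    direction = 1.0 if int(time_since_hit) % 2 == 0 else -1.0
    width_gain = 1.0 + 0.5 * time_since_hit
    return 0.0, direction * cast_speed * width_gain
```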

  13. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and the sensing modalities that will allow the perception and interpretation of the environment. Then, based on theoretical reflectance and emittance models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.

  14. Control Architecture for Robotic Agent Command and Sensing

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel

    2008-01-01

    Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. 
This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
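
    The cooperative combination of behavior recommendations can be reduced, for illustration, to a weighted average: each behavior proposes an action vector with a weight, and all of them contribute to the final command. This is a simplified stand-in for the MODT machinery, not the CARACaS implementation.

```python
def consensus(recommendations):
    """recommendations: list of (action_vector, weight) pairs from the
    active behaviors. Returns the weight-averaged action vector."""
    total = sum(w for _, w in recommendations)
    dim = len(recommendations[0][0])
    return [
        sum(action[i] * w for action, w in recommendations) / total
        for i in range(dim)
    ]
```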

  15. Robots with a sense of touch

    NASA Astrophysics Data System (ADS)

    Bartolozzi, Chiara; Natale, Lorenzo; Nori, Francesco; Metta, Giorgio

    2016-09-01

    Tactile sensors provide robots with the ability to interact with humans and the environment with great accuracy, yet technical challenges remain for electronic-skin systems to reach human-level performance.

  16. Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot.

    PubMed

    Mafrica, Stefano; Servel, Alain; Ruffier, Franck

    2016-11-10

    Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to the various visual patterns encountered, thanks to its M²APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackermann steering model. Indoor and outdoor experiments were carried out in which the robot was driven in the closed-loop mode based on the velocity and steering angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light level range. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with a relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor's speed sensor.
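
    The odometry rests on the Ackermann (bicycle) kinematic model: from the EKF's velocity and steering-angle estimates, the pose is propagated step by step. The sketch below shows only that propagation step; the wheelbase value is an illustrative placeholder, and the EKF itself is omitted.

```python
import math

def ackermann_step(pose, v, steer, dt, wheelbase=0.25):
    """Propagate pose = (x, y, heading) over dt seconds given forward
    velocity v and steering angle steer (bicycle approximation)."""
    x, y, th = pose
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += v / wheelbase * math.tan(steer) * dt
    return x, y, th
```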

  17. Friendship with a robot: Children's perception of similarity between a robot's physical and virtual embodiment that supports diabetes self-management.

    PubMed

    Sinoo, Claudia; van der Pal, Sylvia; Blanson Henkemans, Olivier A; Keizer, Anouk; Bierman, Bert P B; Looije, Rosemarijn; Neerincx, Mark A

    2018-07-01

    The PAL project develops a conversational agent with a physical (robot) and virtual (avatar) embodiment to support diabetes self-management of children ubiquitously. This paper assesses 1) the effect of perceived similarity between robot and avatar on children's friendship towards the avatar, and 2) the effect of this friendship on usability of a self-management application containing the avatar (a) and children's motivation to play with it (b). During a four-day diabetes camp in the Netherlands, 21 children participated in interactions with both agent embodiments. Questionnaires measured perceived similarity, friendship, motivation to play with the app, and its usability. Children felt stronger friendship towards the physical robot than towards the avatar. The more children perceived the robot and its avatar as the same agency, the stronger their friendship with the avatar was. The stronger their friendship with the avatar, the more they were motivated to play with the app and the higher the app scored on usability. The combination of physical and virtual embodiments seems to provide a unique opportunity for building ubiquitous long-term child-agent friendships. An avatar complementing a physical robot in health care could increase children's motivation and adherence to use self-management support systems.

  18. Macrobend optical sensing for pose measurement in soft robot arms

    NASA Astrophysics Data System (ADS)

    Sareh, Sina; Noh, Yohan; Li, Min; Ranzani, Tommaso; Liu, Hongbin; Althoefer, Kaspar

    2015-12-01

    This paper introduces a pose-sensing system for soft robot arms integrating a set of macrobend stretch sensors. The macrobend sensory design in this study consists of optical fibres and is based on the notion that bending an optical fibre modulates the intensity of the light transmitted through the fibre. This sensing method is capable of measuring bending, elongation and compression in soft continuum robots and is also applicable to wearable sensing technologies, e.g. pose sensing in the wrist joint of a human hand. In our arrangement, applied to a cylindrical soft robot arm, the optical fibres for macrobend sensing originate from the base, extend to the tip of the arm, and then loop back to the base. The connectors that link the fibres to the necessary opto-electronics are all placed at the base of the arm, resulting in a simplified overall design. The ability of this custom macrobend stretch sensor to flexibly adapt its configuration allows preserving the inherent softness and compliance of the robot on which it is installed. The macrobend sensing system is immune to electrical noise and magnetic fields, is safe (because no electricity is needed at the sensing site), and is suitable for modular implementation in multi-link soft continuum robotic arms. The measurable light outputs of the proposed stretch sensor vary due to bend-induced light attenuation (macrobend loss), which is a function of the fibre bend radius as well as the number of repeated turns. The experimental study conducted as part of this research revealed that the chosen bend radius has a far greater impact on the measured light intensity values than the number of turns (if greater than five). Taking into account that the bend radius is the only significantly influencing design parameter, the macrobend stretch sensors were developed to create a practical solution to pose sensing in soft continuum robot arms. The proposed sensing design was then benchmarked against an electromagnetic tracking system (NDI Aurora) for validation.
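
    The bend-radius dependence described above can be captured by a toy attenuation model: per-turn macrobend loss grows as the bend radius shrinks, and additional turns attenuate multiplicatively. The exponential form and the constant below are assumptions for the sketch, not the calibration from the paper.

```python
import math

def transmitted_intensity(i0, bend_radius_mm, turns, k=8.0):
    """Light intensity after macrobend loss: a smaller bend radius means
    higher per-turn loss; each turn attenuates the light multiplicatively."""
    loss_per_turn = math.exp(-bend_radius_mm / k)
    return i0 * (1.0 - loss_per_turn) ** turns
```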

  19. Computing Optic Flow with ArduEye Vision Sensor

    DTIC Science & Technology

    2013-01-01

    ...processing algorithm that can be applied to the flight control of other robotic platforms. SUBJECT TERMS: optical flow, ArduEye, vision-based ... Figure 2: ArduEye vision chip on a Stonyman breakout board connected to an Arduino Mega (left), and the Stonyman vision chips ... robotic platforms. There is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms in order to control the...

  20. Command Recognition of Robot with Low Dimension Whole-Body Haptic Sensor

    NASA Astrophysics Data System (ADS)

    Ito, Tatsuya; Tsuji, Toshiaki

    The authors have developed “haptic armor”, a whole-body haptic sensor that can estimate contact position. Although it was developed for the safety assurance of robots operating in human environments, it can also be used as an interface. This paper proposes a command recognition method based on finger trace information and discusses some technical issues in improving the recognition accuracy of the system.

  1. A Survey of Robotic Technology.

    DTIC Science & Technology

    1983-07-01

    ...developed the following definition of a robot: a robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized ... subroutines, commands to specific actuators, computations based on sensor data, etc. For instance, the job might be to assemble an automobile ... the set-up developed at Draper Labs to enable a robot to assemble an automobile alternator. The assembly operation is impressive to watch. The number...

  2. Thermal Image Sensing Model for Robotic Planning and Search.

    PubMed

    Castro Jiménez, Lídice E; Martínez-García, Edgar A

    2016-08-08

    This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made IR passive visual sensor. The sensor's capability for detecting radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce an exact, if subtle, nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation controller in reaching the goal while avoiding collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission: a sine function produces attractive accelerations toward the IR source, and a cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor scenes are presented to illustrate the convenience and efficacy of the proposed approach.
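
    The two sensor models, an exponential intensity-to-distance map and a polynomial intensity-to-temperature map, have this general shape. All coefficients below are made-up placeholders; the paper fits its own from the characterization data.

```python
import math

def distance_from_intensity(i, a=5.0, b=0.02):
    """Exponential model d = a * exp(-b * i): brighter pixels are closer."""
    return a * math.exp(-b * i)

def temperature_from_intensity(i, coeffs=(20.0, 0.3, 0.001)):
    """Polynomial model T = c0 + c1*i + c2*i**2 over image intensity i."""
    return sum(c * i ** k for k, c in enumerate(coeffs))
```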

  3. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data is limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
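
    The random-forest step can be caricatured with an ensemble of hand-written decision stumps voting on the object class from a grasp feature vector (actuator positions and force values). The stumps, thresholds, and class names below are invented for illustration; the paper trains real random forests on recorded grasp data.

```python
from collections import Counter

def stump(feature_idx, threshold, below, above):
    """A one-feature decision stump: returns a class label for a sample."""
    return lambda features: below if features[feature_idx] <= threshold else above

def forest_predict(stumps, features):
    """Majority vote over the ensemble, as in a (toy) random forest."""
    votes = Counter(s(features) for s in stumps)
    return votes.most_common(1)[0][0]
```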

  4. Problems and research issues associated with the hybrid control of force and displacement

    NASA Technical Reports Server (NTRS)

    Paul, R. P.

    1987-01-01

    The hybrid control of force and position is basic to the science of robotics but is only poorly understood. Before much progress can be made in robotics, this problem needs to be solved in a robust manner. However, the use of hybrid control implies the existence of a model of the environment, not an exact model (as the function of hybrid control is to accommodate these errors), but a model appropriate for planning and reasoning. The monitored forces in position control are interpreted in terms of a model of the task as are the monitored displacements in force control. The reaction forces of the task of writing are far different from those of hammering. The programming of actions in such a modeled world becomes more complicated and systems of task level programming need to be developed. Sensor based robotics, of which force sensing is the most basic, implies an entirely new level of technology. Indeed, robot force sensors, no matter how compliant they may be, must be protected from accidental collisions. This implies other sensors to monitor task execution and again the use of a world model. This new level of technology is the task level, in which task actions are specified, not the actions of individual sensors and manipulators.

  5. Promoting Diversity in Undergraduate Research in Robotics-Based Seismic

    NASA Astrophysics Data System (ADS)

    Gifford, C. M.; Arthur, C. L.; Carmichael, B. L.; Webber, G. K.; Agah, A.

    2006-12-01

    The motivation for this research was to investigate forming evenly-spaced grid patterns with a team of mobile robots for future use in seismic imaging in polar environments. A team of robots was incrementally designed and simulated by incorporating sensors and altering each robot's controller. Challenges, design issues, and efficiency were also addressed. This research project incorporated the efforts of two undergraduate REU students from Elizabeth City State University (ECSU) in North Carolina, and the research staff at the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas. ECSU is a historically black university. Mentoring these two minority students in scientific research, seismology, robotics, and simulation will hopefully encourage them to pursue graduate degrees in science-related or engineering fields. The goals for this 10-week internship during summer 2006 were to educate the students in the fields of seismology, robotics, and virtual prototyping and simulation. Incrementally designing a robot platform for future enhancement and evaluation was central to this research, and involved simulation of several robots working together to change seismic grid shape and spacing. This process gave these undergraduate students experience and knowledge in an actual research project for a real-world application. The two undergraduate students gained valuable research experience and advanced their knowledge of seismic imaging, robotics, sensors, and simulation. They learned that seismic sensors can be used in an array to gather 2D and 3D images of the subsurface. They also learned that robotics can support dangerous or difficult human activities, such as those in a harsh polar environment, by increasing automation, robustness, and precision. Simulating robot designs also gave them experience in programming behaviors for mobile robots. Thus far, one academic paper has resulted from their research. This paper received third place at the 2006 National Technical Association's (NTA) National Conference in Chicago. CReSIS, in conjunction with ECSU, provided these minority students with a well-rounded educational experience in a real-world research project. Their contributions will be used for future projects.

  6. Learning for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional-integral-derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. A creative control process that goes "beyond the adaptive critic" is used. A mathematical model of the creative control process is presented that illustrates its use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.

  7. Nonuniform Deployment of Autonomous Agents in Harbor-Like Environments

    DTIC Science & Technology

    2014-11-12

    ith agent than to all other agents. Interested readers are referred to [55] for the comprehensive study on Voronoi partitioning and its applications...robots: An rfid approach, PhD dissertation, School of Electrical Engineering and Computer Science, University of Ottawa (October 2012). [55] A. Okabe, B...Gueaieb, A stochastic approach of mobile robot navigation using customized rfid systems, International Conference on Signals, Circuits and Systems

  8. Sensor deployment on unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Gerhart, Grant R.; Witus, Gary

    2007-10-01

    TARDEC has been developing payloads for small robots as part of its unmanned ground vehicle (UGV) development programs. These platforms typically weigh less than 100 lbs and are used for various physical security and force protection applications. This paper will address a number of technical issues including platform mobility, payload positioning, sensor configuration and operational tradeoffs. TARDEC has developed a number of robots with different mobility mechanisms including track, wheel and hybrid track/wheel running gear configurations. An extensive discussion will focus upon omni-directional vehicle (ODV) platforms with enhanced intrinsic mobility for positioning sensor payloads. This paper also discusses tradeoffs between intrinsic platform mobility and articulated arm complexity for end point positioning of modular sensor packages.

  9. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  10. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  11. Software for Automation of Real-Time Agents, Version 2

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steve; Chouinard, Caroline; Engelhardt, Barbara; Wilklow, Colette; Mutz, Darren; Knight, Russell; Rabideau, Gregg; hide

    2005-01-01

    Version 2 of Closed Loop Execution and Recovery (CLEaR) has been developed. CLEaR is an artificial intelligence computer program for use in planning and execution of actions of autonomous agents, including, for example, Deep Space Network (DSN) antenna ground stations, robotic exploratory ground vehicles (rovers), robotic aircraft (UAVs), and robotic spacecraft. CLEaR automates the generation and execution of command sequences, monitoring the sequence execution, and modifying the command sequence in response to execution deviations and failures as well as new goals for the agent to achieve. The development of CLEaR has focused on the unification of planning and execution to increase the ability of the autonomous agent to perform under tight resource and time constraints coupled with uncertainty in how much of resources and time will be required to perform a task. This unification is realized by extending the traditional three-tier robotic control architecture by increasing the interaction between the software components that perform deliberation and reactive functions. The increase in interaction reduces the need to replan, enables earlier detection of the need to replan, and enables replanning to occur before an agent enters a state of failure.

  12. Vision robot with rotational camera for searching ID tags

    NASA Astrophysics Data System (ADS)

    Kimura, Nobutaka; Moriya, Toshio

    2008-02-01

    We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
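The pan-rotation idea in the abstract is geometric: a point on a box at distance d from a camera translating at speed v sweeps past at an apparent angular rate of roughly v/d, so rotating the camera at that same rate cancels the dominant blur component. A sketch of that relationship (this is the standard small-angle approximation, not necessarily the authors' exact derivation):

```python
def pan_rate(v_robot, distance):
    """Pan angular velocity (rad/s) that keeps a surface point centred.

    For a robot translating at v_robot (m/s) past barcoded boxes at
    `distance` (m), the apparent angular motion of a point abeam of the
    camera is approximately v_robot / distance; panning at the same rate
    counteracts the translational motion blur.
    """
    if distance <= 0:
        raise ValueError("distance must be positive")
    return v_robot / distance
```

For example, a robot moving at 0.5 m/s past shelves 2 m away would pan at about 0.25 rad/s; halving the distance doubles the required pan rate, which is why the method needs the distance to the box surfaces as an input.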

  13. Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case

    PubMed Central

    Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique

    2017-01-01

    This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
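The event-triggering rule above (request a measurement only when the estimation error covariance exceeds an upper bound) can be illustrated with a scalar toy model. The noise values and bound below are arbitrary placeholders, and the real system uses the Unscented transformation for prediction rather than this linear scalar update:

```python
# Toy 1-D event-based estimator: prediction uncertainty accumulates with
# process noise q each step; a measurement is requested only when the
# covariance p exceeds the bound p_max, then a Kalman update collapses it.

def simulate(steps, q=0.05, r=0.1, p_max=0.5):
    p, requests = 0.0, 0
    for _ in range(steps):
        p += q                    # prediction step: covariance grows
        if p > p_max:             # event condition on the covariance bound
            k = p / (p + r)       # scalar Kalman gain
            p = (1.0 - k) * p     # measurement update shrinks covariance
            requests += 1
    return requests
```

With these placeholder values only a small fraction of the time steps trigger a measurement request, which is exactly the communication saving the paper reports for the networked control loop.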

  14. Genetic mechanism for designing new generation of buildings from data obtained by sensor agent robots

    NASA Astrophysics Data System (ADS)

    Ono, Chihiro; Mita, Akira

    2012-04-01

    Due to the increase in elderly households and to global warming, the design of building spaces requires careful consideration of the needs of elderly people. Studies of intelligent spaces that can control suitable devices for residents may provide some of the functions needed. However, these intelligent spaces are based on predefined scenarios, so it is difficult for them to handle unexpected circumstances and adapt to people's needs. This study aims to suggest a genetic adaptation algorithm for building spaces. The feasibility of the algorithm is tested by simulation. The algorithm extends the existing design methodology by quickly reflecting ongoing living information in a variety of patterns.

  15. An optimal control strategy for two-dimensional motion camouflage with non-holonomic constraints.

    PubMed

    Rañó, Iñaki

    2012-07-01

    Motion camouflage is a stealth behaviour observed both in hover-flies and in dragonflies. Existing controllers for mimicking motion camouflage generate this behaviour on an empirical basis or without considering the kinematic motion restrictions present in animal trajectories. This study summarises our formal contributions to solve the generation of motion camouflage as a non-linear optimal control problem. The dynamics of the system capture the kinematic restrictions to motion of the agents, while the performance index ensures camouflage trajectories. An extensive set of simulations support the technique, and a novel analysis of the obtained trajectories contributes to our understanding of possible mechanisms to obtain sensor based motion camouflage, for instance, in mobile robots.

  16. Robotic Transnasal Endoscopic Skull Base Surgery: Systematic Review of the Literature and Report of a Novel Prototype for a Hybrid System (Brescia Endoscope Assistant Robotic Holder).

    PubMed

    Bolzoni Villaret, Andrea; Doglietto, Francesco; Carobbio, Andrea; Schreiber, Alberto; Panni, Camilla; Piantoni, Enrico; Guida, Giovanni; Fontanella, Marco Maria; Nicolai, Piero; Cassinis, Riccardo

    2017-09-01

    Although robotics has already been applied to several surgical fields, available systems are not designed for endoscopic skull base surgery (ESBS). Newly conceived prototypes for ESBS have recently been described. The aim of this study was to provide a systematic literature review of robotics for ESBS and describe a novel prototype developed at the University of Brescia. PubMed and Scopus databases were searched using a combination of terms, including Robotics OR Robot and Surgery OR Otolaryngology OR Skull Base OR Holder. The retrieved papers were analyzed, recording the following features: interface, tools under robotic control, force feedback, safety systems, setup time, and operative time. A novel hybrid robotic system has been developed and tested in a preclinical setting at the University of Brescia, using an industrial manipulator and readily available off-the-shelf components. A total of 11 robotic prototypes for ESBS were identified. Almost all prototypes share difficult emergency management as one of their main limitations. The Brescia Endoscope Assistant Robotic holder has proven the feasibility of intuitive robotic movement using the surgeon's head position: a 6-degree-of-freedom sensor was used, and 2 light sources were added to glasses so that they could be recognized by a commercially available sensor. Robotic system prototypes designed for ESBS and reported in the literature still present significant technical limitations. Hybrid robot assistance has huge potential and might soon be feasible in ESBS. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Intelligent robot trends and predictions for the .net future

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    2001-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent and future technical and economic trends. During the past twenty years the use of industrial robots that are equipped not only with precise motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. Intelligent robot products have been developed in many cases for factory automation and for some hospital and home applications. To reach an even higher degree of applications, the addition of learning may be required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. The adaptive critic is a good model for human learning. In general, the critic may be considered to be the human with the teach pendant, plant manager, line supervisor, quality inspector or the consumer. If the ultimate critic is the consumer, then the quality inspector must model the consumer's decision-making process and use this model in the design and manufacturing operations. Can the adaptive critic be used to advance intelligent robots? Intelligent robots have historically taken decades to be developed and reduced to practice. Methods for speeding this development include technology such as rapid prototyping and product development and government, industry and university cooperation.

  18. Development of intelligent robots - Achievements and issues

    NASA Astrophysics Data System (ADS)

    Nitzan, D.

    1985-03-01

    A flexible, intelligent robot is regarded as a general purpose machine system that may include effectors, sensors, computers, and auxiliary equipment and, like a human, can perform a variety of tasks under unpredictable conditions. Development of intelligent robots is essential for increasing the growth rate of today's robot population in industry and elsewhere. Robotics research and development topics include manipulation, end effectors, mobility, sensing (noncontact and contact), adaptive control, robot programming languages, and manufacturing process planning. Past achievements and current issues related to each of these topics are described briefly.

  19. Software development to support sensor control of robot arc welding

    NASA Technical Reports Server (NTRS)

    Silas, F. R., Jr.

    1986-01-01

    The development of software for a Digital Equipment Corporation MINC-23 Laboratory Computer to provide the functions of a workcell host computer for Space Shuttle Main Engine (SSME) robotic welding is documented. Routines were written to transfer robot programs between the MINC and an Advanced Robotic Cyro 750 welding robot. Other routines provide advanced program editing features, while additional software allows communication with a remote computer-aided design system. Access to special robot functions was provided to allow advanced control of weld seam tracking and process control for future development programs.

  20. Interaction force and motion estimators facilitating impedance control of the upper limb rehabilitation robot.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung

    2017-07-01

    In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and the motion caused by the interaction between user and robot. However, direct measurement of both signals through force and motion sensors not only increases the complexity of the system but also impedes its affordability. As an alternative to direct measurement, in this work we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic model of the UHP and on signals measured by common low-cost sensors. In order to demonstrate the effectiveness of the estimators, several experimental tests were carried out. The force and impedance control of the UHP was implemented first by directly measuring the interaction force using accurate extra sensors, and the robot performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the controller based on the estimators has similar performance to that using direct measurement (less than 1 N difference in root-mean-square error between the two cases), indicating that the proposed force and motion estimators can facilitate implementation of interactive controllers for the UHP in robot-mediated rehabilitation trainings.

  1. An MRI-Guided Telesurgery System Using a Fabry-Perot Interferometry Force Sensor and a Pneumatic Haptic Device.

    PubMed

    Su, Hao; Shang, Weijian; Li, Gang; Patel, Niravkumar; Fischer, Gregory S

    2017-08-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The slave robot consists of a piezoelectrically actuated 6-degree-of-freedom (DOF) robot for needle placement with an integrated fiber optic force sensor (1-DOF axial force measurement) using the Fabry-Perot interferometry (FPI) sensing principle; it is configured to operate inside the bore of the MRI scanner during imaging. By leveraging the advantages of pneumatic and piezoelectric actuation in force and position control respectively, we have designed a pneumatically actuated master robot (haptic device) with strain gauge based force sensing that is configured to operate the slave from within the scanner room during imaging. The slave robot follows the insertion motion of the haptic device while the haptic device displays the needle insertion force as measured by the FPI sensor. Image interference evaluation demonstrates that the telesurgery system presents a signal to noise ratio reduction of less than 17% and less than 1% geometric distortion during simultaneous robot motion and imaging. Teleoperated needle insertion and rotation experiments were performed to reach 10 targets in a soft tissue-mimicking phantom with 0.70 ± 0.35 mm Cartesian space error.

  2. Distributed proximity sensor system having embedded light emitters and detectors

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan (Inventor)

    1990-01-01

    A distributed proximity sensor system is provided with multiple photosensitive devices and light emitters embedded on the surface of a robot hand or other moving member in a geometric pattern. By distributing sensors and emitters capable of detecting distances and angles to points on the surface of an object from known points in the geometric pattern, information is obtained for achieving noncontacting shape and distance perception, i.e., for automatic determination of the object's shape, direction and distance, as well as the orientation of the object relative to the robot hand or other moving member.

  3. Robotic surgery and hemostatic agents in partial nephrectomy: a high rate of success without vascular clamping.

    PubMed

    Morelli, Luca; Morelli, John; Palmeri, Matteo; D'Isidoro, Cristiano; Kauffmann, Emanuele Federico; Tartaglia, Dario; Caprili, Giovanni; Pisano, Roberta; Guadagni, Simone; Di Franco, Gregorio; Di Candio, Giulio; Mosca, Franco

    2015-09-01

    Robot-assisted partial nephrectomy has been proposed as a technique to overcome technical challenges of laparoscopic partial nephrectomy. We prospectively collected and analyzed data from 31 patients who underwent robotic partial nephrectomy with systematic use of hemostatic agents, between February 2009 and October 2014. Thirty-three renal tumors were treated in 31 patients. There were no conversions to open surgery, intraoperative complications, or blood transfusions. The mean size of the resected tumors was 27 mm (median 20 mm, range 5-40 mm). Twenty-seven of 33 lesions (82%) did not require vascular clamping and therefore were treated in the absence of ischemia. All margins were negative. The high partial nephrectomy success rate without vascular clamping suggests that robotic nephron-sparing surgery with systematic use of hemostatic agents may be a safe, effective method to completely avoid ischemia in the treatment of selected renal masses.

  4. Sensor supervision and multiagent commanding by means of projective virtual reality

    NASA Astrophysics Data System (ADS)

    Rossmann, Juergen

    1998-10-01

    When autonomous systems with multiple agents are considered, conventional control and supervision technologies are often inadequate because the available information is presented in a way that effectively overwhelms the user with displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities with new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework for Projective Virtual Reality. The key idea of this new approach to an intuitively operable man-machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the resulting changes, and have an action-planning component automatically generate task descriptions for the agents involved, so that actions carried out by users in the virtual world are projected into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach is to split the job between task deduction in the VR and task 'projection' onto the physical automation components by the automatic action-planning component. Besides describing the realized projective virtual reality system, the paper also describes in detail the metaphors and visualization aids used to present different types of (e.g. sensor) information in an intuitively comprehensible manner.

  5. Dynamic multisensor fusion for mobile robot navigation in an indoor environment

    NASA Astrophysics Data System (ADS)

    Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.

    2001-10-01

    This study is a preliminary step toward developing a multi-purpose, autonomous, robust carrier mobile robot to transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining sonar, a CCD camera, and an IR sensor, for map-building mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent thorough books and review papers cover them; instead we focus on the main results with relevance to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. The paper first deals with the general principle of the navigation and guidance architecture, then with the detailed functions of environment recognition and updating, obstacle detection, and motion assessment, together with the first results from the simulation runs.

  6. Method for neural network control of motion using real-time environmental feedback

    NASA Technical Reports Server (NTRS)

    Buckley, Theresa M. (Inventor)

    1997-01-01

    A method of motion control for robotics and other automatically controlled machinery using a neural network controller with real-time environmental feedback. The method is illustrated with a two-finger robotic hand having proximity sensors and force sensors that provide environmental feedback signals. The neural network controller is taught to control the robotic hand through training sets using back- propagation methods. The training sets are created by recording the control signals and the feedback signal as the robotic hand or a simulation of the robotic hand is moved through a representative grasping motion. The data recorded is divided into discrete increments of time and the feedback data is shifted out of phase with the control signal data so that the feedback signal data lag one time increment behind the control signal data. The modified data is presented to the neural network controller as a training set. The time lag introduced into the data allows the neural network controller to account for the temporal component of the robotic motion. Thus trained, the neural network controlled robotic hand is able to grasp a wide variety of different objects by generalizing from the training sets.
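The training-set construction described above (shifting the feedback one time increment behind the control signals) is simple enough to sketch directly. The data values here are illustrative, not from the patent:

```python
# Sketch of the time-lagged pairing: each control command at step t is
# paired with the sensor feedback from step t-1, so the network learns the
# temporal component of the grasping motion.

def make_training_set(commands, feedback):
    """Pair command[t] with feedback[t-1] (feedback lags one increment)."""
    assert len(commands) == len(feedback)
    return [(feedback[t - 1], commands[t]) for t in range((1), len(commands))]

commands = [0.0, 0.2, 0.4, 0.6]   # recorded control signals per increment
feedback = [0.0, 0.1, 0.3, 0.5]   # recorded force/proximity feedback
pairs = make_training_set(commands, feedback)
# pairs == [(0.0, 0.2), (0.1, 0.4), (0.3, 0.6)]
```

Each (lagged feedback, command) pair then becomes one back-propagation training example, which is what allows the trained network to generalize the grasp to objects outside the training set.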

  7. An orbital emulator for pursuit-evasion game theoretic sensor management

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Wang, Tao; Wang, Gang; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2017-05-01

    This paper develops and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can reproduce 3D satellite movements using capabilities generated from omni-wheeled robot and robotic arm motion methods. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The equatorial-plane movements are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball along a rod (a robotic arm) attached to each robot. For multiple satellites, a fast map-merging algorithm is integrated into the robot operating system (ROS) and simultaneous localization and mapping (SLAM) routines to locate the multiple robots in the scene. The OE is used to demonstrate a pursuit-evasion (PE) game theoretic sensor management algorithm, which models the conflict between a space-based-visible (SBV) satellite (the pursuer) and a geosynchronous (GEO) satellite (the evader). The cost function of the PE game is based on the informational entropy of the SBV-tracking-GEO scenario. The GEO satellite can maneuver using a continuous low thruster. The hardware-in-the-loop space emulator visually illustrates the SSA problem solution based on the PE game.

  8. A learning controller for nonrepetitive robotic operation

    NASA Technical Reports Server (NTRS)

    Miller, W. T., III

    1987-01-01

    A practical learning control system is described which is applicable to complex robotic and telerobotic systems involving multiple feedback sensors and multiple command variables. In the controller, the learning algorithm is used to learn to reproduce the nonlinear relationship between the sensor outputs and the system command variables over particular regions of the system state space, rather than learning the actuator commands required to perform a specific task. The learned information is used to predict the command signals required to produce desired changes in the sensor outputs. The desired sensor output changes may result from automatic trajectory planning or may be derived from interactive input from a human operator. The learning controller requires no a priori knowledge of the relationships between the sensor outputs and the command variables. The algorithm is well suited for real time implementation, requiring only fixed point addition and logical operations. The results of learning experiments using a General Electric P-5 manipulator interfaced to a VAX-11/730 computer are presented. These experiments involved interactive operator control, via joysticks, of the position and orientation of an object in the field of view of a video camera mounted on the end of the robot arm.
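The abstract does not detail the learning algorithm, so the class below is only a toy stand-in (all names invented): a lookup table over quantized sensor-output changes, updated incrementally toward the observed commands, illustrating how the sensor-to-command relationship can be learned with no a priori model and only cheap per-step arithmetic.

```python
import numpy as np

class TableInverseModel:
    """Toy learned mapping from desired sensor-output changes to commands.

    A stand-in for the controller described in the abstract: the table is
    indexed by a quantized sensor change and stores the command that
    produced it, refined each time a new (change, command) pair is seen.
    """
    def __init__(self, n_cells, n_cmd, lr=0.5):
        self.table = np.zeros((n_cells, n_cmd))
        self.n_cells = n_cells
        self.lr = lr

    def _cell(self, sensor_delta):
        # Quantize the sensor change and hash it into a table index.
        q = tuple(np.round(np.asarray(sensor_delta) * 10).astype(int))
        return hash(q) % self.n_cells

    def predict(self, sensor_delta):
        """Command predicted to produce the desired sensor change."""
        return self.table[self._cell(sensor_delta)]

    def update(self, sensor_delta, command):
        # Move the stored command toward the command actually observed.
        i = self._cell(sensor_delta)
        self.table[i] += self.lr * (np.asarray(command) - self.table[i])
```

After repeated experience the table reproduces the nonlinear relationship locally, so desired sensor changes (from a planner or a joystick operator) can be mapped directly to command signals.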

  9. Tool actuation and force feedback on robot-assisted microsurgery system

    NASA Technical Reports Server (NTRS)

    Das, Hari (Inventor); Ohm, Tim R. (Inventor); Boswell, Curtis D. (Inventor); Steele, Robert D. (Inventor)

    2002-01-01

    An input control device with force sensors is configured to sense hand movements of a surgeon performing a robot-assisted microsurgery. The sensed hand movements actuate a mechanically decoupled robot manipulator. A microsurgical manipulator, attached to the robot manipulator, is activated to move small objects and perform microsurgical tasks. A force-feedback element coupled to the robot manipulator and the input control device provides the input control device with an amplified sense of touch in the microsurgical manipulator.

  10. Evolutionary Design of a Robotic Material Defect Detection System

    NASA Technical Reports Server (NTRS)

    Ballard, Gary; Howsman, Tom; Craft, Mike; ONeil, Daniel; Steincamp, Jim; Howell, Joe T. (Technical Monitor)

    2002-01-01

    During post-flight inspection of SSME engines, several inaccessible regions must be disassembled to inspect for defects such as cracks, scratches, and gouges. An improvement to the inspection process would be the design and development of very small robots capable of penetrating these inaccessible regions and detecting the defects. The goal of this research was to apply an evolutionary design approach to the robotic detection of these types of defects. A simulation and visualization tool was developed, prior to receiving the hardware, as a development test bed. A small, commercial off-the-shelf (COTS) robot was selected from several candidates as the proof-of-concept robot. The basic approach was to use Cadmium Sulfide (CdS) sensors to detect changes in contrast on an illuminated surface. A neural network, optimally designed using a genetic algorithm, was employed to detect the presence of the defects (cracks). Using the COTS robot and CdS sensors, the research successfully demonstrated that an evolutionarily designed neural network can detect the presence of surface defects.

  11. Cooperative Robots to Observe Moving Targets: Review.

    PubMed

    Khan, Asif; Rinner, Bernhard; Cavallaro, Andrea

    2018-01-01

    The deployment of multiple robots for achieving a common goal helps to improve the performance, efficiency, and/or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss the control techniques: cooperative multirobot observation of multiple moving targets, cooperative search, acquisition, and track, cooperative tracking, and multirobot pursuit evasion. We also identify the five major elements that characterize this problem, namely, the coordination method, the environment, the target, the robot and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is useful also for related multirobot applications.

  12. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic recognition of facial expressions using vision is an important subject in human-robot interaction. We propose a human-face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware consists of a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry exhibited by human faces. Using the output of this module, the autonomous agent always keeps the human face targeted frontally: the robot platform traverses an arc centered at the human, while the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, both positive and negative evidence is used to recognize facial expressions.

  13. Tactile surface classification for limbed robots using a pressure sensitive robot skin.

    PubMed

    Shill, Jacob J; Collins, Emmanuel G; Coyle, Eric; Clark, Jonathan

    2015-02-02

    This paper describes an approach to terrain identification based on pressure images generated through direct surface contact using a robot skin constructed around a high-resolution pressure sensing array. Terrain signatures for classification are formulated from the magnitude frequency responses of the pressure images. Initial experimental results for statically obtained images show that the approach yields classification accuracies of [Formula: see text]. The methodology is extended to accommodate the dynamic pressure images anticipated when a robot is walking or running. Experiments with a one-legged hopping robot yield similar identification accuracies of [Formula: see text]. In addition, the accuracies are independent of changing robot dynamics (i.e., when using different leg gaits). The paper further shows that the high-resolution capabilities of the sensor enable similarly textured surfaces to be distinguished. A correcting filter is developed to accommodate the failures or faults that inevitably occur within the sensing array with continued use. Experimental results show that the correcting filter can extend the effective operational lifespan of a high-resolution sensing array by over 6x in the presence of sensor damage. The results presented suggest this methodology can be extended to autonomous field robots, providing a robot with crucial information about the environment that can be used to aid stable and efficient mobility over rough and varying terrains.
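As a rough illustration of the signature idea, the sketch below computes magnitude frequency responses of pressure images and assigns a terrain label by nearest centroid; the paper's actual classifier is not specified in the abstract, so the classifier choice and all names here are assumptions:

```python
import numpy as np

def signature(pressure_image):
    """Terrain signature: magnitude frequency response of a pressure image."""
    return np.abs(np.fft.fft2(pressure_image)).ravel()

def classify(pressure_image, centroids):
    """Nearest-centroid label; `centroids` maps terrain name -> mean signature."""
    sig = signature(pressure_image)
    return min(centroids, key=lambda name: np.linalg.norm(sig - centroids[name]))
```

Because the signature lives in the frequency domain, two surfaces with similar average pressure but different texture periodicity (e.g., smooth vs. striped) map to well-separated signatures, which is what lets similarly textured surfaces be distinguished.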

  14. The MITy micro-rover: Sensing, control, and operation

    NASA Technical Reports Server (NTRS)

    Malafeew, Eric; Kaliardos, William

    1994-01-01

    The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.

  15. Capaciflector-guided mechanisms

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1996-01-01

    A plurality of capaciflector proximity sensors, one or more of which may be overlaid on each other, and at least one shield are mounted on a device guided by a robot so as to see a designated surface, hole or raised portion of an object, for example, in three dimensions. Individual current-measuring voltage follower circuits interface the sensors and shield to a common AC signal source. As the device approaches the object, the sensors respond by a change in the currents therethrough. The currents are detected by the respective current-measuring voltage follower circuits with the outputs thereof being fed to a robot controller. The device is caused to move under robot control in a predetermined pattern over the object while directly referencing each other without any offsets, whereupon by a process of minimization of the sensed currents, the device is dithered or wiggled into position for a soft touchdown or contact without any prior contact with the object.

  16. A design of endoscopic imaging system for hyper long pipeline based on wheeled pipe robot

    NASA Astrophysics Data System (ADS)

    Zheng, Dongtian; Tan, Haishu; Zhou, Fuqiang

    2017-03-01

    An endoscopic imaging system for hyper-long pipelines is designed to acquire images of the inner surface in advance of defect measurement. The system consists of structured-light sensors, pipe robots and a control system. The pipe robot is a wheeled structure, with the sensor at the front of the vehicle body. The control system, at the tail of the vehicle body, takes the form of an upper and a lower computer. The sensor can be translated and scanned in three steps: walking, lifting and scanning, so that the inner surface can be imaged at a plurality of positions and from different angles. Imaging experiments show that, compared with a traditional imaging system, the system's transmission distance is longer, the acquisition angles are more diverse and the results are more comprehensive, which lays an important foundation for subsequent vision-based measurement of the inner surface.

  17. Object positioning in storages of robotized workcells using LabVIEW Vision

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Sękala, A.; Gwiazda, A.; Foit, K.; Kost, G.

    2015-11-01

    During the manufacturing process, each performed task is previously developed and adapted to the conditions and capabilities of the manufacturing plant. The production process is supervised by a team of specialists, because any downtime causes great loss of time and hence financial loss. Sensors used in industry for tracking and supervising the various stages of a production process make it much easier to keep it continuous. One group of sensors used in industrial applications is non-contact sensors, which include light barriers, optical sensors, rangefinders, vision systems, and ultrasonic sensors. Thanks to the rapid development of electronics, vision systems have become widespread as the most flexible type of non-contact sensor. These systems consist of cameras, devices for data acquisition, devices for data analysis and specialized software. Vision systems work well both as sensors that control the production process itself and as sensors that control product quality. LabVIEW, together with LabVIEW Vision and LabVIEW Builder, provides the environment in which an informatics system for process and product quality control can be programmed. The paper presents an application elaborated for positioning elements in a robotized workcell. Based on the geometric parameters of the manipulated object, or on a previously developed graphical pattern, it is possible to determine the position of particular manipulated elements. This application can work in automatic mode and in real time, cooperating with the robot control system, and makes the workcell's functioning more autonomous.

  18. Construction of multi-agent mobile robots control system in the problem of persecution with using a modified reinforcement learning method based on neural networks

    NASA Astrophysics Data System (ADS)

    Patkin, M. L.; Rogachev, G. N.

    2018-02-01

    A method for constructing a multi-agent control system for mobile robots based on reinforcement learning with deep neural networks is considered. The control system is synthesized by reinforcement learning with a modified Actor-Critic method, in which the Actor module is divided into an Action Actor and a Communication Actor so that each agent can simultaneously control its mobile robot and communicate with its partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The functions of the Actors and the Critic are approximated by deep neural networks. The Critic's value function is trained using the TD-error method and the Actor's function using DDPG. The Communication Actor's neural network is trained through gradients received from partner agents. An environment featuring cooperative multi-agent interaction was developed, and a computer simulation applying this method to the control problem of two robots pursuing two goals was carried out.
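The Actor split can be sketched as two heads per agent, one emitting motion commands and one emitting the real-valued message that partners append to their observations. The sketch below (numpy, all dimensions and names our own) shows only the forward pass, not the DDPG training:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_policy(params, x):
    # Stand-in for a deep network: a single linear layer with tanh squashing.
    return np.tanh(params @ x)

class Agent:
    """Sketch of the split Actor: an Action head emits motion commands,
    a Communication head emits a message vector for partner agents."""
    def __init__(self, obs_dim, comm_dim, act_dim):
        self.W_act = rng.normal(size=(act_dim, obs_dim + comm_dim)) * 0.1
        self.W_comm = rng.normal(size=(comm_dim, obs_dim)) * 0.1

    def message(self, obs):
        """Communication Actor: message sent to partners this step."""
        return linear_policy(self.W_comm, obs)

    def act(self, obs, partner_msg):
        """Action Actor: the partner's message is appended to the
        local observation before computing the motion command."""
        return linear_policy(self.W_act, np.concatenate([obs, partner_msg]))
```

In training, the gradient of each agent's loss with respect to the received message would flow back into the partner's Communication Actor, which is the mechanism the abstract describes.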

  19. Automated Guided Vehicle For Physically Handicapped People - A Cost Effective Approach

    NASA Astrophysics Data System (ADS)

    Kumar, G. Arun, Dr.; Sivasubramaniam, Mr. A.

    2017-12-01

    An Automated Guided Vehicle (AGV) is a robot-like vehicle that can deliver materials from the supply area to a technician automatically, which is faster and more efficient. The robot can be accessed wirelessly: a technician can control it directly to deliver components, rather than working through a human operator (over phone or computer) who has to program the robot or ask a delivery person to make the delivery. The vehicle is automatically guided along its route. To avoid collisions, a proximity sensor is attached to the system; it senses obstacles and can stop the vehicle in their presence. The vehicle can thus avoid accidents, which is very useful in the present industrial context, and material and equipment handling becomes an automated, easy and time-saving process.

  20. Fast obstacle detection based on multi-sensor information fusion

    NASA Astrophysics Data System (ADS)

    Lu, Linli; Ying, Jie

    2014-11-01

    Obstacle detection is one of the key problems in areas such as driving assistance and mobile robot navigation, and actual demands cannot be met using a single sensor. A method is proposed to obtain, in real time, information about the obstacle in front of the robot and to calculate the real size of the obstacle area from the triangle similarity inherent in the imaging process, by fusing data from a camera and an ultrasonic sensor; this supports local path-planning decisions. In the image analysis stage, the obstacle detection region is limited according to a complementarity principle: the ultrasonic detection range is chosen as the detection region when the obstacle is relatively near the robot, and the travelling road area in front of the robot is the region for relatively long-distance detection. The obstacle detection algorithm is adapted from ViBe (Visual Background Extractor), a powerful background subtraction algorithm. An obstacle-free region in front of the robot is extracted in the initial frame, providing a reference sample set of gray-scale values for obstacle detection. Experiments detecting different obstacles at different distances give the accuracy of the obstacle detection and the percentage error between the calculated and actual sizes of the detected obstacles. Experimental results show that the detection scheme can effectively detect obstacles in front of the robot and provide obstacle sizes with relatively high dimensional accuracy.
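The triangle-similarity size computation reduces to the pinhole-camera relation real_size / distance = size_on_sensor / focal_length, with the distance supplied by the ultrasonic sensor. A minimal sketch, with the camera intrinsics assumed known (all parameter names are our own):

```python
def real_size_mm(pixel_extent, distance_mm, focal_length_mm, pixel_pitch_mm):
    """Similar-triangles estimate of an obstacle's real extent.

    pixel_extent:    extent of the detected obstacle region, in pixels
    distance_mm:     range to the obstacle from the ultrasonic sensor
    focal_length_mm: camera focal length (assumed known from calibration)
    pixel_pitch_mm:  physical size of one pixel on the image sensor
    """
    # size_on_sensor / focal_length == real_size / distance
    size_on_sensor_mm = pixel_extent * pixel_pitch_mm
    return size_on_sensor_mm * distance_mm / focal_length_mm
```

For example, a region 100 pixels wide (2 µm pixel pitch) seen through a 4 mm lens at a 1 m ultrasonic range corresponds to an obstacle about 50 mm wide.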

  1. Simultaneous Deployment and Tracking Multi-Robot Strategies with Connectivity Maintenance

    PubMed Central

    Tardós, Javier; Aragues, Rosario; Sagüés, Carlos; Rubio, Carlos

    2018-01-01

    Multi-robot teams composed of ground and aerial vehicles have gained attention during the last few years. We present a scenario where both types of robots must monitor the same area from different view points. In this paper, we propose two Lloyd-based tracking strategies to allow the ground robots (agents) to follow the aerial ones (targets), keeping the connectivity between the agents. The first strategy establishes density functions on the environment so that the targets acquire more importance than other zones, while the second one iteratively modifies the virtual limits of the working area depending on the positions of the targets. We consider the connectivity maintenance due to the fact that coverage tasks tend to spread the agents as much as possible, which is addressed by restricting their motions so that they keep the links of a minimum spanning tree of the communication graph. We provide a thorough parametric study of the performance of the proposed strategies under several simulated scenarios. In addition, the methods are implemented and tested using realistic robotic simulation environments and real experiments. PMID:29558446
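The first (density-based) strategy can be sketched as a discretized Lloyd iteration in which a Gaussian density peaked at the target positions pulls each agent's Voronoi-cell centroid toward the targets. The connectivity (minimum-spanning-tree) constraint from the paper is omitted, and all names and parameters below are our own:

```python
import numpy as np

def lloyd_step(agents, targets, grid, sigma=0.2):
    """One Lloyd iteration with a density peaked at the target positions.

    agents:  (m, 2) ground-agent positions    targets: (k, 2) aerial targets
    grid:    (g, 2) sample points discretizing the working area
    Each agent moves to the density-weighted centroid of its Voronoi cell.
    """
    # Gaussian density: grid points near any target weigh more.
    d2 = ((grid[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma**2)).sum(axis=1) + 1e-9
    # Assign each grid point to its nearest agent (Voronoi partition).
    owner = ((grid[:, None, :] - agents[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    new_agents = agents.copy()
    for i in range(len(agents)):
        cell = owner == i
        if cell.any():
            w = phi[cell]
            new_agents[i] = (grid[cell] * w[:, None]).sum(0) / w.sum()
    return new_agents
```

Iterating this step makes the agents converge toward a coverage configuration concentrated around the targets, which is the tracking behaviour the density-function strategy aims for.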

  2. Knowledge assistant for robotic environmental characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feddema, J.; Rivera, J.; Tucker, S.

    1996-08-01

    A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of the sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post-analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g., estimated dimensions, weight, material composition, etc.) are displayed in the world model. This report highlights the major components of this system.

  3. On autonomous terrain model acquisition by a mobile robot

    NASA Technical Reports Server (NTRS)

    Rao, N. S. V.; Iyengar, S. S.; Weisbin, C. R.

    1987-01-01

    The following problem is considered: A point robot is placed in a terrain populated by an unknown number of polyhedral obstacles of varied sizes and locations in two/three dimensions. The robot is equipped with a sensor capable of detecting all the obstacle vertices and edges that are visible from the present location of the robot. The robot is required to autonomously navigate and build the complete terrain model using the sensor information. It is established that the number of scanning operations needed for complete terrain model acquisition by any algorithm based on the scan-from-vertices strategy is Σ_{i=1..n} N(O_i) − n in two-dimensional terrains and Σ_{i=1..n} N(O_i) − 2n in three-dimensional terrains, where O = {O_1, O_2, ..., O_n} is the set of obstacles in the terrain and N(O_i) is the number of vertices of obstacle O_i.
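The stated bounds are simple to evaluate; a small helper (our own, for illustration) computes the necessary number of scans from the per-obstacle vertex counts:

```python
def required_scans(vertex_counts, dim=2):
    """Scans needed for complete terrain acquisition (scan-from-vertices).

    vertex_counts: list of N(O_i), the vertex count of each obstacle
    dim:           2 or 3 (dimensionality of the terrain)
    2D: sum N(O_i) - n    3D: sum N(O_i) - 2n    (n = number of obstacles)
    """
    n = len(vertex_counts)
    return sum(vertex_counts) - (n if dim == 2 else 2 * n)
```

For example, a 2D terrain with a quadrilateral and a triangle needs 4 + 3 − 2 = 5 scanning operations.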

  4. Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface

    PubMed Central

    De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara

    2011-01-01

    A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer’s skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort in human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574

  5. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    PubMed Central

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

    Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by the use of well-controlled artificial stimuli, such as robots. One broad question this approach can address is which aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG), to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of either mechanical agent. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. 
Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782

  6. Exception handling for sensor fusion

    NASA Astrophysics Data System (ADS)

    Chavez, G. T.; Murphy, Robin R.

    1993-08-01

    This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.

  7. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    PubMed Central

    Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles

    2017-01-01

    The control of flexible link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make estimating the Tool Centre Point (TCP) position challenging. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP; however, most of these approaches require expensive measurement systems or high-computational-cost integration methods. This work presents a novel approach based on a virtual sensor which can precisely estimate not only the deformation of the flexible links in control applications (less than 2% error) but also its derivatives (less than 6% error in velocity and 13% error in acceleration), according to simulation results. The validity of the proposed Virtual Sensor is tested on a Delta Robot, where the position of the TCP is estimated from the Virtual Sensor measurements with less than 0.03% error in comparison with the flexible approach developed in the ADAMS multibody software. PMID:28832510

  8. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision making, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, the choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.

  9. Comparison of Piezoresistive Monofilament Polymer Sensors

    PubMed Central

    Melnykowycz, Mark; Koll, Birgit; Scharf, Dagobert; Clemens, Frank

    2014-01-01

    The development of flexible polymer monofilament fiber strain sensors has many applications in both wearable computing (clothing, gloves, etc.) and robotics design (large-deformation control). For example, a high-stretch monofilament sensor could be integrated into a robotic arm design, easily stretching over joints or along curved surfaces. As a monofilament, the sensor can be woven into or integrated with textiles for position or physiological monitoring, computer interface control, etc. Commercially available conductive polymer monofilament sensors were tested alongside monofilaments produced from carbon black (CB) mixed with a thermoplastic elastomer (TPE) and extruded in different diameters. It was found that the signal strength, drift, and precision characteristics were better with a 0.3 mm diameter CB/TPE monofilament than with thick (∼2 mm diameter) monofilaments based on the same material or with commercial monofilaments based on natural rubber or silicone elastomer (SE) matrices. PMID:24419161

  10. EAP artificial muscle actuators for bio-inspired intelligent social robotics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hanson, David F.

    2017-04-01

    Bio-inspired intelligent robots are coming of age in both research and industry, propelling market growth for robots and A.I. However, conventional motors limit bio-inspired robotics. EAP actuators and sensors could improve simplicity, compliance, and physical scaling, and offer bio-inspired advantages in robotic locomotion, grasping and manipulation, and social expression. For EAP actuators to realize their transformative potential, further innovations are needed: the actuators must be robust, fast, powerful, manufacturable, and affordable. This presentation surveys progress, opportunities, and challenges in the author's latest work in social robots and EAP actuators, and proposes a roadmap for EAP actuators in bio-inspired intelligent robotics.

  11. Designing and implementing transparency for real time inspection of autonomous robots

    NASA Astrophysics Data System (ADS)

    Theodorou, Andreas; Wortham, Robert H.; Bryson, Joanna J.

    2017-07-01

    The EPSRC's Principles of Robotics advises the implementation of transparency in robotic systems; however, research related to AI transparency is in its infancy. This paper introduces the reader to the importance of transparent inspection of intelligent agents and provides guidance for good practice when developing such agents. By considering and expanding upon other prominent definitions found in the literature, we provide a robust definition of transparency as a mechanism to expose the decision-making of a robot. The paper continues by addressing the design decisions developers need to consider when designing and developing transparent systems. Finally, we describe our new interactive intelligence editor, designed to visualise, develop and debug real-time intelligence.

  12. Astrobee: Space Station Robotic Free Flyer

    NASA Technical Reports Server (NTRS)

    Provencher, Chris; Bualat, Maria G.; Barlow, Jonathan; Fong, Terrence W.; Smith, Marion F.; Smith, Ernest E.; Sanchez, Hugo S.

    2016-01-01

    Astrobee is a free flying robot that will fly inside the International Space Station and primarily serve as a research platform for robotics in zero gravity. Astrobee will also provide mobile camera views to ISS flight and payload controllers, and collect various sensor data within the ISS environment for the ISS Program. Astrobee consists of two free flying robots, a dock, and ground data system. This presentation provides an overview, high level design description, and project status.

  13. Cooperative system and method using mobile robots for testing a cooperative search controller

    DOEpatents

    Byrne, Raymond H.; Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.

    2002-01-01

    A test system for testing a controller provides a way to use large numbers of miniature mobile robots to test a cooperative search controller in a test area, where each mobile robot has a sensor, a communication device, a processor, and a memory. A method of using a test system provides a way for testing a cooperative search controller using multiple robots sharing information and communicating over a communication network.

  14. Multidisciplinary approach for developing a new robotic system for domiciliary assistance to elderly people.

    PubMed

    Cavallo, F; Aquilano, M; Bonaccorsi, M; Mannari, I; Carrozza, M C; Dario, P

    2011-01-01

    This paper aims to show the effectiveness of an (inter/multi)disciplinary team, comprising technology developers, elderly care organizations, and designers, in developing the ASTRO robotic system for domiciliary assistance to elderly people. The main issues presented in this work concern the improvement of the robot's behavior by means of a smart sensor network able to share information with the robot for localization and navigation, and the design of the robot's appearance and functionality through a substantial analysis of users' requirements and attitudes toward robotic technology, so as to improve acceptability and usability.

  15. Evaluation of Sensor Configurations for Robotic Surgical Instruments

    PubMed Central

    Gómez-de-Gabriel, Jesús M.; Harwin, William

    2015-01-01

    Designing surgical instruments for robotic-assisted minimally-invasive surgery (RAMIS) is challenging due to constraints on the number and type of sensors imposed by considerations such as space or the need for sterilization. A new method for evaluating the usability of virtual teleoperated surgical instruments based on virtual sensors is presented. This method uses virtual prototyping of the surgical instrument with a dual physical interaction, which allows testing of different sensor configurations in a real environment. Moreover, the proposed approach has been applied to the evaluation of prototypes of a two-finger grasper for lump detection by remote pinching. In this example, the usability of a set of five different sensor configurations, with a different number of force sensors, is evaluated in terms of quantitative and qualitative measures in clinical experiments with 23 volunteers. As a result, the smallest number of force sensors needed in the surgical instrument that ensures the usability of the device can be determined. The details of the experimental setup are also included. PMID:26516863

  16. Evaluation of Sensor Configurations for Robotic Surgical Instruments.

    PubMed

    Gómez-de-Gabriel, Jesús M; Harwin, William

    2015-10-27

    Designing surgical instruments for robotic-assisted minimally-invasive surgery (RAMIS) is challenging due to constraints on the number and type of sensors imposed by considerations such as space or the need for sterilization. A new method for evaluating the usability of virtual teleoperated surgical instruments based on virtual sensors is presented. This method uses virtual prototyping of the surgical instrument with a dual physical interaction, which allows testing of different sensor configurations in a real environment. Moreover, the proposed approach has been applied to the evaluation of prototypes of a two-finger grasper for lump detection by remote pinching. In this example, the usability of a set of five different sensor configurations, with a different number of force sensors, is evaluated in terms of quantitative and qualitative measures in clinical experiments with 23 volunteers. As a result, the smallest number of force sensors needed in the surgical instrument that ensures the usability of the device can be determined. The details of the experimental setup are also included.

  17. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose a simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it can be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
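The contrast the abstract draws, between a nonlinear least-squares optimizer that stalls in local minima and particle swarm optimization that does not, can be illustrated with a minimal global-best PSO sketch. Everything here is an illustrative assumption (parameter values, function names, and the toy cost), not the paper's actual calibration residual:

```python
import math
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer over a box-bounded space."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus attraction toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# A multimodal toy cost (Rastrigin) standing in for the calibration residual:
rastrigin = lambda x: sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
```

Because the swarm samples the whole box rather than following one local gradient, it tolerates the multimodal landscape that defeats a locally converging least-squares solver started far from the optimum.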

  18. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot

    PubMed Central

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the article outlines the geometric model, which incorporates a simple automatic module for long-term stability improvement. The new method used in the automatic module allows the sensor set-up, including the motorized linear stage, to be prepared for scanning without external measurement devices. In the measurement model, the robot merely positions the parts with high repeatability. Its position and orientation data are not used for the measurement, and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own errors in following a trajectory, except those due to a lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed requires measuring only one first piece as a “zero” or master piece, known through its accurate measurement using, for example, a Coordinate Measuring Machine. The proposed strategy presents a different approach from traditional laser triangulation systems on board the robot in order to improve measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569
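The optical triangulation principle behind such a laser sensor reduces to the law of sines in the camera–projector–spot triangle. A minimal sketch, assuming the two ray angles have already been recovered upstream from calibration (the function name and argument convention are ours, for illustration):

```python
import math

def triangulate_range(baseline_m, alpha_rad, beta_rad):
    """Camera-to-spot distance in the camera/projector/spot triangle.

    baseline_m: distance between the camera and the laser projector.
    alpha_rad:  angle of the laser ray, measured from the baseline.
    beta_rad:   angle of the camera's viewing ray, measured from the baseline.
    """
    # Law of sines: the side from the camera to the spot is opposite the
    # projector's angle alpha, and the angle at the spot is pi - alpha - beta,
    # whose sine equals sin(alpha + beta).
    return baseline_m * math.sin(alpha_rad) / math.sin(alpha_rad + beta_rad)
```

With both rays at 60° to a 1 m baseline the triangle is equilateral, so the returned range equals the baseline; shrinking the baseline or flattening the angles degrades depth resolution, which is why the sensor geometry matters.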

  19. A Smart Sensor for Defending against Clock Glitching Attacks on the I2C Protocol in Robotic Applications

    PubMed Central

    Jiménez-Naharro, Raúl; Gómez-Bravo, Fernando; Medina-García, Jonathan; Sánchez-Raya, Manuel; Gómez-Galán, Juan Antonio

    2017-01-01

    This paper presents a study about hardware attacking and clock signal vulnerability. It considers a particular type of attack on the clock signal in the I2C protocol, and proposes the design of a new sensor for detecting and defending against this type of perturbation. The analysis of the attack and the defense is validated by means of a configurable experimental platform that emulates a differential drive robot. A set of experimental results confirm the interest of the studied vulnerabilities and the efficiency of the proposed sensor in defending against this type of situation. PMID:28346337
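The clock-glitch attack studied here manifests as clock cycles whose periods deviate sharply from nominal. A software sketch of the detection idea only (the threshold, interface, and names are assumptions; the paper's sensor is a hardware design):

```python
def detect_glitches(edge_times_s, nominal_period_s, tolerance=0.5):
    """Return indices of clock edges whose preceding period deviates from the
    nominal period by more than the given fraction (a crude glitch flag)."""
    glitches = []
    for i in range(1, len(edge_times_s)):
        period = edge_times_s[i] - edge_times_s[i - 1]
        if abs(period - nominal_period_s) > tolerance * nominal_period_s:
            glitches.append(i)
    return glitches
```

A hardware sensor would implement the same comparison with a reference delay line rather than timestamps, but the decision rule, flagging cycles far from the nominal period, is the same.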

  20. This "Ethical Trap" Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making.

    PubMed

    Miller, Keith W; Wolf, Marty J; Grodzinsky, Frances

    2017-04-01

    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.

  1. Cooperative Three-Robot System for Traversing Steep Slopes

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terrance; Aghazarian, Hrand; Younse, Paulo; Garrett, Michael

    2009-01-01

    Teamed Robots for Exploration and Science in Steep Areas (TRESSA) is a system of three autonomous mobile robots that cooperate with each other to enable scientific exploration of steep terrain (slope angles up to 90°). Originally intended for use in exploring steep slopes on Mars that are not accessible to lone wheeled robots (Mars Exploration Rovers), TRESSA and systems like TRESSA could also be used on Earth for performing rescues on steep slopes and for exploring steep slopes that are too remote or too dangerous to be explored by humans. TRESSA is modeled on safe human climbing of steep slopes, two key features of which are teamwork and safety tethers. Two of the autonomous robots, denoted Anchorbots, remain at the top of a slope; the third robot, denoted the Cliffbot, traverses the slope. The Cliffbot drives over the cliff edge supported by tethers, which are paid out from the Anchorbots (see figure). The Anchorbots autonomously control the tension in the tethers to counter the gravitational force on the Cliffbot. The tethers are paid out and reeled in as needed, keeping the body of the Cliffbot oriented approximately parallel to the local terrain surface and preventing wheel slip by controlling the speed of descent or ascent, thereby enabling the Cliffbot to drive freely up, down, or across the slope. Due to the interactive nature of the three-robot system, the robots must be very tightly coupled. To provide for this tight coupling, the TRESSA software architecture is built on a combination of (1) the multi-robot layered behavior-coordination architecture reported in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 10 (October 2004), page 65, and (2) the real-time control architecture reported in "Robot Electronics Architecture" (NPO-41784), NASA Tech Briefs, Vol. 32, No. 1 (January 2008), page 28.
The combination architecture makes it possible to keep the three robots synchronized and coordinated, to use data from all three robots for decision-making at each step, and to control the physical connections among the robots. In addition, TRESSA (as in prior systems that have utilized this architecture) incorporates a capability for deterministic response to unanticipated situations from yet another architecture, reported in "Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40. Tether tension control is a major consideration in the design and operation of TRESSA. Tension is measured by force sensors connected to each tether at the Cliffbot. The direction of the tension (both azimuth and elevation) is also measured. The tension controller combines a controller to counter gravitational force and an optional velocity controller that anticipates the motion of the Cliffbot. The gravity controller estimates the slope angle from the inclination of the tethers. This angle and the weight of the Cliffbot determine the total tension needed to counteract the weight of the Cliffbot. The total needed tension is broken into components for each Anchorbot. The difference between this needed tension and the tension measured at the Cliffbot constitutes an error signal that is provided to the gravity controller. The velocity controller computes the tether speed needed to produce the desired motion of the Cliffbot. Another major consideration in the design and operation of TRESSA is detection of faults. Each robot in the TRESSA system monitors its own performance and the performance of its teammates in order to detect any system faults and prevent unsafe conditions. At startup, communication links are tested, and if any robot is not communicating, the system refuses to execute any motion commands.
Prior to motion, the Anchorbots attempt to set tensions in the tethers at optimal levels for counteracting the weight of the Cliffbot; if either Anchorbot fails to reach its optimal tension level within a specified time, it sends a message to the other robots and the commanded motion is not executed. If any mechanical error (e.g., stalling of a motor) is detected, the affected robot sends a message triggering stoppage of the current motion. Lastly, messages are passed among the robots at each time step (10 Hz) to share sensor information during operations. If messages from any robot cease for more than an allowable time interval, the other robots detect the communication loss and initiate stoppage.
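The gravity controller described above can be sketched in two steps: the total tether tension counters the Cliffbot's weight component along the slope, and that total splits between the two Anchorbot tethers. This is a simplified sketch under a symmetric-tether assumption, not the TRESSA flight code; the function names are ours:

```python
import math

def required_tension_n(mass_kg, slope_rad, g=9.81):
    """Total tether tension needed to counter the Cliffbot's weight component
    along a slope of the given angle (0 = flat terrain, pi/2 = vertical face)."""
    return mass_kg * g * math.sin(slope_rad)

def split_tension_n(total_n, half_angle_rad):
    """Per-Anchorbot tension when the two tethers make equal angles
    (+/- half_angle_rad) with the fall line: the two components along the
    fall line must sum to the required total."""
    per_tether = total_n / (2.0 * math.cos(half_angle_rad))
    return per_tether, per_tether
```

On a vertical face (slope of π/2) the tethers carry the Cliffbot's full weight; with parallel tethers (half-angle 0) each Anchorbot carries half, and the share grows as the tethers splay apart.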

  2. Vector-algebra approach to extract Denavit-Hartenberg parameters of assembled robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.

    1983-01-01

    The Denavit-Hartenberg parameters characterize the joint axis systems in a robot arm and, naturally, appear in the transformation matrices from one joint axis system to another. These parameters are needed in the control of robot arms and in the passage of sensor information along the arm. This paper presents a vector algebra method to determine these parameters for any assembled robot arm. The idea is to measure the location of the robot hand (or extension) for different joint angles and then use these measurements to calculate the parameters.
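The DH parameters recovered by such a method plug directly into the standard homogeneous transform between consecutive joint axis systems. A minimal sketch of that transform, using plain nested lists rather than any particular matrix library:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg 4x4 homogeneous transform from joint frame
    i-1 to joint frame i: rotate theta about z, translate d along z,
    translate a along x, rotate alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]
```

Chaining these transforms, one per joint, maps sensor readings expressed in a distal frame back to the robot base, which is the "passage of sensor information along the arm" the abstract mentions.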

  3. Nonholonomic Object Tracking with Optical Sensors and Object Recognition Feedback

    NASA Technical Reports Server (NTRS)

    Goddard, R. E.; Hadaegh, F.

    1994-01-01

    Robotic controllers frequently operate under constraints. Often, the constraints are imperfectly known or completely unknown. In this paper, the Lagrangian dynamics of a planar robot arm are expressed as a function of a globally unknown constraint.

  4. Extreme Mechanics in Soft Pneumatic Robots and Soft Microfluidic Electronics and Sensors

    NASA Astrophysics Data System (ADS)

    Majidi, Carmel

    2012-02-01

    In the near future, machines and robots will be completely soft, stretchable, impact resistant, and capable of adapting their shape and functionality to changes in mission and environment. Similar to biological tissue and soft-body organisms, these next-generation technologies will contain no rigid parts and instead be composed entirely of soft elastomers, gels, fluids, and other non-rigid matter. Using a combination of rapid prototyping tools, microfabrication methods, and emerging techniques in so-called ``soft lithography,'' scientists and engineers are currently introducing exciting new families of soft pneumatic robots, soft microfluidic sensors, and hyperelastic electronics that can be stretched to as much as 10x their natural length. Progress has been guided by an interdisciplinary collection of insights from chemistry, life sciences, robotics, microelectronics, and solid mechanics. In virtually every technology and application domain, mechanics and elasticity have a central role in governing functionality and design. Moreover, in contrast to conventional machines and electronics, soft pneumatic systems and microfluidics typically operate in the finite deformation regime, with materials stretching to several times their natural length. In this talk, I will review emerging paradigms in soft pneumatic robotics and soft microfluidic electronics and highlight modeling and design challenges that arise from the extreme mechanics of inflation, locomotion, sensor operation, and human interaction. I will also discuss perceived challenges and opportunities in a broad range of potential applications, from medicine to wearable computing.

  5. Micro-Power Sources Enabling Robotic Outpost Based Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    West, W. C.; Whitacre, J. F.; Ratnakumar, B. V.; Brandon, E. J.; Studor, G. F.

    2001-01-01

    Robotic outpost based exploration represents a fundamental shift in mission design from conventional, single-spacecraft missions towards a distributed-risk approach with many miniaturized semi-autonomous robots and sensors. This approach can facilitate wide-area sampling and exploration, and may consist of a web of orbiters, landers, or penetrators. To meet the mass and volume constraints of deep space missions such as the Europa Ocean Science Station, the distributed units must be fully miniaturized to fully leverage the wide-area exploration approach. However, presently there is a dearth of available options for powering these miniaturized sensors and robots. This group is currently examining miniaturized, solid-state batteries as candidates to meet the demands of applications requiring micro-power sources of low power, mass, and volume. These applications may include powering microsensors, battery-backing rad-hard CMOS memory, and providing momentary chip back-up power. Additional information is contained in the original extended abstract.

  6. Development of sensor augmented robotic weld systems for aerospace propulsion system fabrication

    NASA Technical Reports Server (NTRS)

    Jones, C. S.; Gangl, K. J.

    1986-01-01

    In order to meet stringent performance goals for power and reusability, the Space Shuttle Main Engine was designed with many complex, difficult welded joints that provide maximum strength and minimum weight. To this end, the SSME requires 370 meters of welded joints. Automation of some welds has improved welding productivity significantly over manual welding. Application has previously been limited by accessibility constraints, requirements for complex process control, low production volumes, high part variability, and stringent quality requirements. Development of robots for welding in this application requires that a unique set of constraints be addressed. This paper shows how robotic welding can enhance production of aerospace components by addressing their specific requirements. A development program at the Marshall Space Flight Center combining industrial robots with state-of-the-art sensor systems and computer simulation is providing technology for the automation of welds in Space Shuttle Main Engine production.

  7. Autonomous Mission Operations for Sensor Webs

    NASA Astrophysics Data System (ADS)

    Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.

    2008-12-01

    We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. The need for ontological conceptualizations over the agents is to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests. 
The current ontology is compatible with Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Sensor Model Language (SensorML) concepts and structures. The agents are currently deployed on the U.S. Naval Academy MidSTAR-1 satellite and are actively managing the power subsystem on-orbit without the need for human intervention.

  8. Intelligent, self-contained robotic hand

    DOEpatents

    Krutik, Vitaliy; Doo, Burt; Townsend, William T.; Hauptman, Traveler; Crowell, Adam; Zenowich, Brian; Lawson, John

    2007-01-30

    A robotic device has a base and at least one finger having at least two links that are connected in series on rotary joints with at least two degrees of freedom. A brushless motor and an associated controller are located at each joint to produce a rotational movement of a link. Wires for electrical power and communication serially connect the controllers in a distributed control network. A network operating controller coordinates the operation of the network, including power distribution. At least one, but more typically two to five, wires interconnect all the controllers through one or more joints. Motor sensors and external world sensors monitor operating parameters of the robotic hand. The electrical signal output of the sensors can be input anywhere on the distributed control network. V-grooves on the robotic hand locate objects precisely and assist in gripping. The hand is sealed, immersible and has electrical connections through the rotary joints for anodizing in a single dunk without masking. In various forms, this intelligent, self-contained, dexterous hand, or combinations of such hands, can perform a wide variety of object gripping and manipulating tasks, as well as locomotion and combinations of locomotion and gripping.

  9. Process for anodizing a robotic device

    DOEpatents

    Townsend, William T [Weston, MA

    2011-11-08

    A robotic device has a base and at least one finger having at least two links that are connected in series on rotary joints with at least two degrees of freedom. A brushless motor and an associated controller are located at each joint to produce a rotational movement of a link. Wires for electrical power and communication serially connect the controllers in a distributed control network. A network operating controller coordinates the operation of the network, including power distribution. At least one, but more typically two to five, wires interconnect all the controllers through one or more joints. Motor sensors and external world sensors monitor operating parameters of the robotic hand. The electrical signal output of the sensors can be input anywhere on the distributed control network. V-grooves on the robotic hand locate objects precisely and assist in gripping. The hand is sealed, immersible and has electrical connections through the rotary joints for anodizing in a single dunk without masking. In various forms, this intelligent, self-contained, dexterous hand, or combinations of such hands, can perform a wide variety of object gripping and manipulating tasks, as well as locomotion and combinations of locomotion and gripping.

  10. Collective search by mobile robots using alpha-beta coordination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsmith, S.Y.; Robinett, R. III

    1998-04-01

    One important application of mobile robots is searching a geographical region to locate the origin of a specific sensible phenomenon. Mapping mine fields, extraterrestrial and undersea exploration, the location of chemical and biological weapons, and the location of explosive devices are just a few potential applications. Teams of robotic bloodhounds have a simple common goal: to converge on the location of the source phenomenon, confirm its intensity, and remain aggregated around it until directed to take some other action. In cases where human intervention through teleoperation is not possible, the robot team must be deployed in a territory without supervision, requiring an autonomous decentralized coordination strategy. This paper presents the alpha-beta coordination strategy, a family of collective search algorithms that are based on dynamic partitioning of the robotic team into two complementary social roles according to a sensor-based status measure. Robots in the alpha role are risk takers, motivated to improve their status by exploring new regions of the search space. Robots in the beta role are motivated to improve but are conservative, and tend to remain aggregated and stationary until the alpha robots have identified better regions of the search space. Roles are determined dynamically by each member of the team based on the status of the individual robot relative to the current state of the collective. Partitioning the robot team into alpha and beta roles results in a balance between exploration and exploitation, and can yield collective energy savings and improved resistance to sensor noise and defectors. Alpha robots expend energy exploring new territory, and are more sensitive to the effects of ambient noise and to defectors reporting inflated status. Beta robots conserve energy by moving in a direct path to regions of confirmed high status.
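The dynamic role partitioning can be sketched as a simple decentralized rule: each robot compares its sensor-based status to a collective statistic. Using the median, and the convention that below-median robots take the alpha role, are our illustrative assumptions; the paper's actual status measure and threshold are not reproduced here:

```python
def assign_roles(status):
    """Partition robots into 'alpha' (risk-taking explorers) and 'beta'
    (conservative) roles: a robot whose status falls below the collective
    median goes alpha and explores; the rest remain aggregated as beta."""
    median = sorted(status)[len(status) // 2]
    return ['alpha' if s < median else 'beta' for s in status]
```

Because every robot can compute the rule from shared status reports alone, no central coordinator is required, and the alpha/beta split re-balances automatically as statuses change.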

  11. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
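The Bayes-inference fusion of audio and visual cues can be sketched over a discretized set of candidate speaker directions, assuming conditionally independent observations given the true direction. Both the discretization and the independence assumption are ours, for illustration, not the paper's exact formulation:

```python
def fuse_direction(prior, audio_lik, visual_lik):
    """Posterior over candidate speaker directions by Bayes' rule:
    posterior ∝ prior * p(audio | direction) * p(visual | direction),
    normalized to sum to one."""
    post = [p * a * v for p, a, v in zip(prior, audio_lik, visual_lik)]
    z = sum(post)
    return [p / z for p in post]
```

Even when each unimodal likelihood is ambiguous, their product concentrates on directions both senses agree on, which is the benefit the paper evaluates against audio-only and visual-only localization.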

  12. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    PubMed Central

    2018-01-01

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics. PMID:29534477
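The LIPM that the DLIPM builds on reduces to one linear relation per horizontal axis: the center of mass (CoM) accelerates away from the ZMP in proportion to their offset. A one-axis Euler-integration sketch (the CoM height, step size, and function name are illustrative assumptions):

```python
def lipm_step(x, x_dot, zmp, dt, z_c=0.8, g=9.81):
    """One explicit-Euler step of the Linear Inverted Pendulum Model:
    x_ddot = (g / z_c) * (x - zmp), where x is the CoM position, zmp the
    zero moment point, and z_c the constant CoM height in meters."""
    x_ddot = (g / z_c) * (x - zmp)
    return x + x_dot * dt, x_dot + x_ddot * dt
```

When the ZMP tracks the CoM exactly, the state is stationary; a step-shaped ZMP offset, like the push-recovery perturbations in the experiments, makes the CoM diverge exponentially, which is precisely the behavior the balance controller must counter.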

  13. Cooperative terrain model acquisition by a team of two or three point-robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.; Protopopescu, V.; Manickam, N.

    1996-04-01

    We address the model acquisition problem for an unknown planar terrain by a team of two or three robots. The terrain is cluttered by a finite number of polygonal obstacles whose shapes and positions are unknown. The robots are point-sized and equipped with visual sensors which acquire all visible parts of the terrain by scan operations executed from their locations. The robots communicate with each other via wireless connection. The performance is measured by the number of the sensor (scan) operations, which are assumed to be the most time-consuming of all the robot operations. We employ the restricted visibility graph methods in a hierarchical setup. For terrains with convex obstacles and for teams of n(= 2, 3) robots, we prove that the sensing time is reduced by a factor of 1/n. For terrains with concave corners, the performance of the algorithm depends on the number of concave regions and their depths. A hierarchical decomposition of the restricted visibility graph into n-connected and (n - 1)-or-less connected components is considered. The performance for the n(= 2, 3) robot team is expressed in terms of the sizes of n-connected components, and the sizes and diameters of (n - 1)-or-less connected components.

  14. Iconic memory-based omnidirectional route panorama navigation.

    PubMed

    Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko

    2005-01-01

    A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon when the robot moves to its goal. While the robot is navigating to the goal point, input is matched against the memorized spatio-temporal route pattern by using dual active contour models and the exact robot position and orientation is estimated from the converged shape of the active contour models.

  15. Addressing the Movement of a Freescale Robotic Car Using Neural Network

    NASA Astrophysics Data System (ADS)

    Horváth, Dušan; Cuninka, Peter

    2016-12-01

    This article deals with guiding a small Freescale robotic car along a predefined guide line. The direction of movement of the robot is controlled by neural networks, and the weights (memory) of the neurons are calculated by Hebbian learning from truth tables, i.e., supervised learning with a teacher. Reflective infrared sensors serve as inputs. Experiments are presented that compare two line-tracking control methods for the mobile robot.
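
    A minimal sketch of Hebbian learning with a teacher from such a truth table; the three-sensor layout and the steering encoding are assumptions for illustration, not the article's exact setup:

```python
# Truth table for three reflective IR sensors (1 = line detected) and the
# desired steering command (-1 = turn left, 0 = straight, +1 = turn right).
truth_table = [([1, 0, 0], -1),   # line under left sensor
               ([0, 1, 0],  0),   # line under center sensor
               ([0, 0, 1], +1)]   # line under right sensor

# Hebbian rule with a teacher: w_i += eta * target * input_i
eta = 1.0
weights = [0.0, 0.0, 0.0]
for inputs, target in truth_table:
    for i, x in enumerate(inputs):
        weights[i] += eta * target * x

def steer(sensors):
    """Steering command: sign of the weighted sum of sensor readings."""
    activation = sum(w * s for w, s in zip(weights, sensors))
    return (activation > 0) - (activation < 0)
```

    After one pass over the truth table the weights directly encode the sensor-to-steering mapping, which is why no iterative error minimization is needed.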

  16. Trusted Remote Operation of Proximate Emergency Robots (TROOPER): DARPA Robotics Challenge

    DTIC Science & Technology

    2015-12-01

    sensor in each of the robot’s feet. Additionally, there is a 6-axis IMU that sits in the robot’s pelvis cage. While testing before the Finals, the...Services. Many of the controllers in the autonomic layer have overlapping requirements, such as filtered IMU and force torque data from the robot...the following services during the DRC: • IMU Filtering • Force Torque Filtering • Joint State Publishing • TF (Transform) Broadcasting • Robot Pose

  18. The Modular Design and Production of an Intelligent Robot Based on a Closed-Loop Control Strategy.

    PubMed

    Zhang, Libo; Zhu, Junjie; Ren, Hao; Liu, Dongdong; Meng, Dan; Wu, Yanjun; Luo, Tiejian

    2017-10-14

    Intelligent robots are part of a new generation of robots that are able to sense the surrounding environment, plan their own actions and eventually reach their targets. In recent years, reliance upon robots in both daily life and industry has increased. The protocol proposed in this paper describes the design and production of a handling robot with an intelligent search algorithm and an autonomous identification function. First, the various working modules are mechanically assembled to complete the construction of the work platform and the installation of the robotic manipulator. Then, we design a closed-loop control system and a four-quadrant motor control strategy, with the aid of debugging software, as well as set steering gear identity (ID), baud rate and other working parameters to ensure that the robot achieves the desired dynamic performance and low energy consumption. Next, we debug the sensor to achieve multi-sensor fusion to accurately acquire environmental information. Finally, we implement the relevant algorithm, which can recognize the success of the robot's function for a given application. The advantage of this approach is its reliability and flexibility, as the users can develop a variety of hardware construction programs and utilize the comprehensive debugger to implement an intelligent control strategy. This allows users to set personalized requirements based on their needs with high efficiency and robustness.
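
    The abstract does not give the concrete control law, so the closed-loop idea can only be sketched generically. The following discrete PID loop with a toy first-order motor model uses assumed gains, not the paper's parameters:

```python
def pid_step(setpoint, measured, state, kp=0.8, ki=0.5, kd=0.0, dt=0.01):
    """One update of a discrete PID loop; state = (integral, last_error)."""
    integral, last_error = state
    error = setpoint - measured
    integral += error * dt
    derivative = (error - last_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy first-order motor: its speed relaxes toward the applied command.
speed, state = 0.0, (0.0, 0.0)
for _ in range(2000):                   # 20 s at a 100 Hz control rate
    command, state = pid_step(setpoint=100.0, measured=speed, state=state)
    speed += (command - speed) * 0.01
```

    In a four-quadrant motor drive, the sign of the command would select the bridge direction and its magnitude the PWM duty cycle.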

  19. A Survey and Experimental Evaluation of Proximity Sensors for Space Robotics

    NASA Technical Reports Server (NTRS)

    Volpe, Richard

    1993-01-01

    This paper provides an overview of our selection process for proximity sensors for manipulator collision avoidance. Five categories of sensors have been considered for this use in space operations: intensity of reflection, triangulation, time of flight, capacitive, and inductive.

  20. Enhancing Perception with Tactile Object Recognition in Adaptive Grippers for Human-Robot Interaction.

    PubMed

    Gandarias, Juan M; Gómez-de-Gabriel, Jesús M; García-Cerezo, Alfonso J

    2018-02-26

    The use of tactile perception can help first-response robotic teams in disaster scenarios, where visibility is often reduced by the presence of dust, mud, or smoke, by distinguishing human limbs from other objects with similar shapes. Here, the integration of a flexible tactile sensor into adaptive grippers is evaluated by measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs). A total of 15 classes with 50 tactile images each were trained, including human body parts and common environment objects, in semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against the rigid configuration and a support vector machine (SVM) classifier. Finally, a two-level output network is proposed that provides both object-type recognition and human/non-human classification. Sensors in adaptive grippers register a higher number of non-null tactels (up to 37% more) and lower mean pressure values (up to 72% less) than a rigid sensor, giving the softer grip needed in physical human-robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though human/non-human classification performed better (98.78%) with a rigid sensor.
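
    The two-level output described above can be sketched as pooling the class probabilities of the recognition head into a human/non-human decision. The class names and probability values below are illustrative assumptions, not the paper's 15 classes:

```python
# Hypothetical softmax output of the object-recognition head.
probs = {"hand": 0.55, "forearm": 0.20, "bottle": 0.15, "pipe": 0.10}
HUMAN_CLASSES = {"hand", "forearm"}

# Level 1: object-type recognition (most probable class).
object_type = max(probs, key=probs.get)

# Level 2: human/non-human decision by pooling class probabilities.
p_human = sum(p for c, p in probs.items() if c in HUMAN_CLASSES)
is_human = p_human >= 0.5
```

    Pooling at the second level means the human/non-human decision can be confident even when the network hesitates between two human classes.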
