Vision servo of industrial robot: A review
NASA Astrophysics Data System (ADS)
Zhang, Yujin
2018-04-01
Robot technology has been applied in many areas of production and daily life. With the continuous development of robot applications, the requirements placed on robots are also rising. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are proposed.
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS have lower development costs than those built on conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others.
The user then had to assemble the pieces, and in most instances write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure its success. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.
van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline
2010-11-01
In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology, by encouraging discussions about the quality of positive and negative visions of the future of robotics.
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.
2009-01-01
The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.
Quaternions in computer vision and robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pervin, E.; Webb, J.A.
1982-01-01
Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.
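The abstract gives no formulas; as a hedged illustration of why quaternions simplify such derivations, here is a minimal sketch of 3-D rotation via the Hamilton product (all function and variable names are ours, not from the paper):

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    # Rotate 3-D vector v about a unit axis by angle (radians)
    # using q * v * q^-1 with q = (cos(a/2), sin(a/2) * axis).
    s = math.sin(angle / 2.0)
    q = (math.cos(angle / 2.0), s * axis[0], s * axis[1], s * axis[2])
    q_inv = (q[0], -q[1], -q[2], -q[3])   # inverse of a unit quaternion
    p = (0.0, v[0], v[1], v[2])           # embed the vector as a pure quaternion
    w, x, y, z = quat_mul(quat_mul(q, p), q_inv)
    return (x, y, z)

# Rotating the x-axis 90 degrees about z yields the y-axis.
print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))
```

Composing two rotations is just one more quaternion product, which is where the simplification over rotation matrices and Euler angles shows up.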
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.
Hierarchical Modelling Of Mobile, Seeing Robots
NASA Astrophysics Data System (ADS)
Luh, Cheng-Jye; Zeigler, Bernard P.
1990-03-01
This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
Hierarchical modelling of mobile, seeing robots
NASA Technical Reports Server (NTRS)
Luh, Cheng-Jye; Zeigler, Bernard P.
1990-01-01
This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
2017-06-01
A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging, by Jake A. Jones. Master's thesis, June 2017. Developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles. A new technique is developed and
Design of a surgical robot with dynamic vision field control for Single Port Endoscopic Surgery.
Kobayashi, Yo; Sekiguchi, Yuta; Tomono, Yu; Watanabe, Hiroki; Toyoda, Kazutaka; Konishi, Kozo; Tomikawa, Morimasa; Ieiri, Satoshi; Tanoue, Kazuo; Hashizume, Makoto; Fujie, Masakatsu G.
2010-01-01
Recently, a robotic system was developed to assist Single Port Endoscopic Surgery (SPS). However, the existing system required a manual change of the vision field, hindering the surgical task and increasing the degrees of freedom (DOFs) of the manipulator. We proposed a surgical robot for SPS with dynamic vision field control, in which the endoscope view is manipulated by a master controller. The prototype robot consisted of a positioning and sheath manipulator (6 DOF) for vision field control, and dual tool tissue manipulators (gripping: 5 DOF, cautery: 3 DOF). Feasibility of the robot was demonstrated in vitro. "Cut and vision field control" (using the tool manipulators) is suitable for precise cutting tasks in risky areas, while "cut by vision field control" (using the vision field control manipulator) is effective for rapid macro cutting of tissue. A resection task was accomplished using a combination of both methods.
Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar
2004-07-01
In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast dynamic game that therefore requires an efficient and robust vision system. The vision system is also generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast visual tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm finds the corresponding regions belonging to each class. In the second step, all the regions are examined, and those that are part of the observed object are selected by means of simple logic procedures. The novelty lies in optimizing the processing time needed to estimate possible object positions. Better results are achieved by implementing camera calibration and a shading correction algorithm. The former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
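The two-step operation described above (pixel classification, then region grouping) can be sketched as follows; the nearest-colour classifier and 4-connected flood fill are our illustrative assumptions, not the paper's exact algorithms:

```python
# Step 1: classify each pixel into a colour class.
# Step 2: group 4-connected same-class pixels into regions (flood fill).

def classify(pixel, classes):
    # nearest reference colour by squared RGB distance
    return min(classes, key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, classes[c])))

def find_regions(image, classes):
    h, w = len(image), len(image[0])
    label = [[classify(image[y][x], classes) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if seen[y][x]:
                continue
            cls, stack, pixels = label[y][x], [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] and label[ny][nx] == cls:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            regions.append((cls, pixels))
    return regions

# Tiny 2x2 image: left column reddish ("team"), right column greenish ("field").
colours = {"team": (255, 0, 0), "field": (0, 255, 0)}
img = [[(250, 0, 0), (0, 250, 0)],
       [(250, 10, 0), (0, 240, 0)]]
print(find_regions(img, colours))
```

A production system would use lookup tables and run-length encoding for speed, which is presumably where the paper's processing-time optimization comes in.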
Stereo vision with distance and gradient recognition
NASA Astrophysics Data System (ADS)
Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu
2007-12-01
Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays or ultrasound, a robot can cope with urgent or dangerous situations. But stereo vision of three-dimensional space gives a robot far more capable artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm that recognizes the distance and gradient of the environment through stereo matching.
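The abstract does not state its distance formula; assuming the standard pinhole stereo model, the depth and gradient recovery it describes reduces to a sketch like this (the focal length, baseline and disparity values are our assumptions):

```python
import math

def stereo_depth(focal_px, baseline_m, disparity_px):
    # Pinhole stereo model: depth Z = f * B / d,
    # with f in pixels, baseline B in metres, disparity d in pixels.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def surface_gradient_deg(z_near, z_far, step_m):
    # Slope angle between two reconstructed points separated by step_m,
    # e.g. across an inclined plane or the edge of a stair step.
    return math.degrees(math.atan2(z_far - z_near, step_m))

# 700 px focal length, 10 cm baseline, 35 px disparity -> 2.0 m depth
print(stereo_depth(700.0, 0.10, 35.0))
# A 5 cm rise over a 5 cm run is a 45-degree gradient.
print(surface_gradient_deg(0.00, 0.05, 0.05))
```

The stereo matching step itself (finding the disparity for each pixel) is the expensive part the paper's algorithm addresses; the formulas above only convert its output into distances and slopes.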
Three-Dimensional Images For Robot Vision
NASA Astrophysics Data System (ADS)
McFarland, William D.
1983-12-01
Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economic niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements successfully, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser radar type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.
NASA Technical Reports Server (NTRS)
Lewandowski, Leon; Struckman, Keith
1994-01-01
Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
A remote assessment system with a vision robot and wearable sensors.
Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun
2004-01-01
This paper describes an ongoing research effort on a remote rehabilitation assessment system that has a 6-DOF dual-eye vision robot to capture visual information, and a group of wearable sensors to acquire biomechanical signals. A server computer fixed on the robot provides services to the robot's controller and all the sensors. The robot is connected to the Internet over a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. Preliminary results show that the smart devices, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.
ROBOSIGHT: Robotic Vision System For Inspection And Manipulation
NASA Astrophysics Data System (ADS)
Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh
1989-02-01
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
A robotic vision system to measure tree traits
USDA-ARS?s Scientific Manuscript database
The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...
Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects
NASA Technical Reports Server (NTRS)
Montes, Leticia; Bowers, David; Lumia, Ron
1998-01-01
This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it actively solicits user controls for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design in engaging the human in the loop.
Understanding of and applications for robot vision guidance at KSC
NASA Technical Reports Server (NTRS)
Shawaga, Lawrence M.
1988-01-01
The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.
Robotics research projects report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsia, T.C.
The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rate were used with a real-time, multi-tasking operating system to achieve the required performance. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.
An assembly system based on industrial robot with binocular stereo vision
NASA Astrophysics Data System (ADS)
Tang, Hong; Xiao, Nanfeng
2017-01-01
This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly find the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
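As a hedged illustration of solving inverse kinematics with a genetic algorithm (the paper's actual GA, robot model and transition-matrix details are not given in the abstract), here is a minimal sketch on a planar 2-link arm with elitist selection and Gaussian mutation:

```python
import math
import random

def fk(theta1, theta2, l1=1.0, l2=1.0):
    # Forward kinematics of a planar 2-link arm (link lengths assumed)
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ga_ik(target, pop=60, gens=200, seed=0):
    # Minimise end-effector position error over joint angles with a
    # simple GA: keep the best quarter, breed children by averaging
    # two elite parents and adding Gaussian mutation.
    rng = random.Random(seed)

    def err(ind):
        x, y = fk(*ind)
        return math.hypot(x - target[0], y - target[1])

    population = [(rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=err)
        elite = population[: pop // 4]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append(tuple((u + v) / 2 + rng.gauss(0, 0.1)
                                  for u, v in zip(a, b)))
        population = elite + children
    return min(population, key=err)

best = ga_ik((1.2, 0.8))
print(fk(*best))  # close to the target (1.2, 0.8)
```

For a 6-DOF industrial arm the chromosome would hold six joint angles and the fitness would use the full kinematic chain, but the GA structure is the same.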
Robotic space simulation integration of vision algorithms into an orbital operations simulation
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1987-01-01
In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.
Simplifying applications software for vision guided robot implementation
NASA Technical Reports Server (NTRS)
Duncheon, Charlie
1994-01-01
A simple approach to robot applications software is described. The idea is to use commercially available software and hardware wherever possible to minimize system costs, schedules and risks. The U.S. has been slow to adopt robots and flexible automation compared to the flourishing growth of robot implementation in Japan. The U.S. can benefit from this approach because of a more flexible array of vision guided robot technologies.
System and method for controlling a vision guided robot assembly
Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.
2017-03-07
A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method if a first part from the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing the execution of the visual processing method for determining the position deviation of the second part from the second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing a first action on the first part using the robotic arm with the position deviation of the first part from the first position predetermined by the vision process method.
A lightweight, inexpensive robotic system for insect vision.
Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex
2017-09-01
Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress on understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance.
NASA Astrophysics Data System (ADS)
Paar, G.
2009-04-01
At present, mainly the US has realized planetary space missions with an essential robotics component. Joining institutions, companies and universities from different established groups in Europe and two relevant players from the US, the EC FP7 project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the planets. This will lead to a significant enhancement of the scientific, technological and educational outcome of such missions. We report on the main PRoVisG objectives and the development status: - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing. - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be completed by and combined with modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain.
European taxpayers will be able to monitor the imaging and vision processing in a Mars-like environment, thus getting an insight into the complexity and methods of processing, the potential and decision making of scientific exploitation of such data, and not least the elegance and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to students who are the future providers of European science and technology, inside and outside the space domain.
Sensor Control of Robot Arc Welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1983-01-01
The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
Shared control of a medical robot with haptic guidance.
Xiong, Linfei; Chng, Chin Boon; Chui, Chee Kong; Yu, Peiwu; Li, Yao
2017-01-01
Tele-operation of robotic surgery reduces radiation exposure during interventional radiological operations. However, endoscopic vision without force feedback on the surgical tool increases the difficulty of precise manipulation and the risk of tissue damage. Shared control of vision and force provides a novel approach to enhanced control with haptic guidance, which can lead to subtle dexterity and better maneuverability during MIS surgery. This paper provides an innovative shared control method for a robotic minimally invasive surgery system, in which vision and haptic feedback are incorporated to provide guidance cues to the clinician during surgery. The incremental potential field (IPF) method is utilized to generate a guidance path based on the anatomy of the tissue and surgical tool interaction. Haptic guidance is provided at the master end to assist the clinician during tele-operated surgical robotic tasks. The approach has been validated with path-following and virtual tumor-targeting experiments. The experimental results demonstrate that, compared with vision-only guidance, shared control with vision and haptics improved the accuracy and efficiency of surgical robotic manipulation, reducing the tool-position error distance and execution time. The validation experiments demonstrate that the shared control approach can help the surgical robot system provide stable assistance and precise performance in executing the designated surgical task. The methodology could also be implemented with other surgical robots with different surgical tools and applications.
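The abstract does not detail the incremental potential field (IPF) formulation. As an illustration only, a classic attractive/repulsive potential-field guidance force, which an IPF scheme presumably refines, can be sketched as follows; all gains, radii, and coordinates here are made-up values, not the paper's:

```python
import numpy as np

def guidance_force(tool, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=0.2):
    """Classic potential-field guidance force (attractive + repulsive terms).

    tool, goal: 3-vectors; obstacles: list of 3-vectors (illustrative model).
    Returns the force vector a haptic master could render to the clinician.
    """
    f = k_att * (goal - tool)            # attractive term pulls toward the goal
    for obs in obstacles:
        d = np.linalg.norm(tool - obs)
        if 0 < d < rho0:                 # repulsion only inside influence radius
            f += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (tool - obs) / d
    return f

# obstacle lies off the straight path, outside its influence radius:
force = guidance_force(np.array([0.0, 0.0, 0.0]),
                       np.array([1.0, 0.0, 0.0]),
                       [np.array([0.5, 0.05, 0.0])])
```

Rendering this force at the master side nudges the clinician's hand along the guidance path while pushing it away from forbidden regions.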
A novel method of robot location using RFID and stereo vision
NASA Astrophysics Data System (ADS)
Chen, Diansheng; Zhang, Guanxin; Li, Zhen
2012-04-01
This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which enables the robot to obtain accurate global coordinates while quickly adapting to unfamiliar and new environments. The method uses RFID tags as artificial landmarks: the 3D coordinates of each tag in the global coordinate system are written into its IC memory, which the robot can read through an RFID reader. Meanwhile, using stereo vision, the 3D coordinates of the tags in the robot coordinate system are measured. Combined with the robot's attitude transformation matrix from the pose-measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of the method was 0.11 m in experiments conducted in a 7 m × 7 m lobby, a result much more accurate than other localization methods.
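The coordinate transformation at the heart of this method follows from the rigid-body relation world = R · robot + t. A minimal sketch with a single tag and a known attitude matrix (an assumption for illustration, not the paper's full implementation):

```python
import numpy as np

def locate_robot(p_tag_global, p_tag_robot, R_world_robot):
    """Global robot position from one RFID tag observed by stereo vision.

    p_tag_global : tag coordinates stored in the tag's IC memory (world frame)
    p_tag_robot  : tag coordinates measured by stereo vision (robot frame)
    R_world_robot: robot attitude rotation matrix from the pose-measuring system

    From world = R @ robot + t it follows that t = p_tag_global - R @ p_tag_robot.
    """
    return p_tag_global - R_world_robot @ p_tag_robot

# example: robot at (2, 3, 0), yawed 90 degrees, sees a tag 1 m straight ahead
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = locate_robot(np.array([2.0, 4.0, 0.0]), np.array([1.0, 0.0, 0.0]), R)
```

In practice several tags would be averaged to reduce the stereo measurement error.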
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin
2015-04-22
Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map, and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools, and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.
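The ground-plane estimation step is not specified in the abstract; one standard way to recover a plane from observed ground points is a least-squares fit, sketched here purely as an assumption:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to ground points.

    points: (N, 3) array. Returns (a, b, c); the plane normal is
    (a, b, -1) up to sign and scale.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

# synthetic "relatively flat ground": a gently tilted plane, no noise
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 0.5
a, b, c = fit_ground_plane(np.column_stack([xy, z]))
```

With the ground plane known in both the vision frame and the robot frame, the mounting parameters can then be solved as the optimization problem the abstract describes.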
Integrating Mobile Robotics and Vision with Undergraduate Computer Science
ERIC Educational Resources Information Center
Cielniak, G.; Bellotto, N.; Duckett, T.
2013-01-01
This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…
Machine Vision For Industrial Control:The Unsung Opportunity
NASA Astrophysics Data System (ADS)
Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.
1984-05-01
Vision modules have primarily been developed to relieve pressures newly brought into existence by Inspection (QUALITY) and Robotic (PRODUCTIVITY) mandates. Industrial Control pressure, on the other hand, stems from the older first-industrial-revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on a back burner or ignored it entirely, because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "Visual and Computational Speed Must Be Increased, and Dramatically!" There are modular hardwired-logic systems that are fast, but all too often they are not very bright. Such units measure the fill factor of bottles as they spin by, read labels on cans, count stacked plastic cups, or monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are vision's analog to the robot industry's pick-and-place (RIA Type E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Industry of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include electronics, food, sports, pharmaceuticals, machine tools, and arc welding.
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device that communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.
Computing Optic Flow with ArduEye Vision Sensor
2013-01-01
[Fragmentary report text] Describes an optical flow processing algorithm that can be applied to the flight control of robotic platforms. Subject terms: optical flow, ArduEye, vision-based control. The hardware comprises ArduEye vision chips (Stonyman) on a breakout board connected to an Arduino Mega. The work addresses the significant need for small, light, low-power sensors and sensory data processing algorithms for controlling small robotic platforms.
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.
Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F
2016-03-05
In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future works.
Design And Implementation Of Integrated Vision-Based Robotic Workcells
NASA Astrophysics Data System (ADS)
Chen, Michael J.
1985-01-01
Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.
Perception for mobile robot navigation: A survey of the state of the art
NASA Technical Reports Server (NTRS)
Kortenkamp, David
1994-01-01
In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes and compares several competing sonar-based obstacle avoidance techniques. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class triangulates using fixed, artificial landmarks. A third class builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
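As a hedged sketch of the landmark-based pose-determination idea (not any particular surveyed system), a robot with a known compass heading can fix its 2-D position from a single range/bearing measurement to a mapped landmark:

```python
import math

def pose_from_landmark(landmark, heading, dist, bearing):
    """Robot (x, y) from one mapped landmark, given the robot's compass
    heading and a measured range/bearing to the landmark (2-D sketch)."""
    lx, ly = landmark
    ang = heading + bearing            # world-frame direction toward landmark
    return (lx - dist * math.cos(ang), ly - dist * math.sin(ang))

# landmark mapped at (5, 5); robot heading east sees it 45 degrees to the left
x, y = pose_from_landmark((5.0, 5.0), 0.0, math.sqrt(2.0), math.pi / 4)
```

With two or more landmarks and no compass, the same geometry yields both position and orientation, which is the triangulation approach the survey describes.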
Robot path planning using expert systems and machine vision
NASA Astrophysics Data System (ADS)
Malone, Denis E.; Friedrich, Werner E.
1992-02-01
This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
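A minimal sketch of the incremental idea, using a resolved-rate step with the joint motion clipped to speed limits, on a hypothetical planar two-link arm (the paper's manipulator, estimator, and limits are not given in the abstract):

```python
import numpy as np

def incremental_ik_step(q, x_desired, fk, jacobian, dt=0.01, qdot_max=1.0):
    """One incremental inverse-kinematics update toward a desired end-effector
    position, with the joint step clipped to the joint speed limits.

    Starting each step from the current configuration avoids choosing among
    the multiple closed-form IK solutions, as the abstract describes.
    """
    err = x_desired - fk(q)
    dq = np.linalg.pinv(jacobian(q)) @ err       # resolved-rate step
    dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)
    return q + dq

# illustrative planar 2-link arm with unit link lengths
def fk(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

q = np.array([0.3, 0.5])
target = np.array([1.2, 1.0])        # instantaneous desired position
for _ in range(2000):
    q = incremental_ik_step(q, target, fk, jacobian)
```

In the servo loop, `target` would be re-predicted from the vision-based target motion estimate at every cycle rather than held fixed.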
The Development of a Robot-Based Learning Companion: A User-Centered Design Approach
ERIC Educational Resources Information Center
Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong
2015-01-01
A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. The conclusion is that the single-camera algorithm needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz, and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map and posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), and centralized vs. decentralized maps.
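Occupancy grid updates of the kind mentioned above are commonly done in log-odds form, which makes fusing repeated (and shared) observations a simple addition per cell; a minimal sketch, with illustrative inverse sensor model values rather than the paper's:

```python
import math

def update_cell(log_odds, hit, l_occ=0.85, l_free=-0.4):
    """Log-odds occupancy update for one grid cell from one observation.
    l_occ / l_free are illustrative inverse-sensor-model constants."""
    return log_odds + (l_occ if hit else l_free)

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

l = 0.0                         # prior: p = 0.5 (unknown cell)
for _ in range(3):              # three consistent "occupied" observations
    l = update_cell(l, hit=True)
p = probability(l)              # confidence grows with agreeing evidence
```

Because updates are additive, observations from both robots can be merged into the common map in any order, which suits the deferred posting at home base described in the abstract.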
Three-dimensional vision enhances task performance independently of the surgical method.
Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A
2012-10-01
Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25 % to 30 % longer to complete and more complex tasks took 75 % longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.
Progress in building a cognitive vision system
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Lyons, Damian; Yue, Hong
2016-05-01
We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.
A color-coded vision scheme for robotics
NASA Technical Reports Server (NTRS)
Johnson, Kelley Tina
1991-01-01
Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
Sensory Interactive Teleoperator Robotic Grasping
NASA Technical Reports Server (NTRS)
Alark, Keli; Lumia, Ron
1997-01-01
As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller, and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction on the object in the image is performed, information about the object's location, orientation, and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks, and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results on controlling the robot, on real-time detection of objects using their color, and on processing the robot's range and vision sensor data for navigation.
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1973-01-01
A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
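The abstract mentions two occlusion algorithms without describing them; as an assumption of how such a check might look, here is a simple 2-D line-of-sight test of whether a circular object blocks the robot's view of a target:

```python
import math

def occluded(viewer, target, blocker_center, blocker_radius):
    """True if a circular object blocks the line of sight from viewer to
    target (a toy 2-D version of visual occlusion between scene objects)."""
    vx, vy = viewer
    tx, ty = target
    cx, cy = blocker_center
    dx, dy = tx - vx, ty - vy
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return False
    # closest point on the viewer->target segment to the blocker centre
    t = max(0.0, min(1.0, ((cx - vx) * dx + (cy - vy) * dy) / seg_len2))
    px, py = vx + t * dx, vy + t * dy
    return math.hypot(px - cx, py - cy) < blocker_radius
```

Repeating this test against every other scene object determines which objects are visible from the robot's current position.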
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
Manifold learning in machine vision and robotics
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-02-01
Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks, such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space", with a smaller intrinsic dimensionality. The generally accepted model for such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are presented in this paper.
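As the simplest (linear) special case of discovering low-dimensional structure, the intrinsic dimensionality of data lying near a linear subspace can be read off the singular value spectrum; nonlinear manifold learners such as Isomap or LLE generalize this idea to curved manifolds:

```python
import numpy as np

rng = np.random.default_rng(1)
# 500 observations that actually live on a 2-D subspace of a 10-D space
latent = rng.normal(size=(500, 2))               # intrinsic coordinates
embedding = rng.normal(size=(2, 10))             # linear map to observation space
X = latent @ embedding + 0.01 * rng.normal(size=(500, 10))  # small sensor noise

# estimate intrinsic dimensionality from the singular value spectrum
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = (s ** 2) / (s ** 2).sum()
intrinsic_dim = int((explained > 0.01).sum())    # dimensions with real variance
```

For data on a genuinely curved manifold this linear estimate over-counts, which is exactly the gap that the nonlinear manifold learning techniques in the paper address.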
NASA Astrophysics Data System (ADS)
Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan
2010-02-01
The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems; it differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction, which exploits region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image, so at a higher level those assumptions are verified using slower, more reliable methods. This hierarchy provides robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
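The information-reduction idea above can be sketched in a few lines: predict where a tracked feature will appear next (a constant-velocity assumption here, purely for illustration) and search only a small region-of-interest window around that prediction instead of the whole image. All names and window sizes are invented for the demo.

```python
def predict_next(track):
    """Constant-velocity prediction from the last two feature positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def roi_window(center, half=8):
    """Region-of-interest box around the predicted position; the matcher
    then searches only these (2*half)^2 pixels instead of the full frame."""
    cx, cy = center
    return (cx - half, cy - half, cx + half, cy + half)

track = [(100, 50), (104, 52)]   # feature drifting +4, +2 px per frame
center = predict_next(track)     # expected position in the next frame
window = roi_window(center)      # (x_min, y_min, x_max, y_max) search box
```

If the feature is not found inside the window, the assumption behind the prediction has failed, which is exactly when a slower, more reliable higher-level routine would take over.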
Vision technology/algorithms for space robotics applications
NASA Technical Reports Server (NTRS)
Krishen, Kumar; Defigueiredo, Rui J. P.
1987-01-01
Automation and robotics have been proposed for space applications to increase productivity, reliability, flexibility, and safety; to automate time-consuming tasks; to increase the productivity and performance of crew-accomplished tasks; and to perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.
Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun
2011-01-01
In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector, a circle detection algorithm is used to detect the desired target and a sum-of-absolute-differences (SAD) algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, the 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial and sinusoidal trajectories of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
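The matching-and-triangulation pipeline described above can be sketched on a single scanline. This is a minimal illustration, assuming an already-rectified pair; the patch size, disparity range, and camera parameters are invented for the demo and are not the paper's values.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(p - q) for p, q in zip(a, b))

def match_along_epipolar(left_row, right_row, x, win=3, max_disp=10):
    """After rectification the search is horizontal: slide the left patch
    along the conjugate epipolar line in the right image and keep the
    disparity with the lowest SAD cost."""
    patch = left_row[x:x + win]
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):
        cost = sad(patch, right_row[x - d:x - d + win])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(d, focal_px, baseline_m):
    """Stereo triangulation for a rectified rig: Z = f * B / d."""
    return float("inf") if d == 0 else focal_px * baseline_m / d

# One scanline of a rectified pair; the target appears 4 px to the left
# in the right image (disparity 4).
left  = [0, 0, 0, 0, 0, 0, 0, 9, 8, 7, 0, 0, 0, 0]
right = [0, 0, 0, 9, 8, 7, 0, 0, 0, 0, 0, 0, 0, 0]
disp = match_along_epipolar(left, right, x=7)
depth = depth_from_disparity(disp, focal_px=400.0, baseline_m=0.1)
```

A real system would run this per pixel over 2D windows, but the structure is the same: rectify, scan one row, triangulate.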
3D vision system for intelligent milking robot automation
NASA Astrophysics Data System (ADS)
Akhloufi, M. A.
2013-12-01
In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups; moreover, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras, and this technology will be used in future real-life experimental tests.
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and the upgrade components coupled directly to the mounting and electrical connections. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey, and the stereo gripper camera allows for improved manipulation in typical TALON missions.
Robotic vision. [process control applications]
NASA Technical Reports Server (NTRS)
Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.
1979-01-01
Robotic vision, the use of a vision system to control a process, is discussed. The design and selection of active sensors, which employ radio waves, sound waves, or laser light to light up otherwise unobservable features in the scene, are considered, as are the design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into collections of contiguous picture elements sharing such common characteristics as color, brightness, or texture, is examined, with emphasis on edge detection. The IMFEX (image feature extractor) system, which performs edge detection and thresholding at 30 frames/sec television frame rates, is described. Template matching and discrimination approaches to recognizing objects are noted. Applications of robotic vision in industry, for tasks too monotonous or too dangerous for workers, are mentioned.
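The edge-detection-plus-thresholding step that hardware like IMFEX performs at frame rate can be illustrated in plain Python on a tiny image. The Sobel kernels are standard; the image and threshold are invented for the demo, and a real system would of course operate on full frames in hardware.

```python
# 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3(img, k, y, x):
    """Apply a 3x3 kernel centred on pixel (y, x)."""
    return sum(img[y + i - 1][x + j - 1] * k[i][j]
               for i in range(3) for j in range(3))

def edges(img, thresh):
    """Binary edge map: 1 where gradient magnitude exceeds the threshold."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = convolve3(img, SOBEL_X, y, x)
            gy = convolve3(img, SOBEL_Y, y, x)
            out[y][x] = 1 if gx * gx + gy * gy > thresh * thresh else 0
    return out

# Tiny test image with a vertical step edge between columns 2 and 3.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
edge_map = edges(img, 10)
```

Thresholding the squared gradient magnitude (rather than taking a square root per pixel) is the kind of shortcut that makes frame-rate implementations feasible.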
The research on visual industrial robot which adopts fuzzy PID control algorithm
NASA Astrophysics Data System (ADS)
Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye
2017-03-01
The control system of a six-degrees-of-freedom visual industrial robot, based on multi-axis motion control cards and a PC, was researched. To handle the variable, nonlinear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, which achieved a better control effect. In the vision system, a CCD camera acquires signals and sends them to a video processing card; after processing, the PC controls the motion of the six joints through the motion control cards. In experiments, the manipulator operated together with a machine tool and the vision system to grasp, process, and verify parts. This work has implications for the manufacturing applications of industrial robots.
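The general shape of a fuzzy-adaptive PID loop can be sketched as below: a standard PID law whose proportional gain is rescaled by simple fuzzy rules on the error. The membership function, rule weights, gains, and plant model here are all invented for illustration; they are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_kp(base_kp, err):
    """Two-rule fuzzy gain scheduler: keep the base gain near the
    setpoint, raise it when the error is large (weights illustrative)."""
    small = tri(abs(err), -1.0, 0.0, 1.0)
    large = 1.0 - small
    return base_kp * (small * 1.0 + large * 1.6)

class FuzzyPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return (fuzzy_kp(self.kp, err) * err
                + self.ki * self.integral + self.kd * deriv)

# Drive a toy first-order joint model x' = (u - x) / tau to a 1.0 setpoint.
x, dt, tau = 0.0, 0.01, 0.2
pid = FuzzyPID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
for _ in range(5000):
    u = pid.update(1.0, x)
    x += (u - x) / tau * dt
```

Scheduling the gain on the error is what lets one controller cope with a plant whose effective dynamics change over the workspace, which is the motivation the abstract gives for going beyond fixed-gain PID.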
Vision-based obstacle recognition system for automated lawn mower robot development
NASA Astrophysics Data System (ADS)
Mohd Zin, Zalhan; Ibrahim, Ratnawati
2011-06-01
Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system in capturing and processing images is very important for any intelligent system, such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to an automated, vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on studying different types and sizes of obstacles, developing the vision-based obstacle recognition system, and evaluating the system's performance. Image processing techniques such as filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja
2016-03-01
Joint fracture surgery quality can be improved by a robotic system with high-accuracy, high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy merges fast open-loop control with vision-based control: this two-phase process is designed to eliminate open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. The accuracy of the control system was evaluated in robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system achieved highly reliable fracture reduction, with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors on the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, potentially improving their quality.
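The two-phase strategy can be sketched schematically: a fast open-loop move that lands near the target with some residual error, followed by closed-loop refinement driven by tracker feedback. The error magnitudes, gain, and tolerance below are invented for illustration and are not the paper's figures.

```python
OPEN_LOOP_ERROR = (0.4, -0.3, 0.2)   # residual error of the fast move (mm)

def open_loop_move(target):
    """Phase 1: fast open-loop positioning, accurate only to ~0.5 mm."""
    return [t + e for t, e in zip(target, OPEN_LOOP_ERROR)]

def visual_servo(pose, target_observed, gain=0.5, tol=0.01, max_iters=50):
    """Phase 2: close the loop on optical-tracker feedback, stepping a
    fraction of the observed error each cycle until within tolerance."""
    for _ in range(max_iters):
        err = [t - p for t, p in zip(target_observed, pose)]
        if max(abs(e) for e in err) < tol:
            break
        pose = [p + gain * e for p, e in zip(pose, err)]
    return pose

target = [10.0, 20.0, 5.0]        # desired fragment pose (mm, illustrative)
pose = open_loop_move(target)      # fast but imprecise
pose = visual_servo(pose, target)  # refined with visual feedback
```

The appeal of the split is speed: the open-loop phase covers most of the distance quickly, and the vision loop only has to correct a small residual.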
Impact of 2D and 3D vision on performance of novice subjects using da Vinci robotic system.
Blavier, A; Gaudissart, Q; Cadière, G B; Nyssen, A S
2006-01-01
The aim of this study was to evaluate the impact of 3D and 2D vision on the performance of novice subjects using the da Vinci robotic system. 224 nurses without any surgical experience were divided into two groups and executed a motor task with the robotic system, in 2D for one group and in 3D for the other. Time to perform the task was recorded. Our data showed significantly better time performance with 3D vision (24.67 +/- 11.2) than with 2D vision (40.26 +/- 17.49, P < 0.001). Our findings emphasize the advantage of 3D over 2D vision in performing surgical tasks, encouraging the development of efficient and less expensive 3D systems in order to improve the accuracy of surgical gestures, resident training, and operating time.
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
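The core of the SSL setup above is that stereo depth from the past acts as trusted ground truth for training a monocular estimator. The toy sketch below captures only that supervision pattern: the "monocular cue," the linear model, and the synthetic world where depth happens to be an affine function of that cue are all illustrative placeholders, not the VERTIGO pipeline.

```python
import random

def mono_feature(frame):
    """Stand-in monocular cue (e.g. mean brightness of the frame)."""
    return sum(frame) / len(frame)

random.seed(0)
frames = [[random.uniform(0.0, 1.0) for _ in range(16)] for _ in range(200)]
# Pretend stereo pipeline: in this toy world the true average depth is an
# affine function of the monocular cue, plus a little noise.
stereo_depth = [2.0 * mono_feature(f) + 1.0 + random.gauss(0.0, 0.01)
                for f in frames]

# Self-supervised training: fit depth ~ w * feature + b by SGD, with the
# past stereo estimates acting as trusted ground truth labels.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    for f, z in zip(frames, stereo_depth):
        feat = mono_feature(f)
        err = (w * feat + b) - z
        w -= lr * err * feat
        b -= lr * err

# After one camera "fails", depth is still available monocularly.
pred = w * mono_feature(frames[0]) + b
```

The reliability argument in the article comes from exactly this asymmetry: the supervised signal is produced on board by a sensor the robot already trusts, so no external labels are needed.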
Marking parts to aid robot vision
NASA Technical Reports Server (NTRS)
Bales, J. W.; Barker, L. K.
1981-01-01
The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.
Method of mobile robot indoor navigation by artificial landmarks with use of computer vision
NASA Astrophysics Data System (ADS)
Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.
2018-05-01
The article describes an algorithm for mobile robot indoor navigation based on visual odometry. The results of an experiment identifying errors in the calculated distance traveled under wheel slip are presented. It is shown that the use of computer vision allows erroneous robot coordinates to be corrected with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. The results of an experiment on mobile robot navigation using this control system are presented.
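The correction idea can be reduced to a small sketch: dead-reckoned coordinates drift (for example under wheel slip), and sighting an artificial landmark whose map position is known snaps the estimate back. The landmark table, slip factor, and the assumption of a known heading are all invented for the demo.

```python
KNOWN_LANDMARKS = {"L1": (5.0, 0.0)}   # landmark id -> world (x, y), metres

class Odometry:
    def __init__(self):
        self.x, self.y = 0.0, 0.0

    def advance(self, dx, dy):
        """Dead reckoning: integrate commanded displacements."""
        self.x += dx
        self.y += dy

    def correct(self, landmark_id, rel_x, rel_y):
        """Vision reports the landmark at (rel_x, rel_y) in the robot
        frame (heading assumed known); recompute the robot position."""
        lx, ly = KNOWN_LANDMARKS[landmark_id]
        self.x, self.y = lx - rel_x, ly - rel_y

odo = Odometry()
for _ in range(10):
    odo.advance(0.55, 0.0)   # odometry over-counts 0.5 m steps by 10% (slip)
# Dead reckoning now believes 5.5 m, but the robot actually moved 5.0 m.
odo.correct("L1", 0.0, 0.0)  # camera sees landmark L1 at zero offset
```

A full system would also correct heading and fuse rather than overwrite the estimate, but the structure (integrate, then re-anchor on a landmark) is the same.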
Design and control of an embedded vision guided robotic fish with multiple control surfaces.
Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei
2014-01-01
This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability of the body plus caudal fin and the complementary maneuverability of the accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator (CPG) based control method is employed, while monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. In particular, a pelvic-fin-actuated sideward swimming gait was implemented for the first time. It was also found that the speed and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of a swimming robot propelled by a single control surface.
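CPG-based control typically drives each fin joint with a coupled nonlinear oscillator; a common building block is the Hopf oscillator, sketched below. This is a generic illustration of the technique, not the paper's specific CPG network, and the parameters are arbitrary.

```python
import math

def hopf_step(x, y, mu=1.0, omega=2 * math.pi, dt=0.001):
    """One Euler step of a Hopf oscillator, which converges to a stable
    limit cycle of amplitude sqrt(mu) and angular frequency omega."""
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

x, y = 0.1, 0.0           # start near the origin
for _ in range(20000):    # 20 s of simulated time at dt = 1 ms
    x, y = hopf_step(x, y)
amplitude = math.hypot(x, y)
```

The attraction of such oscillators for swimming gaits is that amplitude and frequency are explicit parameters: sensory feedback (here, the vision system) can retune `mu` or `omega` on the fly and the rhythmic output transitions smoothly rather than jumping.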
Function-based design process for an intelligent ground vehicle vision system
NASA Astrophysics Data System (ADS)
Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.
2010-10-01
An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
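The ray-casting step described above can be sketched on an occupancy grid: fuse sensor hits into the grid, then cast rays along candidate headings and keep those that stay clear. The grid size, obstacle placement, step length, and range below are invented for the demo; the competition robot's actual algorithm details are not reproduced here.

```python
import math

GRID_W = GRID_H = 20
GRID = [[0] * GRID_W for _ in range(GRID_H)]   # 0 = free, 1 = obstacle
for gy in range(8, 12):
    GRID[gy][12] = 1                            # short wall ahead of the robot

def ray_is_clear(x, y, heading_deg, max_range=10.0, step=0.25):
    """March along the ray in small steps; the heading is blocked if the
    ray enters an occupied cell before leaving range (or the grid)."""
    th = math.radians(heading_deg)
    d = 0.0
    while d <= max_range:
        cx, cy = int(x + d * math.cos(th)), int(y + d * math.sin(th))
        if not (0 <= cx < GRID_W and 0 <= cy < GRID_H):
            break
        if GRID[cy][cx]:
            return False
        d += step
    return True

# Robot at (4, 10) scanning candidate headings; 0 deg points along +x.
open_headings = [h for h in range(-60, 61, 15) if ray_is_clear(4.0, 10.0, h)]
```

Because each ray is independent, this kind of scan parallelizes naturally, which fits the paper's note that ray casting runs as one thread of a multithreaded application.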
Design of a dynamic test platform for autonomous robot vision systems
NASA Technical Reports Server (NTRS)
Rich, G. C.
1980-01-01
The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated rover motions and thus examined in a controlled, quantitative fashion. Defining and modeling rover motions and designing the platform to emulate those motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.
Robotic Anesthesia – A Vision for the Future of Anesthesia
Hemmerling, Thomas M; Taddei, Riccardo; Wehbe, Mohamad; Morse, Joshua; Cyr, Shantale; Zaouter, Cedrick
2011-01-01
Summary This narrative review describes a rationale for robotic anesthesia. It offers a first classification of robotic anesthesia by separating it into pharmacological robots and robots for aiding or replacing manual gestures. Developments in closed loop anesthesia are outlined. First attempts to perform manual tasks using robots are described. A critical analysis of the delayed development and introduction of robots in anesthesia is delivered. PMID:23905028
Vision-based semi-autonomous outdoor robot system to reduce soldier workload
NASA Astrophysics Data System (ADS)
Richardson, Al; Rodgers, Michael H.
2001-09-01
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions, greatly reducing the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
Development of dog-like retrieving capability in a ground robot
NASA Astrophysics Data System (ADS)
MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary
2013-01-01
This paper presents the Mobile Intelligence Team's approach to the CANINE outdoor ground robot competition, which required developing a robot that provided retrieving capabilities similar to a dog's while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were quickly learning the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operation, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.
Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.
Deng, Fucheng; Zhu, Xiaorui; He, Chao
2017-09-13
Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is their high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting the traversable region outdoors. In the proposed method, an appearance model based on a multivariate Gaussian is quickly constructed from a sample region in the left image, determined adaptively by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as on a real mobile robot. Implementation on the mobile robot has demonstrated its ability in real-time navigation applications.
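The appearance-model step can be illustrated as follows: fit a Gaussian to pixels sampled from a region assumed traversable, then classify other pixels by Mahalanobis distance. For brevity this sketch uses a diagonal covariance and an invented threshold; the paper's full multivariate model and its sample-region selection (vanishing point, dominant borders) are not reproduced.

```python
import random

def fit_gaussian(pixels):
    """Per-channel mean and variance of RGB samples (diagonal covariance
    for brevity; a full covariance would capture channel correlations)."""
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    var = [sum((p[c] - mean[c]) ** 2 for p in pixels) / n + 1e-6
           for c in range(3)]
    return mean, var

def mahalanobis_sq(px, mean, var):
    """Squared Mahalanobis distance under a diagonal Gaussian."""
    return sum((px[c] - mean[c]) ** 2 / var[c] for c in range(3))

# Sample region assumed traversable: grey road pixels near RGB (100,
# 100, 100) with a little noise (synthetic stand-in for the image patch).
random.seed(1)
sample = [[100 + random.gauss(0, 5) for _ in range(3)] for _ in range(500)]
mean, var = fit_gaussian(sample)

def traversable(px, thresh_sq=16.0):
    """Pixels statistically close to the road model count as traversable."""
    return mahalanobis_sq(px, mean, var) < thresh_sq
```

Because the model is refit from a fresh sample region in every frame, the classifier adapts to changing road appearance without any hand labelling, which is what makes the scheme self-supervised.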
Expedient range enhanced 3-D robot colour vision
NASA Astrophysics Data System (ADS)
Jarvis, R. A.
1983-01-01
Computer vision has been chosen, in many cases, as offering the richest form of sensory information for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than at providing humans with a detailed description of what the scene 'means'. Attention is given to the overall system configuration, hue transforms, connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher-level structure, eye-in-hand research, and aspects of array and video stream processing.
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012, with the goal of developing innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return it to the operator. Each of the six phases became more difficult, introducing clutter of the same color or shape as the object, moving and stationary obstacles, and an operator who moved from the starting location to a new one. The Robotic Research team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup, and a robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
NASA Astrophysics Data System (ADS)
Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian
2012-06-01
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
Robotic Quantification of Position Sense in Children With Perinatal Stroke.
Kuczynski, Andrea M; Dukelow, Sean P; Semrau, Jennifer A; Kirton, Adam
2016-09-01
Background: Perinatal stroke is the leading cause of hemiparetic cerebral palsy. Motor deficits and their treatment are commonly emphasized in the literature. Sensory dysfunction may be an important contributor to disability, but it is difficult to measure accurately clinically. Objective: Use robotics to quantify position sense deficits in hemiparetic children with perinatal stroke and determine their association with common clinical measures. Methods: Case-control study. Participants were children aged 6 to 19 years with magnetic resonance imaging-confirmed unilateral perinatal arterial ischemic stroke or periventricular venous infarction and symptomatic hemiparetic cerebral palsy. Participants completed a position matching task using an exoskeleton robotic device (KINARM). Position matching variability, shift, and expansion/contraction area were measured with and without vision. Robotic outcomes were compared across stroke groups and controls and to clinical measures of disability (Assisting Hand Assessment) and sensory function. Results: Forty stroke participants (22 arterial, 18 venous, median age 12 years, 43% female) were compared with 60 healthy controls. Position sense variability was impaired in arterial (6.01 ± 1.8 cm) and venous (5.42 ± 1.8 cm) stroke compared to controls (3.54 ± 0.9 cm, P < .001) with vision occluded. Impairment remained when vision was restored. Robotic measures correlated with functional disability. Sensitivity and specificity of clinical sensory tests were modest. Conclusions: Robotic assessment of position sense is feasible in children with perinatal stroke. Impairment is common and worse in arterial lesions. Limited correction with vision suggests cortical sensory network dysfunction. Disordered position sense may represent a therapeutic target in hemiparetic cerebral palsy. © The Author(s) 2016.
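The position-matching variability metric reported above can be sketched in a few lines. This is an illustrative computation only; the actual KINARM analysis pipeline is not described in the abstract, and the function name, data layout, and numbers here are assumptions:

```python
import numpy as np

def matching_variability(matches):
    """Mean, across targets, of the per-target scatter (RMS distance to
    the per-target centroid) of the matched hand positions, in cm.
    `matches` maps a target id to an array of (x, y) matched positions
    over repeated trials."""
    scatters = []
    for pts in matches.values():
        pts = np.asarray(pts, dtype=float)
        centroid = pts.mean(axis=0)
        # RMS radial scatter about the per-target centroid
        scatters.append(np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1))))
    return float(np.mean(scatters))

# Tighter scatter (better position sense) scores lower
rng = np.random.default_rng(0)
tight = {t: rng.normal(0.0, 0.5, (6, 2)) for t in range(4)}
loose = {t: rng.normal(0.0, 2.0, (6, 2)) for t in range(4)}
v_tight, v_loose = matching_variability(tight), matching_variability(loose)
```

A larger value corresponds to the impairment reported for the stroke groups relative to controls.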
A simple, inexpensive, and effective implementation of a vision-guided autonomous robot
NASA Astrophysics Data System (ADS)
Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James
2006-10-01
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A secondhand electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. In order to control the wheelchair while retaining its robust motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and could receive commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each algorithm used color segmentation to interpret data from a digital camera and identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Chen, Alexander Y. K.
1991-01-01
Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve the product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystem provides real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning.
The ARS currently has 18 degrees of freedom, made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.
Influence of control parameters on the joint tracking performance of a coaxial weld vision system
NASA Technical Reports Server (NTRS)
Gangl, K. J.; Weeks, J. L.
1985-01-01
The first phase of a series of evaluations of a vision-based welding control sensor for the Space Shuttle Main Engine Robotic Welding System is described. The robotic welding system is presently under development at the Marshall Space Flight Center. This evaluation determines the standard control response parameters necessary for proper trajectory of the welding torch along the joint.
Robots in Space -Psychological Aspects
NASA Technical Reports Server (NTRS)
Sipes, Walter E.
2006-01-01
A viewgraph presentation on the psychological aspects of developing robots to perform routine operations associated with monitoring, inspection, maintenance and repair in space is shown. The topics include: 1) Purpose; 2) Vision; 3) Current Robots in Space; 4) Ground Based Robots; 5) AERCam; 6) Rotating Bladder Robot (ROBLR); 7) DART; 8) Robonaut; 9) Full Immersion Telepresence Testbed; 10) ERA; and 11) Psychological Aspects
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Virtual Reality System Offers a Wide Perspective
NASA Technical Reports Server (NTRS)
2008-01-01
Robot Systems Technology Branch engineers at Johnson Space Center created the remotely controlled Robonaut for use as an additional "set of hands" in extravehicular activities (EVAs) and to allow exploration of environments that would be too dangerous or difficult for humans. One of the problems Robonaut developers encountered was that the robot's interface offered an extremely limited field of vision. Johnson robotics engineer, Darby Magruder, explained that the 40-degree field-of-view (FOV) in initial robotic prototypes provided very narrow tunnel vision, which posed difficulties for Robonaut operators trying to see the robot's surroundings. Because of the narrow FOV, NASA decided to reach out to the private sector for assistance. In addition to a wider FOV, NASA also desired higher resolution in a head-mounted display (HMD) with the added ability to capture and display video.
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
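The idea of restricting processing to a small local window can be illustrated with a minimal tracker that searches a neighbourhood around the previous window centre for the best sum-of-squared-differences (SSD) match. This is a hedged sketch, not the University of Tokyo processor; the function name, array sizes, and search radius are all assumptions:

```python
import numpy as np

def track_window(frame, template, center, search=5):
    """Exhaustive SSD search in a small window around `center`;
    returns the best-matching window centre as (row, col)."""
    h, w = template.shape
    best, best_c = None, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = center[0] + dy, center[1] + dx
            patch = frame[y - h // 2: y - h // 2 + h,
                          x - w // 2: x - w // 2 + w]
            if patch.shape != template.shape:
                continue                      # window fell off the image
            ssd = np.sum((patch - template) ** 2)
            if best is None or ssd < best:
                best, best_c = ssd, (y, x)
    return best_c

# A distinctive 7x7 patch moves from (30, 30) to (33, 32) between frames
rng = np.random.default_rng(4)
pattern = rng.uniform(0.5, 1.0, (7, 7))
frame1 = np.zeros((64, 64)); frame1[27:34, 27:34] = pattern
frame2 = np.zeros((64, 64)); frame2[30:37, 29:36] = pattern
template = frame1[27:34, 27:34]
print(track_window(frame2, template, (30, 30)))  # (33, 32)
```

Because only an 11 x 11 set of candidate positions is examined rather than the full image, per-frame cost stays low, which is the property the multi-window system exploits for speed.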
Chinellato, Eris; Del Pobil, Angel P
2009-06-01
The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
JPL Robotics Technology Applicable to Agriculture
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol Gabriel; Kyte, L.
2008-01-01
This slide presentation describes several technologies developed for robotics that are applicable to agriculture. The technologies discussed are the detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation, and transfer to growth media.
USAF Summer Faculty Research Program. 1981 Research Reports. Volume I.
1981-10-01
Includes a report by Dr. Martin D. Altschuler (PhD, Physics and Astronomy, 1964; specialty: robot vision and surface mapping) covering in-line inspection and control, computer-aided manufacturing, robot vision, and mapping of machine parts and castings.
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured in a calibration target; the concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid pinhole model and the MLPNN are used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed novel calibration approach can achieve a highly accurate model of the structured light vision sensor.
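The hybrid idea above, an ideal pinhole part plus a learned correction for whatever the pinhole model leaves behind, can be sketched on synthetic data. Here a cubic polynomial least-squares fit stands in for the paper's MLPNN, and the camera numbers, distortion model, and variable names are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def project_true(P):
    """Synthetic 'real' camera: pinhole plus mild radial distortion
    (assumed numbers; a pure pinhole model cannot capture the cubic term)."""
    x, y = P[:, 0] / P[:, 2], P[:, 1] / P[:, 2]
    d = 1.0 + 0.08 * (x * x + y * y)
    return np.stack([800 * x * d + 320, 800 * y * d + 240], 1)

# Calibration points (e.g. projected circle centres), in the camera frame
P = np.column_stack([rng.uniform(-0.3, 0.3, 200),
                     rng.uniform(-0.3, 0.3, 200),
                     rng.uniform(0.8, 1.5, 200)])
uv = project_true(P)
x, y = P[:, 0] / P[:, 2], P[:, 1] / P[:, 2]

# Step 1: fit the ideal pinhole part (linear least squares in f, cx, cy)
A = np.zeros((400, 3))
A[0::2, 0], A[0::2, 1] = x, 1.0
A[1::2, 0], A[1::2, 2] = y, 1.0
f, cx, cy = np.linalg.lstsq(A, uv.ravel(), rcond=None)[0]
uv_pinhole = np.stack([f * x + cx, f * y + cy], 1)
resid = uv - uv_pinhole

# Step 2: fit a residual model on the leftover reprojection error
# (a cubic polynomial stands in here for the paper's MLPNN)
r2 = x * x + y * y
Phi = np.column_stack([x * r2, y * r2, x, y, np.ones_like(x)])
W = np.linalg.lstsq(Phi, resid, rcond=None)[0]

err_pinhole = np.linalg.norm(resid, axis=1).mean()
err_hybrid = np.linalg.norm(resid - Phi @ W, axis=1).mean()
print(err_hybrid < err_pinhole)  # True: the hybrid model is far more accurate
```

The residual stage only has to model what the pinhole stage cannot, which is why a small corrector (MLPNN in the paper, a polynomial here) is enough.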
Advanced Robotics for Air Force Operations
1989-06-01
The study (1) evaluated current and potential uses of advanced robotics to support Air Force systems and (2) recommended the most effective applications of advanced robotics.
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597
GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa
2004-01-01
The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
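The Adaboost training algorithm mentioned above can be sketched with axis-aligned decision stumps on toy two-class data. This is a generic illustration of boosting, not the UCSD cascade; the data, names, and numbers are assumptions:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """AdaBoost with axis-aligned decision stumps; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                    # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None                            # lowest weighted-error stump
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (+1, -1):
                    pred = s * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = s * np.where(X[:, j] <= thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)      # up-weight the mistakes
        w = w / w.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= thr, 1, -1)
                for a, j, thr, s in ensemble)
    return np.sign(score)

# Toy two-class data standing in for "object vs. clutter" features
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
ens = train_adaboost(X, y)
acc = (predict(ens, X) == y).mean()
```

A cascade, as used in the detection systems above, chains several such boosted classifiers so that easy negatives are rejected cheaply in the early stages.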
Bio-inspired vision based robot control using featureless estimations of time-to-contact.
Zhang, Haijie; Zhao, Jianguo
2017-01-31
Marvelous vision based dynamic behaviors of insects and birds such as perching, landing, and obstacle avoidance have inspired scientists to propose the idea of time-to-contact, which is defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since time-to-contact can be estimated directly from consecutive images using only a vision sensor, it is widely used for a variety of robots to fulfill various tasks such as obstacle avoidance, docking, chasing, perching and landing. However, most existing methods to estimate the time-to-contact need to extract and track features during the control process, which is time-consuming and cannot be applied to robots with limited computation power. In this paper, we adopt a featureless estimation method, extend this method to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error based controller with a gain-scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control are conducted using a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot being controlled to successfully dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.
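The core time-to-contact relation, tau = s/(ds/dt) for an apparent size s, can be checked on a simulated constant-velocity approach. This sketch omits the paper's angular-velocity extension and Kalman filtering, and the scene numbers and names are assumptions:

```python
import numpy as np

def ttc_from_scale(sizes, dt):
    """Time-to-contact from apparent object size s: tau = s / (ds/dt).
    For a pinhole camera s = f*S/Z, so tau equals Z/v exactly at constant
    closing speed; no feature correspondence is needed beyond a size estimate."""
    s = np.asarray(sizes, float)
    return s / np.gradient(s, dt)

# Simulated constant-velocity approach toward a wall (assumed numbers)
dt, v, Z0 = 0.1, 1.0, 10.0
t = np.arange(0.0, 5.0, dt)
Z = Z0 - v * t                 # range to the wall (m)
sizes = 500.0 / Z              # apparent size in pixels (f*S = 500 assumed)
tau = ttc_from_scale(sizes, dt)
err = np.abs(tau - Z / v)      # compare against the true time-to-contact
```

The finite-difference derivative introduces only a small error away from the sequence endpoints, which is why simple filtering (a Kalman filter in the paper) suffices to smooth the estimates.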
Vision Based Localization in Urban Environments
NASA Technical Reports Server (NTRS)
McHenry, Michael; Cheng, Yang; Matthies, Larry
2005-01-01
As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location, and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step, and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
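The particle-filter localization step can be sketched generically: predict each particle with the odometry estimate, weight by a measurement likelihood against mapped landmarks, and resample. This is not JPL's implementation; the range-to-landmark measurement model, noise levels, and names are assumptions standing in for the stereo-derived building features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Known landmark positions (e.g. building corners from the a priori map)
landmarks = np.array([[0.0, 10.0], [10.0, 0.0], [0.0, 0.0]])

def ranges(ps):
    """Range from each pose (or particle) to every landmark."""
    return np.stack([np.linalg.norm(ps - lm, axis=1) for lm in landmarks], 1)

def pf_step(particles, motion, z, sigma=0.5):
    """One predict/update/resample cycle of the particle filter."""
    # Predict: apply odometry with additive process noise
    particles = particles + motion + rng.normal(0.0, 0.05, particles.shape)
    # Update: weight particles by the range-measurement likelihood
    innov = np.linalg.norm(ranges(particles) - z, axis=1)
    w = np.exp(-0.5 * (innov / sigma) ** 2)
    w = w / w.sum()
    # Resample (multinomial, for simplicity)
    idx = rng.choice(len(particles), len(particles), p=w)
    return particles[idx]

true_pose = np.array([2.0, 3.0])
particles = rng.uniform(0.0, 10.0, (2000, 2))   # global uncertainty at start
for _ in range(15):
    step = np.array([0.2, 0.1])                 # odometry for this step
    true_pose = true_pose + step
    z = ranges(true_pose[None])[0] + rng.normal(0.0, 0.1, 3)
    particles = pf_step(particles, step, z)
est = particles.mean(axis=0)
```

Because each particle is a full pose hypothesis, the filter naturally represents the multiple possible locations mentioned in the abstract until the measurements disambiguate them.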
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.
Robotic Vision, Tray-Picking System Design Using Multiple, Optical Matched Filters
NASA Astrophysics Data System (ADS)
Leib, Kenneth G.; Mendelsohn, Jay C.; Grieve, Philip G.
1986-10-01
The optical correlator is applied to a robotic vision, tray-picking problem. Complex matched filters (MFs) are designed to provide sufficient optical memory for accepting any orientation of the desired part, and a multiple holographic lens (MHL) is used to increase the memory for continuous coverage. It is shown that with appropriate thresholding a small part can be selected using optical matched filters. A number of criteria are presented for optimizing the vision system. Two of the part-filled trays used by Mendelsohn are considered in this paper, which is the analog (optical) extension of his work. Our view in this paper is of the optical correlator as a cueing device for subsequent, finer vision techniques.
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in the image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
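The adaptive-estimation idea, a measurement linearly parameterized by the unknown quantity and a gradient law driving the estimate, can be sketched in miniature. This is a generic Slotine-Li-style illustration of exponential convergence under persistent excitation, not the paper's omnidirectional projection model; all symbols and gains are assumptions:

```python
import numpy as np

# Gradient adaptive law for a linearly parameterized measurement
# y(t) = phi(t)^T p:   p_hat' = -gamma * phi * (phi^T p_hat - y).
# With a persistently exciting regressor phi, the estimation error
# converges exponentially, mirroring the paper's convergence result.
p_true = np.array([2.0, -1.0])     # unknown parameters (e.g. a position)
p_hat = np.zeros(2)                # initial estimate
gamma, dt = 2.0, 0.01              # adaptation gain, integration step
errs = []
for k in range(3000):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(0.7 * t)])  # persistently exciting
    y = phi @ p_true                               # noiseless measurement
    p_hat -= gamma * phi * (phi @ p_hat - y) * dt  # Euler-integrated law
    errs.append(np.linalg.norm(p_hat - p_true))
```

In the paper the regressor is built from tracked feature points and the measured velocity and orientation, but the structure of the update is the same.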
Vision Guided Intelligent Robot Design And Experiments
NASA Astrophysics Data System (ADS)
Slutzky, G. D.; Hall, E. L.
1988-02-01
The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaption to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert systems approaches in solving real world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.
NASA Astrophysics Data System (ADS)
Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques
2005-06-01
The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.
Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators
Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi
2013-01-01
Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion in mind and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body ownership illusions were induced by correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of telepresence systems in which operators incorporate BMI-operated robots into their body representations. PMID:23928891
Kinesthetic deficits after perinatal stroke: robotic measurement in hemiparetic children.
Kuczynski, Andrea M; Semrau, Jennifer A; Kirton, Adam; Dukelow, Sean P
2017-02-15
While sensory dysfunction is common in children with hemiparetic cerebral palsy (CP) secondary to perinatal stroke, it is an understudied contributor to disability with limited objective measurement tools. Robotic technology offers the potential to objectively measure complex sensorimotor function but has been understudied in perinatal stroke. The present study aimed to quantify kinesthetic deficits in hemiparetic children with perinatal stroke and determine their association with clinical function. Case-control study. Participants were 6-19 years of age. Stroke participants had MRI confirmed unilateral perinatal arterial ischemic stroke or periventricular venous infarction, and symptomatic hemiparetic cerebral palsy. Participants completed a robotic assessment of upper extremity kinesthesia using a robotic exoskeleton (KINARM). Four kinesthetic parameters (response latency, initial direction error, peak speed ratio, and path length ratio) and their variabilities were measured with and without vision. Robotic outcomes were compared across stroke groups and controls and to clinical measures of sensorimotor function. Forty-three stroke participants (23 arterial, 20 venous, median age 12 years, 42% female) were compared to 106 healthy controls. Stroke cases displayed significantly impaired kinesthesia that remained when vision was restored. Kinesthesia was more impaired in arterial versus venous lesions and correlated with clinical measures. Robotic assessment of kinesthesia is feasible in children with perinatal stroke. Kinesthetic impairment is common and associated with stroke type. Failure to correct with vision suggests sensory network dysfunction.
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-01-01
Current available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
Recycling of electrical motors by automatic disassembly
NASA Astrophysics Data System (ADS)
Karlsson, Björn; Järrhed, Jan-Ove
2000-04-01
This paper presents a robotized workstation for end-of-life treatment of electrical motors with a power rating of about 1 kW. Such motors can, for example, be found in washing machines and in industry. The work comprises two main steps. The first step is an inspection in which the functionality of the motor is checked and the motor is classified either for re-use or for disassembly. In the second step the motors classified for disassembly are disassembled in a robotized automatic station. In the initial step, measurements are performed during a start-up sequence of about 1 s. By measuring the rotation speed and the current and voltage of the motor's three phases, the classification for either reuse or disassembly can be made. During the disassembly work, vision data are fused in order to classify the motors according to their type. The vision system also feeds the control system of the robot with various object co-ordinates, to facilitate correct operation of the robot. Finally, tests with a vision system and eddy-current equipment are performed to decide whether all copper has been removed from the stator.
Laser electro-optic system for rapid three-dimensional (3-D) topographic mapping of surfaces
NASA Technical Reports Server (NTRS)
Altschuler, M. D.; Altschuler, B. R.; Taboada, J.
1981-01-01
It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
IMU-based online kinematic calibration of robot manipulator.
Du, Guanglong; Zhang, Ping
2013-01-01
Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. An Extended Kalman Filter (EKF) is then used to estimate the kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.
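As a rough illustration of the EKF stage described in this abstract, the following sketch performs one measurement update of a kinematic parameter-error vector from an orientation residual. All names, dimensions, and numbers here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ekf_update(theta, P, H, residual, R):
    """One EKF measurement update for the kinematic parameter-error vector.

    theta    : current estimate of the parameter errors
    P        : covariance of that estimate
    H        : Jacobian of the orientation measurement w.r.t. theta (assumed)
    residual : measured-minus-predicted orientation (innovation)
    R        : measurement noise covariance
    """
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    theta = theta + K @ residual
    P = (np.eye(len(theta)) - K @ H) @ P
    return theta, P

# One update: a single orientation component observes the first parameter.
theta, P = ekf_update(np.zeros(2), np.eye(2),
                      np.array([[1.0, 0.0]]), np.array([0.5]),
                      np.array([[0.1]]))
```

In the paper's setting, H would come from linearizing the manipulator's orientation kinematics with respect to the parameter errors; here it is just a hand-picked 1x2 matrix.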
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the position of each point in three-dimensional space is then calculated from the disparity between the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics that relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
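The ranging step this abstract relies on is the standard rectified-stereo disparity relation Z = f·B/d. A minimal sketch, with illustrative numbers (this is not the Queen Victoria Algorithm itself, which addresses the feature matching that produces the disparities):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched point pair from horizontal disparity (rectified rig)."""
    disparity = x_left - x_right           # disparity in pixels
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# A feature at x=320 px (left) and x=300 px (right), f=700 px, B=0.12 m:
z = stereo_depth(320.0, 300.0, 700.0, 0.12)   # depth in metres
```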
Algorithms and architectures for robot vision
NASA Technical Reports Server (NTRS)
Schenker, Paul S.
1990-01-01
The scope of the current work is to develop practical sensing implementations for robots operating in complex, partially unstructured environments. A focus of this work is to develop object models and estimation techniques that are specific to the requirements of robot locomotion, approach and avoidance, and grasp and manipulation. Such problems have to date received limited attention in either computer or human vision - in essence, asking not only how perception is modeled in general, but also what the functional purpose of its underlying representations is. As in the past, researchers are drawing on ideas from both the psychological and machine vision literature. Of particular interest is the development of 3-D shape and motion estimates for complex objects when given only partial and uncertain information and when such information is incrementally accrued over time. Current studies consider the use of surface motion, contour, and texture information, with the longer range goal of developing a fused sensing strategy based on these sources and others.
NASA Astrophysics Data System (ADS)
Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping
2017-12-01
In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper, and the bolts used to fix the drop switch. To address this, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the minimum registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line, and a sequence of regions containing candidate matching points is generated from the neighborhood of the epipolar line; the optimal matching region is identified by computing the correlation between the template image from the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching region. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, and the positioning accuracy in the world coordinate system is within 3 mm; the positioning accuracy of the binocular vision thus satisfies the requirements of dismounting and assembling the drop switch.
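The correlation matching between a left-view template and candidate regions near the epipolar line can be sketched with normalized cross-correlation. This is a generic NCC search along a horizontal strip, assuming rectified views; the function names and toy data are illustrative, not from the paper:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(strip, template):
    """Slide the template along a horizontal strip (rectified epipolar band)."""
    h, w = template.shape
    scores = [ncc(strip[:, x:x + w], template)
              for x in range(strip.shape[1] - w + 1)]
    return int(np.argmax(scores)), max(scores)

# Toy data: the template embedded at column 5 of an otherwise empty strip.
template = np.arange(16.0).reshape(4, 4)
strip = np.zeros((4, 12))
strip[:, 5:9] = template
idx, score = best_match(strip, template)
```

In the paper's setting, the strip would be the band of regions generated around the computed epipolar line rather than a single image row band.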
NASA Astrophysics Data System (ADS)
1985-01-01
A new invention by scientists who have copied the structure of the human eye will help replace a human telescope-watching astronomer with a robot. It will be possible to provide technical vision not only for robot astronomers but also for their industrial fellow robots. So far, an artificial eye with dimensions close to those of a human eye discerns only black-and-white images, but a second model of the eye is intended to perceive colors as well. Polymers suited to serve as the eye's outer coat, lens, and vitreous body were used. The retina has been replaced with a bundle of very fine glass filaments through which light rays reach photomultipliers, which can be positioned outside the artificial eye. The main challenge is to prevent large losses in the light guide.
NASA Technical Reports Server (NTRS)
Watzin, James G.; Burt, Joseph; Tooley, Craig
2004-01-01
The Vision for Space Exploration calls for undertaking lunar exploration activities to enable sustained human and robotic exploration of Mars and beyond, including more distant destinations in the solar system. In support of this vision, the Robotic Lunar Exploration Program (RLEP) is expected to execute a series of robotic missions to the Moon, starting in 2008, in order to pave the way for further human space exploration. This paper will give an introduction to the RLEP program office, its role and its goals, and the approach it is taking to executing the charter of the program. The paper will also discuss candidate architectures that are being studied as a framework for defining the RLEP missions and the context in which they will evolve.
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson, et al 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
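The static positioning algorithm described above, mapping a vision-measured tip error through the manipulator Jacobian to a joint correction, can be sketched for a rigid planar two-link arm. The link lengths, gain, and function names are illustrative assumptions; the thesis itself concerns a flexible manipulator, for which this Jacobian is only an approximation:

```python
import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths

def fk(q):
    """Forward kinematics: tip position of a planar two-link arm."""
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def jacobian(q):
    """Manipulator Jacobian of the tip position w.r.t. the joint angles."""
    q1, q2 = q
    return np.array([[-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
                     [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)]])

def step(q, tip_target, gain=0.5):
    """One look-and-move iteration: vision measures the tip error,
    the Jacobian maps it to a joint correction for the joint controller."""
    e = tip_target - fk(q)
    dq = np.linalg.solve(jacobian(q), e)
    return q + gain * dq

# Iterate toward a reachable goal pose.
target = fk(np.array([0.6, 0.4]))
q = np.array([0.3, 0.5])
for _ in range(100):
    q = step(q, target)
err = float(np.linalg.norm(fk(q) - target))
```

The damped update converges for poses away from singularities, mirroring the convergence demonstrated experimentally in the thesis.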
Robotic Design Studio: Exploring the Big Ideas of Engineering in a Liberal Arts Environment.
ERIC Educational Resources Information Center
Turbak, Franklyn; Berg, Robbie
2002-01-01
Suggests that it is important to introduce liberal arts students to the essence of engineering. Describes Robotic Design Studio, a course in which students learn how to design, assemble, and program robots made out of LEGO parts, sensors, motors, and small embedded computers. Represents an alternative vision of how robot design can be used to…
Cooperative crossing of traffic intersections in a distributed robot system
NASA Astrophysics Data System (ADS)
Rausch, Alexander; Oswald, Norbert; Levi, Paul
1995-09-01
In traffic scenarios a distributed robot system has to cope with problems such as resource sharing, distributed planning, and distributed job scheduling. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, a shared resource, forces a robot to coordinate its actions with those of other robots, e.g. by means of negotiations. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events; internal cooperation may occur between cycles of the same level. Altogether, the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach: two vision-guided vehicles are faced with line following, intersection recognition, and negotiation.
Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.
Rumei Zhang; Hao Liu; Jianda Han
2017-07-01
Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Purely visual tracking often lacks robustness and is prone to failure, while an FBG shape sensor can reconstruct only the local shape, with cumulative integration error. The proposed fusion is anticipated to compensate for these shortcomings and improve tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y, and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
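One simple way to fuse two position estimates with complementary error characteristics, as the FBG/stereo scheme above aims to do, is an inverse-variance weighted average per axis. This is a generic sketch with made-up numbers, not the paper's actual fusion method:

```python
def fuse(x_fbg, var_fbg, x_cam, var_cam):
    """Inverse-variance weighted average of two estimates of one coordinate."""
    w = var_cam / (var_fbg + var_cam)      # weight on the FBG estimate
    x = w * x_fbg + (1.0 - w) * x_cam
    var = var_fbg * var_cam / (var_fbg + var_cam)
    return x, var

# Equal variances: the fused estimate is the midpoint, with halved variance.
x, var = fuse(1.0, 0.5, 2.0, 0.5)
```

The fused variance is always smaller than either input variance, which is the basic argument for combining the two sensors rather than choosing one.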
Modeling of the First Layers in the Fly's Eye
NASA Technical Reports Server (NTRS)
Moya, J. A.; Wilcox, M. J.; Donohoe, G. W.
1997-01-01
Increased autonomy of robots would yield significant advantages in the exploration of space. The shortfalls of computer vision can, however, pose significant limitations on a robot's potential. At the same time, simple insects which are largely hard-wired have effective visual systems. The understanding of insect vision systems thus may lead to improved approaches to visual tasks. A good starting point for the study of a vision system is its eye. In this paper, a model of the sensory portion of the fly's eye is presented. The effectiveness of the model is briefly addressed by a comparison of its performance to experimental data.
The Interdependence of Computers, Robots, and People.
ERIC Educational Resources Information Center
Ludden, Laverne; And Others
Computers and robots are becoming increasingly more advanced, with smaller and cheaper computers now doing jobs once reserved for huge multimillion dollar computers and with robots performing feats such as painting cars and using television cameras to simulate vision as they perform factory tasks. Technicians expect computers to become even more…
Research into the Architecture of CAD Based Robot Vision Systems
1988-02-09
Vision and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge
JPRS Report, Science & Technology, Japan, 4th Intelligent Robots Symposium, Volume 2
1989-03-16
accidents caused by strikes by robots [5], a quantitative model for safety evaluation [6], and evaluations of actual systems [7] in order to contribute to...Mobile Robot Position Referencing Using Map-Based Vision Systems.... 160 Safety Evaluation of Man-Robot System 171 Fuzzy Path Pattern of Automatic...camera are made after the robot stops to prevent damage from occurring through obstacle interference. The position of the camera is indicated on the
Intelligent robot control using an adaptive critic with a task control center and dynamic database
NASA Astrophysics Data System (ADS)
Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.
2006-10-01
The purpose of this paper is to describe the design, development, and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can be easily stored in the dynamic database. The multi-task controller also permits wide applications. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas
2013-08-01
This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.
A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics.
DeSouza, Guilherme N; Kak, Avinash C
2004-10-01
We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. At the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and can be performed at its own rate. A control arbitrator ranks the results of each loop according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages regarding the overall performance of the system, which is not affected by the "slowest link," and regarding fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."
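The arbitrator's ranking of loop results by confidence indices can be sketched as follows; the data structures, loop names, and confidence values are illustrative assumptions, not the paper's implementation:

```python
def arbitrate(latest):
    """latest maps loop name -> (command, confidence index); the arbitrator
    simply forwards the command produced by the most confident loop."""
    name = max(latest, key=lambda n: latest[n][1])
    return name, latest[name][0]

# Each loop runs at its own rate and deposits its most recent result;
# the fine stereo loop currently reports the higher confidence.
winner, cmd = arbitrate({
    "coarse-field-of-view": (("approach", 0.10), 0.4),
    "stereo-fine": (("align", 0.002), 0.9),
})
```

Because only the latest result per loop is stored, a slow loop never blocks the others, which is the "slowest link" property claimed in the abstract.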
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion of this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system, including the design of robot-oriented experiments and the calibration of raw results. Errors of less than one picture element on each axis were observed when testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one picture element along the Y axis were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. The calibration of the sensor is also important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu
2013-10-08
In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global map relationship between the vision space and the robotic workspace is learned using an ENN. This learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained by using a robust KF to improve the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair vector (obtained from the KF cycle) to ensure globally stable manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.
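The ENN+KF machinery above estimates an image Jacobian without calibration; a well-known, much simpler baseline for the same job is the Broyden rank-one Jacobian update, sketched here for comparison (this is explicitly not the authors' method):

```python
import numpy as np

def broyden_update(J, dq, ds, lam=1.0):
    """Rank-one correction of the estimated image Jacobian from one observed
    joint-motion / feature-change pair (dq, ds)."""
    dq = dq.reshape(-1, 1)
    ds = ds.reshape(-1, 1)
    return J + lam * (ds - J @ dq) @ dq.T / float(dq.T @ dq)

# Two orthogonal probe motions recover a diagonal "true" Jacobian exactly.
J = np.eye(2)
for dq, ds in [(np.array([1.0, 0.0]), np.array([2.0, 0.0])),
               (np.array([0.0, 1.0]), np.array([0.0, 3.0]))]:
    J = broyden_update(J, dq, ds)
```

Like the paper's scheme, the Broyden update needs no camera or model parameters; the ENN+KF approach additionally learns a global mapping rather than a local linearization.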
Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1989-09-01
The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide understanding of characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.
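The linearity between zenith angle and image radius tested in the calibration above corresponds to the equidistant fisheye model r = f·θ. A minimal sketch, where the focal constant is an illustrative assumption:

```python
import math

def zenith_to_radius(theta_rad, f_px):
    """Equidistant fisheye model: image radius is linear in zenith angle."""
    return f_px * theta_rad

def radius_to_zenith(r_px, f_px):
    """Inverse mapping used when locating targets from image coordinates."""
    return r_px / f_px

# Round trip for a 45-degree zenith angle with an assumed 150 px constant.
r = zenith_to_radius(math.radians(45), 150.0)
theta = radius_to_zenith(r, 150.0)
```

Deviations from this linear model are exactly what the reported calibration (angular error under 1°, radial error under 1 pixel) quantifies.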
The 3D laser radar vision processor system
NASA Astrophysics Data System (ADS)
Sebok, T. M.
1990-10-01
Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.
Visual Detection and Tracking System for a Spherical Amphibious Robot
Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun
2017-01-01
With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134
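The Kalman prediction mechanism used alongside the fast compressive tracker can be sketched as the predict step of a constant-velocity filter. The state layout, noise level, and numbers are assumptions for illustration:

```python
import numpy as np

def cv_predict(x, P, dt, q=1e-2):
    """Predict step of a constant-velocity Kalman filter.

    State layout (assumed): [px, py, vx, vy] in image coordinates.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    x = F @ x                        # propagate position by velocity * dt
    P = F @ P @ F.T + q * np.eye(4)  # grow uncertainty by process noise
    return x, P

# A target at the origin moving at (1, 2) px/frame-unit, predicted 0.5 ahead.
x, P = cv_predict(np.array([0.0, 0.0, 1.0, 2.0]), np.eye(4), dt=0.5)
```

The predicted position seeds the compressive tracker's search window in the next frame, which is how such a predictor keeps the tracker locked on during fast motion.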
Laser assisted robotic surgery in cornea transplantation
NASA Astrophysics Data System (ADS)
Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo
2017-03-01
Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery the required high spatial precision limits the application of robotic systems, and even though several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery by improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its applications.
A robotic platform for laser welding of corneal tissue
NASA Astrophysics Data System (ADS)
Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo
2017-07-01
Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery the required high spatial precision limits the application of robotic systems, and even though several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery by improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its applications.
Augmented reality and haptic interfaces for robot-assisted surgery.
Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N
2012-03-01
Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir
2014-06-01
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.
Design and Development of a High Speed Sorting System Based on Machine Vision Guiding
NASA Astrophysics Data System (ADS)
Zhang, Wenchang; Mei, Jiangping; Ding, Yabin
In this paper, a vision-based control strategy for performing high-speed pick-and-place tasks on an automated production line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one picture every time the conveyor moves a distance ds, and object position and shape are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-based control strategy.
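The "servo motor + synchronous conveyor" tracking idea in the abstract above amounts to propagating a detected object's position along the belt using the encoder reading. A minimal sketch, in which the encoder resolution and coordinates are made-up values, not figures from the paper:

```python
# Hypothetical sketch of synchronous-conveyor target tracking: an object
# detected at image-capture time is propagated along the belt using the
# encoder displacement, so the robot aims at its current position.
MM_PER_COUNT = 0.05          # assumed encoder resolution (mm per count)

def current_object_pos(capture_xy_mm, encoder_at_capture, encoder_now):
    """Belt moves along +x; y is the position across the belt."""
    dx = (encoder_now - encoder_at_capture) * MM_PER_COUNT
    x, y = capture_xy_mm
    return (x + dx, y)

# Object seen at (120.0, 35.0) mm when the encoder read 1000 counts;
# by the time the robot acts, the encoder reads 5000 counts (belt moved 200 mm).
print(current_object_pos((120.0, 35.0), 1000, 5000))
```

The camera and the pick point never need a common clock; only the shared encoder count matters, which is why the scheme works at high belt speeds.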
Near real-time stereo vision system
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)
1993-01-01
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
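The patent abstract above performs stereo matching by least-squares correlation. As a toy illustration of that core step, the sketch below finds the disparity at one pixel by minimizing the sum of squared differences (SSD) along a scanline; the window size and search range are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of least-squares (SSD) correlation matching on one scanline,
# illustrating the disparity computation; parameters are assumptions.
def disparity_ssd(left_row, right_row, x, win=2, max_d=8):
    """Disparity at column x of the left row, searching leftward in the right row."""
    patch = left_row[x - win:x + win + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, max_d + 1):
        xr = x - d
        if xr - win < 0:
            break
        cand = right_row[xr - win:xr + win + 1]
        cost = np.sum((patch - cand) ** 2)      # sum of squared differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

rng = np.random.default_rng(0)
right = rng.normal(size=64)
left = np.roll(right, 3)     # left image shifted by a true disparity of 3
print(disparity_ssd(left, right, 20))
```

In the patented system this search runs on bandpass-filtered (Laplacian pyramid) images rather than raw intensities, which makes the correlation robust to brightness differences between the two cameras.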
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space in which it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
Estimation of visual maps with a robot network equipped with vision sensors.
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space in which it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
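The Rao-Blackwellized particle filter used in the abstract above samples robot paths with particles while estimating landmark positions in closed form, conditioned on each particle. The toy 1D sketch below shows that factorization with a single landmark and a range measurement; the motion model, noise levels and particle count are all illustrative assumptions, far simpler than the paper's 3D visual-landmark setup.

```python
import numpy as np

# Toy 1D Rao-Blackwellized particle filter: particles sample the robot path,
# and each particle carries a closed-form (Kalman) landmark estimate.
rng = np.random.default_rng(1)
N = 200
true_robot, true_lm = 0.0, 10.0
particles = np.zeros(N)              # robot position per particle
lm_mu = np.full(N, 8.0)              # per-particle landmark mean (poor prior)
lm_var = np.full(N, 4.0)             # per-particle landmark variance
R = 0.25                             # range-measurement noise variance

for step in range(15):
    true_robot += 1.0                # robot moves 1 unit per step
    particles += 1.0 + rng.normal(0, 0.1, N)                # noisy motion update
    z = (true_lm - true_robot) + rng.normal(0, np.sqrt(R))  # range measurement
    pred = lm_mu - particles                                # predicted range
    S = lm_var + R
    w = np.exp(-0.5 * (z - pred) ** 2 / S) / np.sqrt(S)     # particle weights
    w /= w.sum()
    K = lm_var / S                   # Kalman update of each landmark estimate
    lm_mu += K * (z - pred)
    lm_var *= (1 - K)
    idx = rng.choice(N, N, p=w)      # resample
    particles, lm_mu, lm_var = particles[idx], lm_mu[idx], lm_var[idx]

print(round(particles.mean(), 1), round(lm_mu.mean(), 1))
```

The key point of the Rao-Blackwellization is visible here: the landmark is never represented by particles, only by a small Gaussian per particle, which keeps the particle count manageable as the map grows.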
The history of robotics in urology.
Challacombe, Ben J; Khan, Mohammad Shamim; Murphy, Declan; Dasgupta, Prokar
2006-06-01
Despite being an ancient surgical specialty, modern urology is technology driven and has been quick to take up new minimally invasive surgical challenges. It is therefore no surprise that much of the early work in the development of surgical robotics was pioneered by urologists. We look at the relatively short history of robotic urology, from the origins of robotics and robotic surgery itself to the rapidly expanding experience with the master-slave devices. This article credits the vision of John Wickham who sowed the seeds of robotic surgery in urology.
Pre-shaping of the Fingertip of Robot Hand Covered with Net Structure Proximity Sensor
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Suzuki, Yosuke; Hasegawa, Hiroaki; Ming, Aiguo; Ishikawa, Masatoshi; Shimojo, Makoto
To achieve skillful tasks with multi-fingered robot hands, many researchers have been working on sensor-based control. Vision sensors and tactile sensors are indispensable for such tasks; however, the reliability of the information from vision sensors decreases as a robot hand approaches the object to be grasped, because of occlusion. This research aims to achieve seamless detection for reliable grasping by using proximity sensors: correcting the positional error of the hand in a vision-based approach, and bringing the fingertip into contact in a posture suited for effective tactile sensing. In this paper, we propose a method for adjusting the posture of the fingertip to the surface of the object. The method applies a “Net-Structure Proximity Sensor” on the fingertip, which can detect the postural error in the roll and pitch axes between the fingertip and the object surface. The experimental results show that the postural error is corrected in both axes even if the object rotates dynamically.
Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge
2011-01-01
This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor set-up for scanning, including the motorized linear stage, to avoid external measurement devices. In the measurement model the robot acts only as a part positioner with high repeatability. Its position and orientation data are not used for the measurement, and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own trajectory-following errors, except those due to the lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only one first piece, measured as a “zero” or master piece and known through its accurate measurement with, for example, a Coordinate Measuring Machine. The proposed strategy presents a different approach from traditional laser triangulation systems on board the robot in order to improve measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569
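The "master piece" idea in the abstract above boils down to aligning points measured on a reference part to their known (CMM) coordinates and using the resulting rigid transform to correct later measurements. A hedged sketch using the standard Kabsch/SVD alignment; the feature coordinates and the 5-degree misalignment are made-up values:

```python
import numpy as np

# Sketch of master-piece registration: align measured feature points to their
# nominal CMM coordinates with a least-squares rigid transform (Kabsch/SVD).
def rigid_align(P, Q):
    """Return R, t minimizing sum ||R @ p_i + t - q_i||^2 (P, Q are Nx3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T               # guard against reflections
    return R, cq - R @ cp

# Nominal CMM coordinates of features on the master piece (assumed values, mm).
Q = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30.]])
# The same features as seen by the uncalibrated vision system: rotated + shifted.
ang = np.deg2rad(5)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1.]])
P = Q @ R_true.T + np.array([2., -1., 3.])

R, t = rigid_align(P, Q)
print(np.allclose(R @ P.T + t[:, None], Q.T, atol=1e-9))
```

Once `R` and `t` are known from the single master piece, every subsequent scan can be mapped into the nominal coordinate frame without involving the robot's own pose data, matching the "indirect link" described in the abstract.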
Robotic lunar exploration: Architectures, issues and options
NASA Astrophysics Data System (ADS)
Mankins, John C.; Valerani, Ernesto; Della Torre, Alberto
2007-06-01
The US ‘vision for space exploration’ articulated at the beginning of 2004 encompasses a broad range of human and robotic space missions, including missions to the Moon, Mars and destinations beyond. It establishes clear goals and objectives, yet sets equally clear budgetary ‘boundaries’ by stating firm priorities, including ‘tough choices’ regarding current major NASA programs. The new vision establishes as policy the goals of pursuing commercial and international collaboration in realizing future space exploration missions. Also, the policy envisions that advances in human and robotic mission technologies will play a key role—both as enabling and as a major public benefit that will result from implementing that vision. In pursuing future international space exploration goals, the exploration of the Moon during the coming decades represents a particularly appealing objective. The Moon provides a unique venue for exploration and discovery—including the science of the Moon (e.g., geological studies), science from the Moon (e.g., astronomical observatories), and science on the Moon (including both basic research, such as biological laboratory science, and applied research and development, such as the use of the Moon as a test bed for later exploration). The Moon may also offer long-term opportunities for utilization—including Earth observing applications and commercial developments. During the coming decade, robotic lunar exploration missions will play a particularly important role, both in their own right and as precursors to later, more ambitious human and robotic exploration and development efforts. The following paper discusses some of the issues and opportunities that may arise in establishing plans for future robotic lunar exploration.
Particular emphasis is placed on four specific elements of future robotic infrastructure: Earth Moon in-space transportation systems; lunar orbiters; lunar descent and landing systems; and systems for long-range transport on the Moon.
ERIC Educational Resources Information Center
Chen, Kan; Stafford, Frank P.
A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…
Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor
NASA Astrophysics Data System (ADS)
Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick
This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then let the sensor, mounted on the robot, measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
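The closed-loop constraint in the abstract above is that all measured intersection points must lie on one straight line, so a calibration cost can be built from their perpendicular distances to the best-fit 3D line. A small sketch of that residual computation (the line fit uses PCA/SVD; the point coordinates are illustrative assumptions):

```python
import numpy as np

# Sketch of the single-straight-line constraint: fit a 3D line to the measured
# intersection points and return each point's perpendicular distance to it.
def line_residuals(points):
    c = points.mean(axis=0)
    X = points - c
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    d = Vt[0]                               # best-fit line direction
    proj = X - np.outer(X @ d, d)           # components orthogonal to the line
    return np.linalg.norm(proj, axis=1)     # one distance per point

t = np.linspace(0, 1, 6)
line_pts = np.outer(t, [100., 40., 5.]) + [10., 0., 250.]   # perfect line (mm)
off = line_pts.copy()
off[3] += [0., 2., 0.]                      # one point displaced by 2 mm
print(line_residuals(line_pts).max() < 1e-9,
      line_residuals(off).max() > 0.5)
```

With perfect kinematic parameters the residuals vanish; kinematic errors bend the reconstructed point set away from a line, so minimizing these residuals over the robot's kinematic parameters is what closes the calibration loop.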
Vision based object pose estimation for mobile robots
NASA Technical Reports Server (NTRS)
Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry
1994-01-01
Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern-matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints are chosen from the typical pose of most man-made signs, such as the sign standing vertically and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments; under certain marker orientations, estimation of the orientation has proven accurate to within 2 degrees, and distance estimation to within 0.3 meters.
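The known-size constraint used in the abstract above reduces, in the simplest pinhole-camera view, to recovering distance from a marker's apparent height. A minimal sketch; the focal length and marker sizes are assumed values, not figures from the paper:

```python
# Pinhole-model sketch of the geometric constraint: a marker of known physical
# height standing vertically lets range be recovered from its pixel height.
FOCAL_PX = 800.0             # assumed focal length in pixels

def marker_distance(known_height_m, apparent_height_px):
    # Similar triangles: h_px / f = H_m / Z  =>  Z = f * H_m / h_px
    return FOCAL_PX * known_height_m / apparent_height_px

# A 0.5 m sign imaged 100 px tall lies 4 m away under this model.
print(marker_distance(0.5, 100.0))
```

The full system also exploits the marker's orientation in the image, but this single ratio is why known dimensions make metric range recoverable from one camera.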
Robotic lobectomy and segmentectomy for lung cancer: results and operating technique
2015-01-01
Video-assisted thoracic surgery (VATS) is a minimally invasive approach with several advantages over open thoracotomy for the surgery of lung cancer, but also some limitations, such as rigid instruments and suboptimal vision. Robot technology is an evolution of manual videothoracoscopy introduced to overcome these limitations while maintaining the advantages related to low invasiveness. More intuitive movements, greater flexibility and high-definition three-dimensional vision are advantages of the robotic approach. Different studies demonstrate that robotic lobectomy and segmentectomy are feasible and safe, with long-term outcomes similar to those of open/VATS approaches; however, no randomised comparisons are available, and benefits in terms of quality of life (QOL) and pain have yet to be demonstrated. Several different robotic techniques are currently employed; they differ in the number of robotic arms (three versus four), the use of CO2 insufflation, the timing of the utility incision and the port positioning. The four-arm robotic approach with an anterior utility incision is the technique described by the authors. Indications for robotic lung resection may be more extensive than those of the traditional videothoracoscopic approach and include patients with locally advanced disease after chemotherapy or those requiring anatomical segmentectomy. The learning curves of VATS and robotic lung resection are similar. High capital and running costs are the most important disadvantages; the entry of competitor companies should drive down costs. PMID:25984357
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
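The model-based removal described in the abstract above can be caricatured in one line: if the backscatter field B and the transmission t are estimated, object radiance is recovered by inverting the image-formation model. The sketch below uses a simple exponential-attenuation model with made-up constants; the paper's actual model for a near-camera active source is more detailed.

```python
import numpy as np

# Toy sketch of model-based descattering: with an estimated backscatter field B
# and transmission t, object radiance is recovered as J = (I - B) / t.
beta = 0.4                                   # assumed extinction coefficient
d = 2.0                                      # assumed scene distance (m)
t = np.exp(-beta * d)                        # transmission through the medium

J = np.array([[0.2, 0.8], [0.5, 0.1]])      # true object radiance (2x2 "image")
B = np.array([[0.3, 0.25], [0.28, 0.33]])   # non-uniform lamp backscatter
I = J * t + B                                # image formed in the medium

J_hat = (I - B) / t                          # descattered image
print(np.allclose(J_hat, J))
```

In practice B is non-uniform and must itself be estimated from the light-source geometry, which is the hard part the paper addresses; the inversion step afterwards is as simple as shown.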
Potato Operation: automatic detection of potato diseases
NASA Astrophysics Data System (ADS)
Lefebvre, Marc; Zimmerman, Thierry; Baur, Charles; Guegerli, Paul; Pun, Thierry
1995-01-01
The Potato Operation is a collaborative, multidisciplinary project in the domain of destructive testing of agricultural products. It aims at automating pulp sampling of potatoes in order to detect possible viral diseases. Such viruses can decrease field productivity by a factor of up to ten. A machine composed of three conveyor belts, a vision system and a robotic arm, all controlled by a PC, has been built. Potatoes are brought one by one from a bulk to the vision system, where they are seized by a rotating holding device. The sprouts, where the viral activity is maximal, are then detected by an active vision process operating on multiple views. The 3D coordinates of the sampling point are communicated to the robot arm holding a drill. Some flesh is sampled by the drill and deposited into an ELISA plate. After sampling, the robot arm washes the drill in order to prevent any contamination. The PC simultaneously controls these processes: the conveying of the potatoes, the vision algorithms and the sampling procedure. The master process, the vision procedure, uses three methods to achieve sprout detection. A profile analysis first locates the sprouts as protuberances. Two frontal analyses, based respectively on fluorescence and local variance, confirm the previous detection and provide the 3D coordinates of the sampling zone. The other two processes work by interruption of the master process.
NASA Astrophysics Data System (ADS)
Cao, Zhengcai; Yin, Longjie; Fu, Yili
2013-01-01
Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most solutions of the problem do not take the robot dynamics into account in the controller design, so these controllers can hardly achieve satisfactory control in practical applications. Besides, many of the approaches suffer from initial speed and torque jumps, which are not practical in the real world. Considering both kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, integrating adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller used to generate the velocity command is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function used to reduce chattering is designed; it generates the torque command that makes the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, the neural dynamics model is integrated into the above-mentioned controllers. The stability of the proposed control system is analyzed using Lyapunov theory. Finally, the control law is simulated in the perturbed case, and the results show that the control scheme can solve the stabilization problem effectively. The proposed control law can solve the speed and torque jump problems, overcome external disturbances, and provide a new solution for the vision-based stabilization of the mobile robot.
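The first stage described in the abstract above is a Lyapunov-based kinematic law generating velocity commands. The sketch below shows the classical polar-coordinate stabilizer for a unicycle robot, which is the standard form such a kinematic stage takes; the gains are an illustrative stable choice, and the paper's adaptive and neural-dynamics terms are omitted.

```python
import numpy as np

# Toy Lyapunov-based kinematic stabilizer in polar coordinates (rho, alpha,
# beta) driving a unicycle to the origin; gains satisfy the usual stability
# conditions k_rho > 0, k_beta < 0, k_alpha > k_rho. Not the paper's controller.
k_rho, k_alpha, k_beta = 0.5, 1.5, -0.3
x, y, th = -2.0, -1.0, 0.0       # initial pose (goal is the origin, heading 0)
dt = 0.05
for _ in range(2000):
    rho = np.hypot(x, y)                               # distance to goal
    alpha = np.arctan2(-y, -x) - th                    # heading error to goal
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))   # wrap to [-pi, pi]
    beta = -th - alpha                                 # final-orientation error
    v = k_rho * rho                                    # linear velocity command
    w = k_alpha * alpha + k_beta * beta                # angular velocity command
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    th += w * dt
print(round(np.hypot(x, y), 3))
```

Note the jump problem the paper targets is visible here: at t = 0 the command v = k_rho * rho is discontinuously nonzero, which is exactly what the neural-dynamics smoothing in the second stage is meant to remove.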
Vision-based obstacle avoidance
Galbraith, John [Los Alamos, NM
2006-07-18
A method for allowing a robot to avoid objects along a programmed path: first, a field of view for an electronic imager of the robot is established along a path where the electronic imager obtains the object location information within the field of view; second, a population coded control signal is then derived from the object location information and is transmitted to the robot; finally, the robot then responds to the control signal and avoids the detected object.
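The patent abstract above derives a "population coded control signal" from object locations. As a rough, hypothetical illustration of population coding for steering (not the patented method itself), the sketch below gives each "neuron" a preferred heading, suppresses neurons pointing toward the obstacle, and decodes the command as the population vector; all tuning widths and gains are assumptions.

```python
import numpy as np

# Hedged sketch of a population-coded steering signal: obstacles suppress
# neurons whose preferred heading points at them; the command is the
# population-vector decode of the remaining activity.
headings = np.deg2rad(np.arange(-90, 91, 15))      # preferred directions
goal, obstacle = np.deg2rad(0), np.deg2rad(20)     # goal ahead, obstacle right

attract = np.exp(-0.5 * ((headings - goal) / 0.6) ** 2)
repel = np.exp(-0.5 * ((headings - obstacle) / 0.4) ** 2)
activity = np.clip(attract - 0.9 * repel, 0, None)  # population code

steer = np.arctan2((activity * np.sin(headings)).sum(),
                   (activity * np.cos(headings)).sum())
print(round(np.rad2deg(steer), 1))                  # veers left, away from obstacle
```

Because the decode averages over the whole population, the steering output degrades gracefully with sensor noise, which is a common motivation for population codes in robot control.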
Multi-Robot FastSLAM for Large Domains
2007-03-01
[Indexed fragments from the report's bibliography, including: Derr, D. Fox, A. B. Cremers, "Integrating global position estimation and position tracking for mobile robots: the dynamic Markov localization approach"; Proceedings of the National Conference on Artificial Intelligence (AAAI), 2000; Andrew J. Davison and David W. Murray, "Simultaneous Localization and Map-Building Using Active Vision", IEEE; Gordon Wyeth, Michael Milford and David Prasser, "A Modified Particle Filter for Simultaneous Robot Localization and Landmark Tracking in an Indoor…"]
Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.
1984-06-01
[Indexed fragments from the report, including outline headings (A. Robotics Technologies; B. Relevant AI Technologies: 1. Expert Systems, 2. Automatic Planning, 3. Natural Language, 4. Machine Vision) and text noting that robots can tend other automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do; and that artificial intelligence, concerned with building machines that imitate human behavior and with the functions of the brain, differs from robotics, which includes…]
Computer hardware and software for robotic control
NASA Technical Reports Server (NTRS)
Davis, Virgil Leon
1987-01-01
The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosive ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
Examples of design and achievement of vision systems for mobile robotics applications
NASA Astrophysics Data System (ADS)
Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe
2000-10-01
Our goal is to design and achieve a multiple-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six- and four-legged robots such as SONY's AIBO, and humanoids), and flying robots (to inspect bridges, for example), in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software, or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to perform, the results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
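An edge-segmentation chain of the kind described in the abstract above is built from local pixel operators that map naturally to either software loops or CPLD logic. The sketch below shows one representative stage, Sobel gradients with a magnitude threshold, written as an explicit per-pixel 3x3 computation; the threshold and test image are illustrative assumptions, not the authors' operator chain.

```python
import numpy as np

# One stage of an edge chain: Sobel gradients plus a magnitude threshold,
# written as the kind of local 3x3 pixel operation that maps to hardware.
def sobel_edges(img, thresh=1.0):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w), bool)
    for i in range(1, h - 1):            # per-pixel local processing
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            gx, gy = np.sum(win * kx), np.sum(win * ky)
            out[i, j] = np.hypot(gx, gy) > thresh
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                         # vertical step edge at column 4
edges = sobel_edges(img)
cols = sorted({int(c) for c in np.nonzero(edges)[1]})
print(cols)                              # columns that respond to the step
```

Because each output pixel depends only on a 3x3 neighborhood, the operator fits the pixel data-flow model the abstract mentions: a hardware version needs only a few line buffers rather than a full frame store.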
How to prepare the patient for robotic surgery: before and during the operation.
Lim, Peter C; Kang, Elizabeth
2017-11-01
Robotic surgery in the treatment of gynecologic diseases continues to evolve and has become accepted over the last decade. The advantages of robotic-assisted laparoscopic surgery over conventional laparoscopy are three-dimensional camera vision, superior precision and dexterity with EndoWristed instruments, elimination of operator tremor, and decreased surgeon fatigue. The drawbacks of the technology are bulkiness and lack of tactile feedback. As with other surgical platforms, the limitations of robotic surgery must be understood. Patient selection and the types of surgical procedures that can be performed through the robotic surgical platform are critical to the success of robotic surgery. First, patient selection and the indication for gynecologic disease should be considered. Discussion with the patient regarding the benefits and potential risks of robotic surgery, its complications and alternative treatments is mandatory, followed by the patient's signature indicating informed consent. Appropriate preoperative evaluation, including laboratory and imaging tests, and bowel cleansing should be considered depending upon the type of robotic-assisted procedure. Unlike other surgical procedures, robotic surgery is equipment-intensive and requires an appropriate surgical suite to accommodate the patient-side cart, the vision system, and the surgeon's console. Surgical personnel must be properly trained in the robotics technology. Several factors must be considered to perform a successful robotic-assisted surgery: the indication and type of surgical procedure, the surgical platform, patient position and the degree of Trendelenburg, proper port placement configuration, and appropriate instrumentation. We describe these factors so that patients can be appropriately prepared before and during the operation. Copyright © 2017. Published by Elsevier Ltd.
Chiang, Mao-Hsiung; Lin, Hao-Ting
2011-01-01
This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axis pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual position and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector.
Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig of the proposed parallel mechanism pneumatic robot is set up. Thus, simulations and experiments for different complex 3D motion profiles of the robot end-effector can be successfully achieved. The desired, the actual and the calculated 3D position of the end-effector can be compared in the complex 3D motion control.
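The kinematics in the abstract above are built from Denavit-Hartenberg parameters. As a minimal illustration (not the authors' code), the standard D-H homogeneous transform for one joint and its chaining into forward kinematics can be sketched in Python:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint from its
    Denavit-Hartenberg parameters (theta, d, a, alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the end-effector pose."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = mat_mul(T, dh_transform(*row))
    return T
```

Chaining two purely prismatic rows (theta = a = alpha = 0) simply accumulates the offsets d along z, which is the degenerate vertical-axis case of the robot described above.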
Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments
2016-09-01
yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications.1,5–7 Annotation of images is... Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G
Strategy in the Robotic Age: A Case for Autonomous Warfare
2014-09-01
6. Robots and Robotics. The term robot is a loaded word. For many people it conjures a vision of fictional characters from movies like The...released in the early 1930s to review the experiences of WWI, it was censored, and a version modified to maintain the institutional legacies was...apprehensive, and doctrine was non-existent. Today, America is emerging from two wars and subsequently a war-weary public. The United States is a
Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment
2017-06-01
two planar laser range finders with a 180-degree field of view, color camera, vision beacons, and wireless communicator. In their system, the robots...Master's thesis 4. TITLE AND SUBTITLE IMPLEMENTATION OF A MULTI-ROBOT COVERAGE ALGORITHM ON A TWO-DIMENSIONAL, GRID-BASED ENVIRONMENT 5. FUNDING NUMBERS...path planning coverage algorithm for a multi-robot system in a two-dimensional, grid-based environment. We assess the applicability of a topology
Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing.
Leonard, Simon; Wu, Kyle L; Kim, Yonjae; Krieger, Axel; Kim, Peter C W
2014-04-01
This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool that is attached to a custom-made motor stage, and the STAR supervisory control architecture enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using a manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.
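The automatic mode described above computes equally spaced stitches along an incision contour. A hedged sketch of that geometric step, assuming the contour is a 2-D polyline (function names are illustrative, not taken from the STAR software):

```python
import math

def equally_spaced_points(contour, n):
    """Place n points at equal arc-length intervals along a
    polyline contour given as a list of (x, y) vertices (n >= 2)."""
    # cumulative arc length at each vertex
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    targets = [i * total / (n - 1) for i in range(n)]
    points, seg = [], 0
    for t in targets:
        # advance to the segment containing arc length t
        while seg < len(contour) - 2 and cum[seg + 1] < t:
            seg += 1
        seg_len = cum[seg + 1] - cum[seg]
        frac = 0.0 if seg_len == 0 else (t - cum[seg]) / seg_len
        (x0, y0), (x1, y1) = contour[seg], contour[seg + 1]
        points.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return points
```

For a straight 10 mm incision and n = 5, this yields stitch targets every 2.5 mm; a curved contour is handled the same way through the arc-length parameterization.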
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.
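The fixation-selection step above compares the real camera image with the virtual-world rendering using local smoothing and focuses next on the largest discrepancy. A simplified Python sketch, substituting a box filter for the paper's local Gaussians (an assumption for brevity):

```python
def local_mean(img, r=1):
    """Box-filter smoothing (a simple stand-in for local Gaussians)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def next_fixation(real_img, virtual_img):
    """Return the (x, y) of the largest smoothed discrepancy between
    the real camera image and the virtual-camera rendering."""
    a, b = local_mean(real_img), local_mean(virtual_img)
    h, w = len(a), len(a[0])
    return max(((x, y) for y in range(h) for x in range(w)),
               key=lambda p: abs(a[p[1]][p[0]] - b[p[1]][p[0]]))
```

Regions where the virtual world already predicts the camera input produce a near-zero error mask, so the expensive modeling algorithms run only where the two disagree.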
Humanoids for lunar and planetary surface operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing
2005-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair and operation of lunar/planetary habitats, bases and settlements. It integrates this vision with recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and development spirals in Project Constellation. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.
Multi-Sensor Person Following in Low-Visibility Scenarios
Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier
2010-01-01
Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506
Object positioning in storages of robotized workcells using LabVIEW Vision
NASA Astrophysics Data System (ADS)
Hryniewicz, P.; Banaś, W.; Sękala, A.; Gwiazda, A.; Foit, K.; Kost, G.
2015-11-01
During the manufacturing process, each performed task is previously developed and adapted to the conditions and the possibilities of the manufacturing plant. The production process is supervised by a team of specialists, because any downtime causes a great loss of time and hence financial loss. Sensors used in industry for tracking and supervising the various stages of a production process make it much easier to keep it continuous. One group of sensors used in industrial applications is non-contact sensors. This group includes light barriers, optical sensors, rangefinders, vision systems, and ultrasonic sensors. Thanks to the rapid development of electronics, vision systems have become widespread as the most flexible type of non-contact sensor. These systems consist of cameras, devices for data acquisition, devices for data analysis and specialized software. Vision systems work well as sensors that control the production process itself as well as sensors that control the product quality level. LabVIEW, together with LabVIEW Vision and LabVIEW Builder, provides the environment for programming such systems for process and product quality control. The paper presents an application elaborated for positioning elements in a robotized workcell. Based on the geometric parameters of the manipulated object, or on the basis of a previously developed graphical pattern, it is possible to determine the position of particular manipulated elements. The application can work in automatic mode and in real time, cooperating with the robot control system, which makes the workcell more autonomous.
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist in using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, aerial robots, … but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning tasks, or mobile target tracking tasks, can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties as for stability, robustness with respect to noise or to calibration errors, robot 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
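The classical image-based law behind such schemes drives the feature error e = s - s* to zero with v = -λ L⁺ e. A minimal one-point sketch, restricted to x-y camera translation so the interaction matrix reduces to diag(-1/Z, -1/Z) and inverts trivially (an illustrative simplification, not the full formulation discussed in the talk):

```python
def ibvs_step(s, s_star, Z, lam=0.5):
    """One iteration of image-based visual servoing for a single
    normalized point feature, camera restricted to x-y translation.
    With L = diag(-1/Z, -1/Z), the law v = -lam * inv(L) * (s - s*)
    becomes v = lam * Z * (s - s*)."""
    ex, ey = s[0] - s_star[0], s[1] - s_star[1]
    return (lam * Z * ex, lam * Z * ey)

def simulate(s0, s_star, Z, lam=0.5, dt=0.1, steps=200):
    """The feature error decays exponentially under the servo law."""
    s = list(s0)
    for _ in range(steps):
        vx, vy = ibvs_step(s, s_star, Z, lam)
        # point-feature kinematics under camera translation: sdot = -v/Z
        s[0] -= dt * vx / Z
        s[1] -= dt * vy / Z
    return s
```

Each step scales the error by (1 - lam·dt), the discrete version of the exponential decay e(t) = e(0)·exp(-λt) that the control law is designed to impose.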
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802.
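The blending of BMI-decoded and autonomous commands described above can be illustrated with a simple linear arbitration; the distance-based ramp used here for the blending weight is an assumption for illustration, not the study's exact policy:

```python
def blend_command(v_bmi, v_auto, alpha):
    """Blend BMI-decoded and autonomous velocity commands.
    alpha in [0, 1]: 0 = full user control, 1 = full assistance."""
    return tuple(alpha * a + (1.0 - alpha) * b
                 for b, a in zip(v_bmi, v_auto))

def assistance_level(dist, near=0.05, far=0.30):
    """Assistance grows linearly as the hand approaches the target
    object (distances in meters; thresholds are hypothetical)."""
    if dist <= near:
        return 1.0
    if dist >= far:
        return 0.0
    return (far - dist) / (far - near)
```

Far from any object the user's intent passes through unchanged; near a tracked grasp target the autonomous grasping command dominates, which is the balance between intention and assistance the abstract describes.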
Robotic Sensitive-Site Assessment
2015-09-04
annotations. The SOA component is the backend infrastructure that receives and stores robot-generated and human-input data and serves these data to several... Architecture Server: The SOA server provides the backend infrastructure to receive data from robot situational awareness payloads, to archive... incapacitation or even death. The proper use of PPE is critical to avoiding exposure. However, wearing PPE limits mobility and field of vision, and
Natural Tasking of Robots Based on Human Interaction Cues
2005-06-01
MIT. • Matthew Marjanovic, researcher, ITA Software. • Brian Scassellati, Assistant Professor of Computer Science, Yale. • Matthew Williamson...2004. 25 [74] Charlie C. Kemp. Shoes as a platform for vision. 7th IEEE International Symposium on Wearable Computers, 2004. [75] Matthew Marjanovic...meso: Simulated muscles for a humanoid robot. Presentation for Humanoid Robotics Group, MIT AI Lab, August 2001. [76] Matthew J. Marjanovic. Teaching
Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter
NASA Technical Reports Server (NTRS)
Rock, Stephen M.
1999-01-01
This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory, Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.
A focused bibliography on robotics
NASA Astrophysics Data System (ADS)
Mergler, H. W.
1983-08-01
The present bibliography focuses on nine robotics-related topics believed by the author to be of special interest to researchers in the field of industrial electronics: robots, sensors, kinematics, dynamics, control systems, actuators, vision, economics, and robot applications. This literature search was conducted through the 1970-present COMPENDEX database, which provides worldwide coverage of nearly 3500 journals, conference proceedings and reports, and the 1969-1981 INSPEC database, which is the largest English-language database in the fields of physics, electrotechnology, computers, and control.
How robotic-assisted surgery can decrease the risk of mucosal tear during Heller myotomy procedure?
Ballouhey, Quentin; Dib, Nabil; Binet, Aurélien; Carcauzon-Couvrat, Véronique; Clermidi, Pauline; Longis, Bernard; Lardy, Hubert; Languepin, Jane; Cros, Jérôme; Fourcade, Laurent
2017-06-01
We report the first description of robotic-assisted Heller myotomy in children. The purpose of this study was to improve the safety of Heller myotomy by demonstrating, in two adolescent patients, the contribution of the robot to the different steps of this procedure. Due to the robot's freedom of movement and three-dimensional vision, accuracy improved and safety increased at different key points, decreasing the risk of mucosal perforation associated with this procedure.
Robot-assisted thoracoscopic surgery with simple laparoscopy for diaphragm eventration.
Ahn, Joong Hyun; Suh, Jong Hui; Jeong, Jin Yong
2013-09-01
Robot-assisted thoracoscopic surgery has been applied for general thoracic operations. Its advantages include not only those of minimally invasive surgery but also those of magnified three-dimensional vision and angulation of the robotic arm. However, there are no direct tactile sensation and force feedback, which can cause unwanted organ damage. We therefore used laparoscopy simultaneously to avoid a blind intraperitoneal area during robotic surgery for diaphragmatic eventration via transthoracic approach and describe the technique herein. Georg Thieme Verlag KG Stuttgart · New York.
Robot and Human Surface Operations on Solar System Bodies
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Easter, R.; Rodriguez, G.
2001-01-01
This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Humans and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need for More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.
Design and control of active vision based mechanisms for intelligent robots
NASA Technical Reports Server (NTRS)
Wu, Liwei; Marefat, Michael M.
1994-01-01
In this paper, we propose a design of an active vision system for intelligent robot applications. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual behavior in response to outside stimuli, is suggested. We also characterize different visual tasks in the two cameras for vergence control purposes, and a phase-based method based on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
Machine vision and appearance based learning
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-03-01
Smart algorithms are used in machine vision to organize or extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, is only a key first step in solving specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a regression-on-an-appearance-manifold problem, and newly developed regression-on-manifolds methods are used for its solution.
Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.; Buck, A. A.; Smith, R.
1994-10-01
The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick-and-place robot, a vision system, a CNC milling machine and 3 conveyors. The work illustrates how block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on fuzzy Petri nets (fuzzy logic with Petri nets), including an artificial neural network (fuzzy neural Petri nets), to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control imprecise, vague and uncertain situations, and to determine the quality of the output product of an FMS cell.
Design of a Vision-Based Sensor for Autonomous Pig House Cleaning
NASA Astrophysics Data System (ADS)
Braithwaite, Ian; Blanke, Mogens; Zhang, Guo-Qiang; Carstensen, Jens Michael
2005-12-01
Current pig house cleaning procedures are hazardous to the health of farm workers, and yet necessary if the spread of disease between batches of animals is to be satisfactorily controlled. Autonomous cleaning using robot technology offers salient benefits. This paper addresses the feasibility of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning.
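A Bayesian discriminator over a one-dimensional reflectance feature, as in the clean/dirty classification above, can be sketched as Gaussian class-conditional densities with a class prior (the sample values are illustrative, not the paper's measured spectra):

```python
import math

def fit_gaussian(samples):
    """Mean and variance of a 1-D reflectance feature for one class."""
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, v

def classify(x, clean_model, dirty_model, p_clean=0.5):
    """Bayes rule on Gaussian class-conditional densities:
    pick the class with the larger log posterior."""
    def log_lik(x, model):
        m, v = model
        return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
    lc = log_lik(x, clean_model) + math.log(p_clean)
    ld = log_lik(x, dirty_model) + math.log(1 - p_clean)
    return "clean" if lc > ld else "dirty"
```

The design goal in the paper, choosing illumination so the two class distributions separate, corresponds here to widening the gap between the two fitted means relative to their variances, which drives the misclassification probability down.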
3D vision upgrade kit for the TALON robot system
NASA Astrophysics Data System (ADS)
Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-02-01
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
Development of a teaching system for an industrial robot using stereo vision
NASA Astrophysics Data System (ADS)
Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki
1997-12-01
The teach-and-playback method is the main technique for programming industrial robots. However, this technique takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and the test data have confirmed the usefulness of our design.
Conference on Intelligent Robotics in Field, Factory, Service and Space (CIRFFSS 1994), Volume 2
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1994-01-01
The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed the following topics: (1) vision systems integration and architecture; (2) selective perception and human robot interaction; (3) robotic systems technology; (4) military and other field applications; (5) dual-use precommercial robotic technology; (6) building operations; (7) planetary exploration applications; (8) planning; (9) new directions in robotics; and (10) commercialization.
3-D Vision Techniques for Autonomous Vehicles
1988-08-01
Martial Hebert, Takeo Kanade, Inso Kweon. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.
Robotics and tele-manipulation: update and perspectives in urology.
Frede, T; Jaspers, J; Hammady, A; Lesch, J; Teber, D; Rassweiler, J
2007-06-01
Robotic surgery in urology has become a reality in the year 2007 with several thousand robotic prostatectomies having been performed already worldwide. Compared to conventional laparoscopy, the process of learning the robotic technique is short and the operative results are comparable to those of conventional laparoscopy or even open surgery. However, there are still some disadvantages with the robotic systems, mainly technical (tactile feedback) and financial (investment and running costs). Alternative and more inexpensive technologies must be considered in order to overcome the difficulties of conventional laparoscopy (instrument handling, degrees of freedom, 3-D vision), while also integrating advantages of the robotic systems.
A trunk ranging system based on binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Xixuan; Kan, Jiangming
2017-07-01
Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical applications. This paper examines the implementation of a trunk ranging system based on binocular vision theory via TI's DaVinci DM37x system. The system is smaller and more reliable than one implemented using a personal computer. It calculates the three-dimensional information from the images acquired by the binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and the system design is feasible for autonomous forestry robots.
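Binocular ranging of this kind rests on the standard triangulation relation for a rectified stereo pair, Z = fB/d, where f is the focal length in pixels, B the camera baseline, and d the horizontal disparity of the matched trunk feature. A minimal sketch (parameter values illustrative):

```python
def stereo_range(xl, xr, focal_px, baseline_m):
    """Depth from a rectified binocular pair: Z = f * B / d,
    where d = xl - xr is the horizontal disparity in pixels."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point cannot be ranged")
    return focal_px * baseline_m / d
```

For example, with an 800-pixel focal length, a 0.12 m baseline, and a 40-pixel disparity, the trunk lies 2.4 m away; the hyperbolic Z-d relationship is why ranging error grows with distance.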
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
NASA Astrophysics Data System (ADS)
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of in-vivo live surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection using the two interlaced images gives a smooth, strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
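Row interlacing of the left/right frames for a polarized display, as used above, can be sketched as follows (a simplified stand-in for the paper's pipeline, assuming equal-size pre-processed frames):

```python
def interlace_rows(left, right):
    """Row-interlace two equal-size frames for a polarized 3D display:
    even rows come from the left image, odd rows from the right."""
    return [left[i] if i % 2 == 0 else right[i]
            for i in range(len(left))]
```

A passive polarized screen then routes even and odd rows to the corresponding eye through the viewer's polarized glasses, at the cost of halving the vertical resolution delivered to each eye.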
Robotics handbook. Version 1: For the interested party and professional
NASA Astrophysics Data System (ADS)
1993-12-01
This publication covers several categories of information about robotics. The first section provides a brief overview of the field of robotics. The next section provides a reasonably detailed look at the NASA robotics program. The third section features a listing of companies and organizations engaged in robotics or robotics-related activities, followed by a listing of associations involved in the field and a listing of publications and periodicals that cover elements of robotics or related fields. The final section is an abbreviated abstract of refereed journal material and other reference material relevant to the technology and science of robotics, including such allied fields as vision perception; three-space axis orientation and measurement systems and associated inertial reference technology and algorithms; and physical and mechanical science and technology related to robotics.
Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1994-01-01
The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.
On-line dimensional measurement of small components on the eyeglasses assembly line
NASA Astrophysics Data System (ADS)
Rosati, G.; Boschetti, G.; Biondi, A.; Rossi, A.
2009-03-01
Dimensional measurement of the subassemblies at the beginning of the assembly line is a crucial process for the eyeglasses industry, since even small manufacturing errors in the components can lead to very visible defects on the final product. For this reason, all subcomponents of the eyeglass are verified before the assembly process begins, either with 100% inspection or on a statistical basis. Inspection is usually performed by human operators, with high costs and a degree of repeatability that is not always satisfactory. This paper presents a novel on-line measuring system for dimensional verification of small metallic subassemblies for the eyeglasses industry. The proposed machine vision system, designed to be used at the beginning of the assembly line, could also be employed for Statistical Process Control (SPC) by the manufacturer of the subassemblies. The proposed automated system is based on artificial vision and exploits two CCD cameras and an anthropomorphic robot to inspect and manipulate the subcomponents of the eyeglass. Each component is recognized by the first camera in a fairly large workspace, picked up by the robot, and placed in the small field of view of the second camera, which performs the measurement. Finally, the part is palletized by the robot. The system can easily be taught by the operator by simply placing a template object in the field of view of the measurement camera (for dimensional data acquisition) and then instructing the robot via the Teaching Control Pendant within the field of view of the first camera (for pick-up transformation acquisition). The major problem we dealt with is that the shapes and dimensions of the subassemblies can vary over quite a wide range, yet different positionings of the same component can look very similar to one another. For this reason, a specific shape recognition procedure was developed.
In the paper, the whole system is presented, together with the first experimental lab results.
Huang, Shouren; Bergström, Niklas; Yamakawa, Yuji; Senoo, Taku; Ishikawa, Masatoshi
2016-01-01
It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot’s end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed. PMID:27483274
ERIC Educational Resources Information Center
Doty, Keith L.
1999-01-01
Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)
Chiang, Mao-Hsiung; Lin, Hao-Ting
2011-01-01
This study aimed to develop a novel 3D parallel mechanism robot, driven by three vertical-axis pneumatic actuators, with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the 3D parallel mechanism robot. In the mechanical system, the robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is first designed and analyzed to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse and forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg (D-H) notation. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales measure the positions of the three pneumatic actuators, and the 3D position of the end-effector is then calculated from these measurements by means of the kinematics. However, the calculated 3D position cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper: a stereo vision system collaborates with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and to calibrate the error between the actual and the calculated 3D position.
Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig of the proposed parallel mechanism pneumatic robot is set up. Thus, simulations and experiments for different complex 3D motion profiles of the robot end-effector can be successfully achieved. The desired, the actual and the calculated 3D position of the end-effector can be compared in the complex 3D motion control. PMID:22247676
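As a far simpler stand-in for the kinematics computed above via D-H notation, a planar two-link serial arm illustrates how joint angles map to an end-effector position; the link lengths here are made up for illustration:

```python
import math

def planar_2link_fk(theta1, theta2, l1=0.3, l2=0.2):
    """Forward kinematics of a planar 2-link arm (a toy stand-in for the
    paper's 3-DoF parallel mechanism; link lengths are illustrative).

    theta1: shoulder joint angle in radians
    theta2: elbow joint angle, measured relative to link 1
    Returns the (x, y) position of the end-effector.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Arm stretched along the x axis: approximately (0.5, 0.0).
print(planar_2link_fk(0.0, 0.0))
```

The paper's calibration problem arises exactly here: if the true link lengths differ from the nominal `l1`, `l2` by a manufacturing tolerance, the computed position drifts from the actual one, which is what the external stereo vision measurement corrects.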
Learning Long-Range Vision for an Offroad Robot
2008-09-01
Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems are short-range and extremely ... unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.
1988-06-08
develop a working experimental system which could demonstrate dexterous manipulation in a robotic assembly task. This type of work can generally be divided into ... D. Raviv discusses the development, implementation, and experimental evaluation of a new method for the reconstruction of 3D images from 2D vision data ... Research supervision by K. Loparo. A. "Moving Shadows Methods for Inferring Three Dimensional Surfaces," D. Raviv, Ph.D. Thesis; B. "Robotic Adaptive ...
Humanoids in Support of Lunar and Planetary Surface Operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier
2006-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair and operation of lunar/planetary habitats, bases and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the next decades and development spirals of Project Constellation. These milestones relate to a set of incremental challenges, the solving of which requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using the small-scale Fujitsu HOAP-2 humanoid is outlined.
Technology for robotic surface inspection in space
NASA Technical Reports Server (NTRS)
Volpe, Richard; Balaram, J.
1994-01-01
This paper presents on-going research in robotic inspection of space platforms. Three main areas of investigation are discussed: machine vision inspection techniques, an integrated sensor end-effector, and an orbital environment laboratory simulation. Machine vision inspection utilizes automatic comparison of new and reference images to detect on-orbit induced damage such as micrometeorite impacts. The cameras and lighting used for this inspection are housed in a multisensor end-effector, which also contains a suite of sensors for detecting temperature, gas leaks, proximity, and forces. To fully test all of these sensors, a realistic space platform mock-up has been created, complete with visual, temperature, and gas anomalies. Further, changing orbital lighting conditions are effectively mimicked by a robotic solar simulator. In the paper, each of these technology components is discussed, and experimental results are provided.
Linear Temporal Logic (LTL) Based Monitoring of Smart Manufacturing Systems.
Heddy, Gerald; Huzaifa, Umer; Beling, Peter; Haimes, Yacov; Marvel, Jeremy; Weiss, Brian; LaViers, Amy
2015-01-01
The vision of Smart Manufacturing Systems (SMS) includes collaborative robots that can adapt to a range of scenarios. This vision requires a classification of multiple system behaviors, or sequences of movement, that can achieve the same high-level tasks. Likewise, this vision presents unique challenges regarding the management of environmental variables in concert with discrete, logic-based programming. Overcoming these challenges requires targeted performance and health monitoring of both the logical controller and the physical components of the robotic system. Prognostics and health management (PHM) defines a field of techniques and methods that enable condition-monitoring, diagnostics, and prognostics of physical elements, functional processes, overall systems, etc. PHM is warranted in this effort given that the controller is vulnerable to program changes, which propagate in unexpected ways, logical runtime exceptions, sensor failure, and even bit rot. The physical components' health is affected by the wear and tear experienced by machines constantly in motion. The controller's faults are inherently discrete, while the physical wear builds up continuously over time. Such a disconnect poses unique challenges for PHM. This paper presents a robotic monitoring system that captures and resolves this disconnect. This effort leverages supervisory robotic control and model checking with linear temporal logic (LTL), presenting them as a novel monitoring system for PHM. The methodology has been demonstrated in a MATLAB-based simulator for an industry-inspired use-case in the context of PHM. Future work will use the methodology to develop adaptive, intelligent control strategies to evenly distribute wear on the joints of the robotic arms, maximizing the life of the system.
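Over a finite execution trace, the LTL response pattern G(trigger → F response) that such monitors check can be evaluated directly. A minimal sketch; the state fields and the trace below are invented for illustration, not the paper's model:

```python
def always(pred, trace):
    """G pred: pred holds in every state of the finite trace."""
    return all(pred(s) for s in trace)

def eventually(pred, trace):
    """F pred: pred holds in some state of the finite trace."""
    return any(pred(s) for s in trace)

def responds(trigger, response, trace):
    """G(trigger -> F response): every trigger state is eventually
    followed (inclusively) by a response state."""
    return all(
        eventually(response, trace[i:])
        for i, s in enumerate(trace) if trigger(s)
    )

# A toy controller trace: every issued command is eventually completed.
trace = [
    {"cmd": "pick", "done": False},
    {"cmd": None, "done": True},
    {"cmd": "place", "done": False},
    {"cmd": None, "done": True},
]
print(responds(lambda s: s["cmd"] is not None, lambda s: s["done"], trace))  # True
```

Finite-trace semantics like this (sometimes called LTLf) is what runtime monitors actually evaluate, since a live system only ever exposes a prefix of its infinite behavior.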
Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia
2012-06-01
Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
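The state-machine layer that links classified imagery to arm actions can be sketched as below. The labels and command names are invented for illustration; they are not the study's four imagery tasks:

```python
class PawnCupStateMachine:
    """Maps classified mental-imagery labels to robot-arm commands,
    keeping just enough state to make commands context-dependent."""

    def __init__(self):
        self.holding = False  # whether the gripper currently holds a pawn

    def step(self, label):
        """Consume one classifier output, return one arm command."""
        if label == "grasp" and not self.holding:
            self.holding = True
            return "pick_pawn"
        if label == "release" and self.holding:
            self.holding = False
            return "place_in_cup"
        return "idle"  # label is redundant in the current state

sm = PawnCupStateMachine()
print([sm.step(l) for l in ["grasp", "grasp", "release"]])
# ['pick_pawn', 'idle', 'place_in_cup']
```

The point of such a layer is robustness: a repeated or contextually impossible classification (a second "grasp" while already holding) is absorbed as "idle" rather than sent to the arm.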
Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor
Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong
2011-01-01
In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system; it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it can be applied to kinematically dissimilar robot systems without loss of generality. PMID:22164104
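A minimal particle swarm optimizer of the kind the abstract favors for such multimodal landscapes can be sketched as follows. The inertia and attraction coefficients and the 1-D Rastrigin test function are illustrative only; the paper's identification problem is multi-dimensional:

```python
import math
import random

def pso(f, lo, hi, n_particles=30, iters=200, seed=0):
    """Minimal 1-D particle swarm optimiser: each particle is pulled toward
    its own best-seen position and the swarm-wide best position."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]                # per-particle best positions
    gbest = min(best, key=f)     # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (best[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
        gbest = min(best, key=f)
    return gbest

# 1-D Rastrigin: highly multimodal, global minimum at x = 0 with f(0) = 0.
def rastrigin(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

print(pso(rastrigin, -5.12, 5.12))
```

A gradient-based or plain least-squares solver started in the wrong basin of such a landscape converges to the nearest local minimum, which matches the abstract's observation that the nonlinear least-squares optimizer often failed where the swarm did not.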
Alatise, Mary B; Hancke, Gerhard P
2017-09-21
Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of a six-degree-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection algorithm integrating speeded-up robust features (SURF) and the random sample consensus (RANSAC) algorithm was used to recognize a sample object in several captured images. As against conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data containing outliers. With SURF and RANSAC, improved accuracy is certain; this is because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data and root mean square errors (RMSEs).
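RANSAC's handling of outliers, as described above, can be illustrated with a simple 2-D line-fitting sketch. The data, threshold, and iteration count are made up; the paper applies RANSAC to feature-based object recognition, not line fitting:

```python
import random

def ransac_line(points, iters=100, tol=0.1, seed=1):
    """Fit y = m*x + c to points contaminated with outliers.
    Repeatedly samples a minimal set (2 points), hypothesizes a line,
    and keeps the hypothesis with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample, skip this hypothesis
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus three gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -15), (12, 90)]
(m, c), inliers = ransac_line(pts)
print(m, c, len(inliers))  # 2.0 1.0 20
```

A least-squares fit over all 23 points would be dragged toward the outliers; RANSAC recovers the true line because the outliers never dominate any large consensus set.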
Steering of an automated vehicle in an unstructured environment
NASA Astrophysics Data System (ADS)
Kanakaraju, Sampath; Shanmugasundaram, Sathish K.; Thyagarajan, Ramesh; Hall, Ernest L.
1999-08-01
The purpose of this paper is to describe a high-level path planning logic which processes data from a vision system and an ultrasonic obstacle avoidance system and steers an autonomous mobile robot between obstacles. The test bed was an autonomous robot built at the University of Cincinnati, and the logic was tested and debugged on this machine. Attempts had already been made to incorporate a fuzzy system on a similar robot, and this paper extends them to take advantage of the robot's ZTR capability. Using the integrated vision system, the vehicle senses its location and orientation. A rotating ultrasonic sensor is used to map the locations and sizes of possible obstacles. With these inputs, the fuzzy logic controls the speed and steering decisions of the robot. With this logic incorporated, Bearcat II has been very successful in avoiding obstacles. This was demonstrated at the Ground Robotics Competition conducted by the AUVS in June 1999, where it travelled a distance of 154 feet along a 10-ft-wide path strewn with obstacles. The logic proved to be a significant contributing factor in this feat.
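The fuzzy inference step described above can be sketched with triangular membership functions and weighted-average defuzzification. The membership ranges and rule outputs below are invented for illustration; they are not Bearcat II's actual rule base:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(obstacle_bearing_deg):
    """Toy fuzzy rule base: steer away from the side an obstacle is on.
    Returns a steering command in degrees (positive = steer right)."""
    left = triangular(obstacle_bearing_deg, -90, -45, 0)
    centre = triangular(obstacle_bearing_deg, -45, 0, 45)
    right = triangular(obstacle_bearing_deg, 0, 45, 90)
    # Rules: obstacle left -> steer right (+30), dead ahead -> hard right (+60),
    # obstacle right -> steer left (-30). Defuzzify by weighted average.
    num = left * 30 + centre * 60 + right * (-30)
    den = left + centre + right
    return num / den if den else 0.0

print(fuzzy_steer(-45))  # obstacle fully to the left -> 30.0
```

Because the memberships overlap, an obstacle at an intermediate bearing fires several rules at once and the output blends smoothly between them, which is what makes fuzzy steering less jerky than hard thresholding.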
Three degree-of-freedom force feedback control for robotic mating of umbilical lines
NASA Technical Reports Server (NTRS)
Fullmer, R. Rees
1988-01-01
The use of robotic manipulators for the mating and demating of umbilical fuel lines to the Space Shuttle Vehicle prior to launch is investigated. Force feedback control is necessary to minimize the contact forces which develop during mating. The objective is to develop and demonstrate a working robotic force control system. Initial experimental force control tests with an ASEA IRB-90 industrial robot using the system's Adaptive Control capabilities indicated that control stability would be a primary problem. An investigation of the ASEA system showed a 0.280-second software delay between force input commands and the output of command voltages to the servo system; this computational delay was identified as the primary cause of the instability. Tests on a second path into the ASEA's control computer, using the MicroVax II supervisory computer, showed that the time delay would be comparable, offering no stability improvement. An alternative approach was therefore developed in which the digital control system of the robot was disconnected and an analog electronic force controller was used to control the robot's servo system directly, allowing the robot to use force feedback control while in rigid contact with a moving three-degree-of-freedom target. Tests of this approach indicated adequate force feedback control even under worst-case conditions. A strategy for combining this force controller with the digitally controlled vision system was also developed: the system switches between the digital controller when using vision control and the analog controller when using force control, depending on whether or not the mating plates are in contact.
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design ideas for an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life, considering dependability and interaction. The development had three objectives: (1) develop an autonomous robot; (2) design the robot for mobility and robustness; and (3) develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering esthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for remote connection services and the educational purpose.
Utilizing Robot Operating System (ROS) in Robot Vision and Control
2015-09-01
ROS, originally designed by Willow Garage and currently maintained by the Open Source Robotics Foundation, is a powerful tool because it utilizes object ... Visualization: The Rviz package, developed by Willow Garage, comes standard with ROS and is a powerful visualization tool that allows users to visualize ...
A posthuman liturgy? Virtual worlds, robotics, and human flourishing.
Shatzer, Jacob
2013-01-01
In order to inspire a vision of biotechnology that affirms human dignity and human flourishing, the author poses questions about virtual reality and the use of robotics in health care. Using the concept of 'liturgy' and an anthropology of humans as lovers, the author explores how virtual reality and robotics in health care shape human moral agents, and how such shaping could influence the way we do or do not pursue a 'posthuman' future.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control; these tools are applied to robot navigation.
Insect vision: a few tricks to regulate flight altitude.
Floreano, Dario; Zufferey, Jean-Christophe
2010-10-12
A recent study sheds new light on the visual cues used by Drosophila to regulate flight altitude. The striking similarity with previously identified steering mechanisms provides a coherent basis for novel models of vision-based flight control in insects and robots. Copyright © 2010 Elsevier Ltd. All rights reserved.
From wheels to wings with evolutionary spiking circuits.
Floreano, Dario; Zufferey, Jean-Christophe; Nicoud, Jean-Daniel
2005-01-01
We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system, which maintains the image produced by solid-state-detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.
Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads
NASA Technical Reports Server (NTRS)
DiPaolo, Daniel
2003-01-01
The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads: the TracLabs Biclops pan-tilt-verge head and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionality offered by each of the stereo vision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and evaluate each of the stereo vision heads in terms of usefulness to the project. In the key areas of stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops had several advantages over the Zebra, such as lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.
Intelligent manipulation technique for multi-branch robotic systems
NASA Technical Reports Server (NTRS)
Chen, Alexander Y. K.; Chen, Eugene Y. S.
1990-01-01
New analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. Also, a novel framework of robot learning mechanism is introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrates fuzzy logic in commands, control, searching, and reasoning, the embedded expert system for nominal robotics knowledge implementation, and the self organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of SRA Advanced Robotic System (SRAARS) and the real time robot vision system is also reported. A decision was made to incorporate the Local Area Network (LAN) technology in the overall communication system.
User-centric design of a personal assistance robot (FRASIER) for active aging.
Padir, Taşkin; Skorinko, Jeanine; Dimitrov, Velin
2015-01-01
We present our preliminary results from the design process for developing the Worcester Polytechnic Institute's personal assistance robot, FRASIER, as an intelligent service robot for enabling active aging. The robot capabilities include vision-based object detection, tracking the user and help with carrying heavy items such as grocery bags or cafeteria trays. This work-in-progress report outlines our motivation and approach to developing the next generation of service robots for the elderly. Our main contribution in this paper is the development of a set of specifications based on the adopted user-centered design process, and realization of the prototype system designed to meet these specifications.
NASA Astrophysics Data System (ADS)
Madokoro, H.; Tsukada, M.; Sato, K.
2013-07-01
This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results of dynamic images using time-series images obtained using two different-size robots and according to movements respectively demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation of appearance changes of objects.
Semiautonomous teleoperation system with vision guidance
NASA Astrophysics Data System (ADS)
Yu, Wai; Pretlove, John R. G.
1998-12-01
This paper describes ongoing research on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. Because human operators' manual control of remote robots always suffers from reduced performance and difficulty in perceiving information from the remote site, a system with a certain level of intelligence and autonomy can help solve some of these problems, and this system has been developed for that purpose. It also serves as an experimental platform to test the idea of combining human and computer intelligence in teleoperation and to find the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system, and a graphical user interface that connects the operator to the remote robot. The system description is given in this paper, along with preliminary experimental results from the system evaluation.
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited for aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
NASA Astrophysics Data System (ADS)
Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki
We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor, with navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the elderly. This paper presents a recognition algorithm that finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. It also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas in the path of a person accompanied by the guide robot.
On the Use of a Low-Cost Thermal Sensor to Improve Kinect People Detection in a Mobile Robot
Susperregi, Loreto; Sierra, Basilio; Castrillón, Modesto; Lorenzo, Javier; Martínez-Otzeta, Jose María; Lazkano, Elena
2013-01-01
Detecting people is a key capability for robots that operate in populated environments. In this paper, we have adopted a hierarchical approach that combines classifiers created using supervised learning in order to identify whether a person is in the view-scope of the robot or not. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform. The set of sensors is set up by combining the rich data source offered by a Kinect sensor, which provides vision and depth at low cost, with a thermopile array sensor. Experimental results carried out with a mobile platform in a manufacturing shop floor and in a science museum have shown that the false positive rate achieved using any single cue is drastically reduced. The performance of our algorithm improves on other well-known approaches, such as C4 and histogram of oriented gradients (HOG). PMID:24172285
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional and autonomic aspects in social robot systems. This vision has evolved from efforts to consolidate the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The approach aims for generality, seeking universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.
1994-06-01
signals. Industrial robot controllers have several general-purpose ports which can be programmed within the manipulator program. In this way the gen ... well as a functional end-effector was developed and evaluated. The workcell was found technologically feasible; however, further experimental work
Optical Flow-Based State Estimation for Guided Projectiles
2015-06-01
Computer Vision and Image Understanding. 2012;116(5):606–633. 3. Corke P, Lobo J, Dias J. An introduction to inertial and visual sensing. The...International Journal of Robotics Research. 2007;26(6):519–535. 4. Hutchinson S, Hager GD, Corke PI. A tutorial on visual servo control. Robotics and
2018-01-01
Although the use of surgical robots is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to the limited vision provided by a laparoscope, which may compromise situation awareness and cause surgical errors requiring rapid emergency conversion to open surgery. To assist the surgeon's situation awareness and support preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises localization of the object of interest using texture features and morphological information, and tracking of the object based on a Kalman filter for robustness with reduced error. The average recall and precision of instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of hemorrhage detection in two prostate surgery videos was 98%. The results demonstrate the robustness of the automatic intraoperative object detection and tracking, which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery. PMID:29854366
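The tracking stage above rests on a Kalman filter. As a hedged sketch (not the authors' implementation, which tracks instruments and hemorrhage regions in 2-D image coordinates), a minimal 1-D constant-position filter shows how the predict/update cycle smooths noisy detections; the process noise q, measurement noise r, and measurement sequence are illustrative assumptions.

```python
# Minimal 1-D Kalman filter: smooths a noisy detection track and damps
# outliers. All parameters and measurements here are assumed values.
def kalman_track(measurements, q=0.01, r=0.5):
    """Return smoothed estimates for a sequence of noisy 1-D positions."""
    x, p = measurements[0], 1.0      # initial state and covariance
    estimates = []
    for z in measurements:
        p = p + q                    # predict: covariance grows by process noise
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update: move toward the measurement
        p = (1.0 - k) * p            # updated covariance shrinks
        estimates.append(x)
    return estimates

smoothed = kalman_track([10.0, 10.4, 9.8, 10.2, 30.0, 10.1])
```

The gain k balances trust between the prediction and the measurement, so the spurious jump to 30.0 is pulled back toward the running estimate instead of being followed outright.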
Video rate color region segmentation for mobile robotic applications
NASA Astrophysics Data System (ADS)
de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline
2005-08-01
Color regions can be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing time. In this paper, we propose a new real-time (i.e., video-rate) color region segmentation followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and a comparison with other methods, in terms of result quality and processing time, are provided. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE project, for which this segmentation was developed, and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentation methods.
Stereo vision tracking of multiple objects in complex indoor environments.
Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro
2010-01-01
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot's environment; it then classifies building elements (ceiling, walls, columns and so on) apart from the remaining items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of detected obstacles. The performance of the final system has been tested against state-of-the-art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.
Linear Temporal Logic (LTL) Based Monitoring of Smart Manufacturing Systems
Heddy, Gerald; Huzaifa, Umer; Beling, Peter; Haimes, Yacov; Marvel, Jeremy; Weiss, Brian; LaViers, Amy
2017-01-01
The vision of Smart Manufacturing Systems (SMS) includes collaborative robots that can adapt to a range of scenarios. This vision requires a classification of multiple system behaviors, or sequences of movement, that can achieve the same high-level tasks. Likewise, this vision presents unique challenges regarding the management of environmental variables in concert with discrete, logic-based programming. Overcoming these challenges requires targeted performance and health monitoring of both the logical controller and the physical components of the robotic system. Prognostics and health management (PHM) defines a field of techniques and methods that enable condition-monitoring, diagnostics, and prognostics of physical elements, functional processes, overall systems, etc. PHM is warranted in this effort given that the controller is vulnerable to program changes, which propagate in unexpected ways, logical runtime exceptions, sensor failure, and even bit rot. The physical component’s health is affected by the wear and tear experienced by machines constantly in motion. The controller’s source of faults is inherently discrete, while the latter occurs in a manner that builds up continuously over time. Such a disconnect poses unique challenges for PHM. This paper presents a robotic monitoring system that captures and resolves this disconnect. This effort leverages supervisory robotic control and model checking with linear temporal logic (LTL), presenting them as a novel monitoring system for PHM. This methodology has been demonstrated in a MATLAB-based simulator for an industry inspired use-case in the context of PHM. Future work will use the methodology to develop adaptive, intelligent control strategies to evenly distribute wear on the joints of the robotic arms, maximizing the life of the system. PMID:28730154
Detection of oranges from a color image of an orange tree
NASA Astrophysics Data System (ADS)
Weeks, Arthur R.; Gallagher, A.; Eriksson, J.
1999-10-01
The progress of robotic and machine vision technology has increased the demand for sophisticated methods for performing automatic harvesting of fruit. The harvesting of fruit, until recently, has been performed manually and is quite labor intensive. An automatic robot harvesting system that uses machine vision to locate and extract the fruit would free the agricultural industry from the ups and downs of the labor market. The environment in which robotic fruit harvesters must work presents many challenges due to the inherent variability from one location to the next. This paper takes a step towards this goal by outlining a machine vision algorithm that detects and accurately locates oranges from a color image of an orange tree. Previous work in this area has focused on differentiating the orange regions from the rest of the picture rather than locating the actual oranges themselves. Failure to locate the oranges, however, leads to a reduced number of successful pick attempts. This paper presents a new approach to orange region segmentation in which the circumferences of individual oranges, as well as partially occluded oranges, are located. Accurately defining the circumference of each orange allows a robotic harvester to cut the stem of the orange, either by scanning the top of the orange with a laser or by directing a robotic arm towards the stem to cut it automatically. A modified version of the K-means algorithm is used to initially segment the oranges from the canopy of the orange tree. Morphological processing is then used to locate occluded oranges, and an iterative circle-finding algorithm is used to define the circumference of the segmented oranges.
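The initial segmentation step uses a modified K-means. A plain 1-D K-means over hue values, sketched below, illustrates the basic fruit/canopy split; the hue samples and the choice of k = 2 are illustrative assumptions, and the authors' specific modification and color space are not reproduced here.

```python
# Toy 1-D K-means: split pixel hues into "orange" vs "canopy" clusters.
# Hue values and cluster count are assumed, not the paper's data.
def kmeans_1d(values, k=2, iters=25):
    """Cluster scalar values into k groups (k >= 2); returns (centers, labels)."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]  # spread init
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        # Update step: move each center to the mean of its members.
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Hues near 0.08 stand in for orange fruit, hues near 0.30 for green canopy.
hues = [0.07, 0.09, 0.08, 0.31, 0.29, 0.30, 0.10, 0.28]
centers, labels = kmeans_1d(hues)
```

In a real pipeline the clustering would run over 2-D or 3-D color vectors per pixel; the scalar version keeps the assignment/update structure visible.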
Data acquisition and analysis of range-finding systems for spacing construction
NASA Technical Reports Server (NTRS)
Shen, C. N.
1981-01-01
For future space missions, completely autonomous robotic machines will be required to free astronauts from routine chores of equipment maintenance, servicing of faulty systems, etc., and to extend human capabilities in hazardous environments full of cosmic and other harmful radiation. In places with high radiation and uncontrollable ambient illumination, TV-camera-based vision systems cannot work effectively. However, a vision system utilizing directly measured range information from a time-of-flight laser rangefinder can successfully operate in these environments. Such a system is independent of proper illumination conditions, and the interfering effects of intense radiation of all kinds are eliminated by the tuned input of the laser instrument. Processing the range data according to certain decision, stochastic estimation and heuristic schemes, the laser-based vision system recognizes known objects and thus provides sufficient information to the robot's control system, which can develop strategies for various objectives.
Identification and location of catenary insulator in complex background based on machine vision
NASA Astrophysics Data System (ADS)
Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao
2018-04-01
Locating the insulator precisely is an important premise for fault detection, yet current localization algorithms for insulators in catenary inspection images are not accurate. A target recognition and localization method based on binocular vision combined with SURF features is therefore proposed. First, because the insulator sits in a complex environment, SURF features are used to achieve coarse positioning of the target. Then the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving recognition and fine localization of the target. Finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.
A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033
NASA Technical Reports Server (NTRS)
Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.
2012-01-01
The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033, with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for the quadruped robot autonomous navigation system while walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problem that images collected by stereo sensors have large regions with similar grayscale, and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method can yield a high stereo matching ratio and reconstruct 3D scenes quickly and efficiently.
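The final step above, recovering 3D coordinates from matched pixel pairs, follows the standard rectified binocular imaging model. A minimal sketch of that model is below; the focal length, baseline, principal point, and pixel values are illustrative assumptions, not the paper's calibration.

```python
# Depth from disparity under a rectified pinhole stereo model (a generic
# sketch; camera parameters below are assumed, not calibrated values).
def stereo_point(xl, xr, y, f=700.0, baseline=0.12, cx=320.0, cy=240.0):
    """Triangulate a matched pixel pair (same row y) into camera X, Y, Z [m]."""
    disparity = xl - xr              # pixels; must be positive for a valid match
    if disparity <= 0:
        raise ValueError("non-positive disparity: no valid depth")
    z = f * baseline / disparity     # depth along the optical axis
    x = (xl - cx) * z / f            # back-project through the left camera
    y3d = (y - cy) * z / f
    return x, y3d, z

X, Y, Z = stereo_point(xl=360.0, xr=340.0, y=260.0)
# disparity = 20 px, so Z = 700 * 0.12 / 20 = 4.2 m
```

Matched edge pixel pairs from the dual-constraint step would each pass through this triangulation to populate the terrain point cloud.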
Modular robotic assembly of small devices.
Frauenfelder, M
2000-01-01
The use of robots for the automatic assembly of devices of up to 100 x 100 x 100 mm is relatively uncommon today. Insufficient return on investment and the long lead times that are required have been limiting factors. Innovations in vision technology have led to the development of robotic assembly systems that employ flexible part-feeding. The benefits of these systems are described, which suggest that better ratios of price to productivity and deployment times are now achievable.
Maravall, Darío; de Lope, Javier; Fuentes, Juan P
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
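The entropy criterion described above is the Shannon entropy of an image's intensity histogram: flat, single-object patches score low, busy multi-object patches score high. A minimal sketch follows, with tiny illustrative patches standing in for camera frames.

```python
# Shannon entropy of a grayscale image's intensity histogram (the quantity
# the navigation strategy thresholds); the 2x2 patches are assumed examples.
import math
from collections import Counter

def image_entropy(pixels):
    """Entropy (bits) of the intensity histogram of a flat pixel list."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform_patch = [128, 128, 128, 128]   # flat region: likely a single object
busy_patch = [0, 85, 170, 255]         # many intensities: likely several objects

low = image_entropy(uniform_patch)
high = image_entropy(busy_patch)
```

A robot applying the paper's rule would compare such entropy values against a threshold to decide between landmark matching and obstacle-avoidance behavior.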
Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan
2013-01-01
In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spiking silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer that supplies the spikes to the robot (using PFM). All the layers perform their tasks in a spike-processing mode and communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and low power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also demonstrate the suitability of AER as a communication protocol between processing and actuation. PMID:24264330
Using robots to help people habituate to visible disabilities.
Riek, Laurel D; Robinson, Peter
2011-01-01
We explore a new way of using robots as human-human social facilitators: inter-ability communication. This refers to communication between people with disabilities and those without disabilities. We have interviewed people with head and facial movement disorders (n = 4), and, using a vision-based approach, recreated their movements on our 27 degree-of-freedom android robot. We then conducted an exploratory experiment (n = 26) to see if the robot might serve as a suitable tool to allow people to practice inter-ability interaction on a robot before doing it with a person. Our results suggest a robot may be useful in this manner. Furthermore, we have found a significant relationship between people who hold negative attitudes toward robots and negative attitudes toward people with disabilities. © 2011 IEEE
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
Low computation vision-based navigation for a Martian rover
NASA Technical Reports Server (NTRS)
Gavin, Andrew S.; Brooks, Rodney A.
1994-01-01
Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end, in a supervised mode, to map raw input images to steering directions. The images in the data sets were collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment tracks a desired path composed of straight and curved lines; the goal of the obstacle avoidance experiment is to avoid obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and accurately avoid obstacles in the room. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
Towards Supervising Remote Dexterous Robots Across Time Delay
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Wheeler, Kevin; Rabe, Ken
2006-01-01
The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will need to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars exploration, uploading a full day's plan seems excessive. An approach for controlling dexterous robots under intermediate time delay is presented, in which software running within a ground control cockpit predicts the intention of an immersed robot supervisor, and the remote robot then autonomously executes the supervisor's intended tasks. Initial results are presented.
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation scheme. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. As the robot shifts in position, the reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
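The core idea above, inferring the robot's motion from how a projected reference frame shifts between views, can be illustrated in 2-D. The two-point rigid fit below is a hedged, simplified analogue of the paper's 3-D registration, and the point sets are illustrative assumptions.

```python
# 2-D sketch: recover rotation and translation from two observations of a
# pair of projected reference points (a simplification of 3-D registration).
import math

def rigid_motion_2d(p, q):
    """Estimate rotation (rad) and translation mapping point pair p -> q."""
    # Rotation angle from the direction change of the segment p[0]->p[1].
    a0 = math.atan2(p[1][1] - p[0][1], p[1][0] - p[0][0])
    a1 = math.atan2(q[1][1] - q[0][1], q[1][0] - q[0][0])
    theta = a1 - a0
    c, s = math.cos(theta), math.sin(theta)
    # Translation takes the rotated first point onto its new observation.
    tx = q[0][0] - (c * p[0][0] - s * p[0][1])
    ty = q[0][1] - (s * p[0][0] + c * p[0][1])
    return theta, (tx, ty)

before = [(0.0, 0.0), (1.0, 0.0)]
after = [(2.0, 1.0), (2.0, 2.0)]     # frame rotated 90 degrees and shifted
theta, t = rigid_motion_2d(before, after)
```

In the real system the same inference runs in 3-D over more than two laser-projected points, with camera registration handling the projective geometry.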
[RESEARCH PROGRESS OF PERIPHERAL NERVE SURGERY ASSISTED BY Da Vinci ROBOTIC SYSTEM].
Shen, Jie; Song, Diyu; Wang, Xiaoyu; Wang, Changjiang; Zhang, Shuming
2016-02-01
To summarize the research progress of peripheral nerve surgery assisted by the Da Vinci robotic system, recent domestic and international articles on the topic were reviewed and summarized. Compared with conventional microsurgery, peripheral nerve surgery assisted by the Da Vinci robotic system has distinctive advantages, such as elimination of physiological tremor and three-dimensional high-resolution vision. It is possible to perform robot-assisted limb nerve surgery using either the traditional brachial plexus approach or a minimally invasive approach. The development of the Da Vinci robotic system has revealed new perspectives in peripheral nerve surgery, but the field is still at an initial stage, and more basic and clinical research is needed.
Self-localization for an autonomous mobile robot based on an omni-directional vision system
NASA Astrophysics Data System (ADS)
Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin
2013-12-01
In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm detects the corners of field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system.
However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot, so an image transformation was required to implement self-localization. Second, we used an approach that transforms the omni-directional images into panoramic images; the distortion of the white lines can thus be corrected through the transformation. The interest points that form the corners of the landmark were then located using the features from accelerated segment test (FAST) algorithm, which examines a circle of sixteen pixels surrounding each corner candidate and serves as a high-speed feature detector for real-time frame-rate applications. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented to choose among the corners obtained from the FAST algorithm and localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error on a soccer field measuring 600 cm x 400 cm.
Vision Sensor-Based Road Detection for Field Robot Navigation
Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen
2015-01-01
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514
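GrowCut itself is specified in the original GrowCut paper; the following is a simplified single-strength-channel sketch of the "seeds proliferate and try to occupy their neighbors" idea on a toy intensity image. The damping function and convergence test are illustrative assumptions, not the authors' exact formulation:

```python
def growcut(image, seeds, iters=50):
    """Simplified GrowCut: labelled seed cells iteratively 'attack' their
    4-neighbours; an attack succeeds when the attacker's strength, damped
    by intensity difference, strictly exceeds the defender's strength.
    image: rows of intensities in [0, 1]; seeds: rows of 0 (unlabelled)
    or positive integer labels."""
    h, w = len(image), len(image[0])
    label = [[seeds[r][c] for c in range(w)] for r in range(h)]
    strength = [[1.0 if seeds[r][c] else 0.0 for c in range(w)] for r in range(h)]
    for _ in range(iters):
        changed = False
        for r in range(h):
            for c in range(w):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and label[nr][nc]:
                        # Damping in [0, 1]: similar intensities attack harder
                        g = 1.0 - abs(image[r][c] - image[nr][nc])
                        atk = g * strength[nr][nc]
                        if atk > strength[r][c]:
                            strength[r][c] = atk
                            label[r][c] = label[nr][nc]
                            changed = True
        if not changed:
            break
    return label
```

In the paper the seeds come from unsupervised superpixel clustering and the result is then refined by a CRF; here two hand-placed seeds suffice to show the competition dynamics.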
Kang, Chang Moo; Chi, Hoon Sang; Hyeung, Woo Jin; Kim, Kyung Sik; Choi, Jin Sub; Kim, Byong Ro
2007-01-01
With the advancement of laparoscopic instruments and computer science, complex surgical procedures are expected to be performed safely by robot-assisted telemanipulative laparoscopic surgery. The da Vinci system (Intuitive Surgical, Mountain View, CA, USA) has become available in many surgical fields. The wrist-like movements of the instrument's tip, as well as 3-dimensional vision, can be expected to facilitate more complex laparoscopic procedures. Here, we present the first Korean experience of da Vinci robot-assisted laparoscopic cholecystectomy and discuss the introduction and perspectives of this robotic system. PMID:17594166
Manning, Todd G; Papa, Nathan; Perera, Marlon; McGrath, Shannon; Christidis, Daniel; Khan, Munad; O'Beirne, Richard; Campbell, Nicholas; Bolton, Damien; Lawrentschuk, Nathan
2018-03-01
Laparoscopic lens fogging (LLF) hampers vision and impedes operative efficiency. Attempts to reduce LLF have led to the development of various anti-fogging fluids and warming devices, but limited literature exists directly comparing these techniques. We constructed a model peritoneum to simulate LLF and to compare the efficacy of various anti-fogging techniques. The intraperitoneal space was simulated using a suction bag suspended within an 8 L container of water, and LLF was induced by varying the temperature and humidity within the model peritoneum. Various anti-fogging techniques were assessed, including scope warmers, FRED™, Resoclear™, chlorhexidine, betadine and immersion in heated saline. These products were trialled with and without the use of a disposable scope warmer. Vision scores were evaluated by the same investigator for all tests and rated according to a predetermined scale. Fogging was assessed for each product or technique 30 times and a mean vision rating was recorded. All products tested imparted some benefit, but FRED™ performed better than all other techniques. Betadine and Resoclear™ performed no better than the use of a scope warmer alone, and immersion in saline prior to insertion resulted in decreased vision ratings. The robotic scope did not produce LLF within the model. For standard laparoscopes, the most effective preventative measure was FRED™ applied to a pre-warmed scope, and the robotic laparoscope performed superiorly regarding LLF compared with the standard laparoscope.
Concept and design philosophy of a person-accompanying robot
NASA Astrophysics Data System (ADS)
Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi
1999-01-01
This paper proposes a person-accompanying robot as a novel human-collaborative robot: a legged mobile robot able to follow a person using its vision. Toward the future aging society, human collaboration and human support are required as novel applications of robots. Such human-collaborative robots share the same space with humans, but conventional robots are isolated from humans and lack the capability to observe them. To collaborate with and support humans properly, a human-collaborative robot must be able to observe and recognize humans; study of this human-observing function is crucial to realizing novel robots such as service and pet robots. The authors are currently implementing a prototype of the proposed accompanying robot. As a basis for the human-observing function of the prototype robot, we have realized face tracking using skin-color extraction and correlation-based tracking. We have also developed a method for the robot to pick up human voices clearly and remotely by using microphone arrays. Results of these preliminary studies suggest the feasibility of the proposed robot.
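The skin-color extraction step mentioned above can be sketched with a classic rule-based RGB test (thresholds in the style of the widely cited Peer et al. rule; the exact classifier the authors used is not given in the abstract):

```python
def is_skin(r, g, b):
    """Cheap per-pixel RGB skin test (classic rule-based thresholds):
    bright enough, not grey, and red-dominant."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """image: rows of (r, g, b) tuples with 0-255 channels -> rows of 0/1."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]
```

A face tracker would then run connected-component or correlation tracking on the binary mask rather than the raw image.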
NASA Astrophysics Data System (ADS)
Shatravin, V.; Shashev, D. V.
2018-05-01
Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. The results of research worldwide prove the efficiency of vision systems in autonomous robotic devices; however, the use of these systems is limited by the computational and energy resources available on the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for realizing complex image-processing algorithms and real-time image analysis in autonomous robotic devices.
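The morphological operations the paper maps onto reconfigurable hardware are standard: grayscale dilation and erosion replace each pixel with the max (respectively min) over a local window. A plain software reference, useful for checking a hardware implementation, might look like this:

```python
def dilate(img, k=1):
    """Grayscale dilation: each pixel becomes the max over a (2k+1)^2
    window, clipped at the image border."""
    h, w = len(img), len(img[0])
    return [[max(img[rr][cc]
                 for rr in range(max(0, r - k), min(h, r + k + 1))
                 for cc in range(max(0, c - k), min(w, c + k + 1)))
             for c in range(w)] for r in range(h)]

def erode(img, k=1):
    """Grayscale erosion: min over the same window (dual of dilation)."""
    h, w = len(img), len(img[0])
    return [[min(img[rr][cc]
                 for rr in range(max(0, r - k), min(h, r + k + 1))
                 for cc in range(max(0, c - k), min(w, c + k + 1)))
             for c in range(w)] for r in range(h)]
```

Because each output pixel depends only on a fixed neighbourhood, the operation parallelizes naturally onto a cellular reconfigurable computing environment, which is the point of the paper.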
Robots Would Couple And Uncouple Fluid And Electrical Lines
NASA Technical Reports Server (NTRS)
Del Castillo, Eduardo Lopez; Davis, Virgil; Ferguson, Bob; Reichle, Garland
1992-01-01
Robots make and break connections between umbilical plates and mating connectors on rockets about to be launched. Sensing and control systems include vision, force, and torque subsystems. Enhances safety by making it possible to couple and uncouple umbilical plates quickly, without exposing human technicians to hazards of leaking fuels and oxidizers. Significantly reduces time spent to manually connect umbilicals. Robots based on similar principles used in refueling of National AeroSpace Plane (NASP) and satellites and orbital transfer vehicles in space.
Mobile robot exploration and navigation of indoor spaces using sonar and vision
NASA Technical Reports Server (NTRS)
Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth
1994-01-01
Integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching the regions found with the a priori map, the robot became lost very quickly. If the low-level sensing of the world is not working, then high-level reasoning or map making will be unsuccessful.
Leader/Follower Behaviour Using the SIFT Algorithm for Object Recognition
2006-06-01
…more complex convoying operations that would use machine vision based on the detection of a leader. Future work: given the…
ERIC Educational Resources Information Center
Foulds, Richard, Ed.
The monograph is a collection of papers on the role of robotics in rehabilitation. The first four papers represent contributions from other countries: "Spartacus and Manus: Telethesis Developments in France and the Netherlands" (H. Kwee); "A Potential Application in Early Education and a Possible Role for a Vision System in a Workstation Based…
Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things
2003-06-01
The visions behind the Cog Project were to build a "robot baby," which could interact with people and objects, imitate the motions of its teachers, and even… though. A very elaborate animatronic motor controller can produce very life-like canned motion, although the controller itself bears little resemblance…
Developing operation algorithms for vision subsystems in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Shikhman, M. V.; Shidlovskiy, S. V.
2018-05-01
The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients (HOG) and the support-vector machine (SVM) method. The combination of these methods allows successful detection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
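The HOG descriptor at the heart of this pipeline can be sketched for a single cell: compute central-difference gradients, bin the unsigned orientation over [0, 180) degrees, and weight each vote by gradient magnitude. Bin count and normalization details vary between implementations; this minimal version omits block normalization:

```python
import math

def hog_cell(patch, bins=9):
    """Histogram of oriented gradients for one cell. patch: rows of
    intensities. Uses central differences on interior pixels and
    magnitude-weighted votes into unsigned-orientation bins."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = patch[r][c + 1] - patch[r][c - 1]
            gy = patch[r + 1][c] - patch[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / bins)) % bins] += mag
    return hist
```

Concatenated (and normalized) cell histograms form the feature vector that the SVM classifies as person/non-person.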
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
Experimental Semiautonomous Vehicle
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Mishkin, Andrew H.; Litwin, Todd E.; Matthies, Larry H.; Cooper, Brian K.; Nguyen, Tam T.; Gat, Erann; Gennery, Donald B.; Firby, Robert J.; Miller, David P.;
1993-01-01
Semiautonomous rover vehicle serves as testbed for evaluation of navigation and obstacle-avoidance techniques. Designed to traverse variety of terrains. Concepts developed applicable to robots for service in dangerous environments as well as to robots for exploration of remote planets. Called Robby, vehicle 4 m long and 2 m wide, with six 1-m-diameter wheels. Mass of 1,200 kg and surmounts obstacles as large as 1 1/2 m. Optimized for development of machine-vision-based strategies and equipped with complement of vision and direction sensors and image-processing computers. Front and rear cabs steer and roll with respect to centerline of vehicle. Vehicle also pivots about central axle, so wheels comply with almost any terrain.
Localization of Mobile Robots Using Odometry and an External Vision Sensor
Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina
2010-01-01
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318
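The paper's sequential Bayesian inference fuses odometry (prediction) with external-camera measurements (correction). A minimal scalar Kalman sketch of that fusion pattern, with noise values chosen purely for illustration (the paper's actual model is a full pose estimator, not this 1D toy):

```python
def kalman_step(x, p, u, z, q=0.05, r=0.1):
    """One predict/update cycle for a scalar state.
    x, p: current estimate and variance; u: odometry increment;
    z: external camera position measurement; q, r: process and
    measurement noise variances (illustrative values)."""
    x_pred = x + u                 # predict with odometry
    p_pred = p + q                 # odometry noise inflates uncertainty
    k = p_pred / (p_pred + r)      # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

The same structure explains the robustness to occlusion claimed above: when no camera measurement arrives, only the predict half runs and the variance simply grows until the robot is seen again.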
Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B
2013-01-01
The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P < 0.001). However, there was no significant difference in the maximum force applied by the novices to the mitral valve during suturing (P = 0.7) and suture tying (P = 0.6) using either 2D or 3D visualization. The mean time required and forces applied by both the experts and the novices were significantly less using the conventional surgical technique than when using the robotic system with either 2D or 3D vision (P < 0.001). Despite high-quality binocular images, both the experts and the novices applied significantly more force to the cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.
Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping
NASA Astrophysics Data System (ADS)
Ignakov, Dmitri
A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.
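The thesis fits a general ellipsoid to the object's 3D points; as a deliberately simplified stand-in, the sketch below fits an axis-aligned ellipsoid from per-axis means and standard deviations and evaluates the algebraic distance used to score whether a point belongs to the object or the background. The scale factor and the axis-aligned restriction are assumptions for illustration:

```python
import math

def fit_axis_aligned_ellipsoid(points, scale=2.0):
    """Crude axis-aligned ellipsoid: centre = per-axis mean,
    semi-axes = scale * per-axis standard deviation."""
    n, dim = len(points), len(points[0])
    centre = [sum(p[d] for p in points) / n for d in range(dim)]
    axes = [scale * math.sqrt(sum((p[d] - centre[d]) ** 2 for p in points) / n)
            for d in range(dim)]
    return centre, axes

def algebraic_distance(p, centre, axes):
    """Negative inside the ellipsoid, zero on its surface, positive outside."""
    return sum(((p[d] - centre[d]) / axes[d]) ** 2 for d in range(len(p))) - 1.0
```

Points with large positive algebraic distance are unlikely to belong to the object, which is how the distance feeds the segmentation cues.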
Development and validation of a low-cost mobile robotics testbed
NASA Astrophysics Data System (ADS)
Johnson, Michael; Hayes, Martin J.
2012-03-01
This paper considers the design, construction and validation of a low-cost experimental robotic testbed, which allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of different mobile robotic applications, for validating theoretical as well as practical research work in the field of digital control, mobile robotics, graphical programming and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real-time using the overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and associated tracking system, such as radial lens distortion and the selection of robot identifier templates are clearly addressed. The testbed performance is quantified and several experiments involving LEGO Mindstorm NXT and Merlin System MiaBot robots are discussed.
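The radial lens distortion problem mentioned above is conventionally handled with the Brown radial model; correcting a tracked point then amounts to inverting that model, e.g. by fixed-point iteration. The coefficients below are illustrative, not the testbed's calibrated values:

```python
def distort(p, k1, k2=0.0):
    """Brown radial model on a normalized image point:
    x_d = x * (1 + k1*r^2 + k2*r^4)."""
    x, y = p
    r2 = x * x + y * y
    f = 1 + k1 * r2 + k2 * r2 * r2
    return (x * f, y * f)

def undistort(pd, k1, k2=0.0, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly divide
    the distorted point by the factor evaluated at the current estimate."""
    x, y = pd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1 + k1 * r2 + k2 * r2 * r2
        x, y = pd[0] / f, pd[1] / f
    return (x, y)
```

For the moderate distortion of typical overhead lenses the iteration converges in a handful of steps, so it is cheap enough to run per tracked robot marker per frame.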
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.
Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
2018-01-01
In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
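The hierarchical k-means clustering of estimated positions can be sketched as a two-level pipeline: cluster positions into coarse "places," then sub-cluster within each. This plain k-means with naive initialization (first k points) is a generic sketch, not the hMLDA model or the authors' implementation:

```python
def kmeans(points, k, iters=20):
    """Plain 2D k-means; naive init from the first k points is enough
    for a sketch (a real system would use k-means++ or similar)."""
    cents = [points[i] for i in range(k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - cents[i][0]) ** 2 + (p[1] - cents[i][1]) ** 2)
            groups[j].append(p)
        cents = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                 if g else cents[i] for i, g in enumerate(groups)]
    return cents, groups

def hierarchical_places(points, k_top=2, k_sub=2):
    """Two-level spatial hierarchy: coarse places, then sub-places within each."""
    _, top = kmeans(points, k_top)
    return [kmeans(g, min(k_sub, len(g)))[1] for g in top]
```

The coarse level corresponds to utterances like "I am in my home" and the sub-level to "I am in front of the table," which is the abstraction-selection ability the paper targets.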
Telerobotic controller development
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, Ken; Rhoades, Don
1987-01-01
To meet the needs and growth of NASA's space station, a modular and generic approach to robotic control was developed that provides near-term implementation with low development cost and the capability to grow into more autonomous systems. The method uses a vision-based robotic controller and a compliant hand integrated with the Remote Manipulator System arm on the Orbiter. A description of the hardware and its system integration is presented.
NASA Astrophysics Data System (ADS)
Xu, Weidong; Lei, Zhu; Yuan, Zhang; Gao, Zhenqing
2018-03-01
The application of visual recognition technology to industrial robot pick-and-place operations is one of the key tasks in the field of robot research. In order to improve the efficiency and intelligence of material sorting on the production line, and especially to realize the sorting of scattered items, a robot target-recognition and positioning platform based on binocular vision was researched and developed. Images are collected by a binocular camera and preprocessed; the Harris operator is used to detect corners, the Canny operator to extract edges, and Hough-transform and chain-code recognition to identify the target in the image. The coordinates of each vertex of the target are obtained, the spatial position and posture of the target item are calculated, and the information needed for grasping is determined and transmitted to the robot to control the grasping operation. Finally, this method is applied to the parcel-handling problem in the express sorting process. The experimental results show that the platform can effectively solve the problem of sorting loose parts, achieving efficient and intelligent sorting.
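The binocular positioning step rests on standard stereo triangulation: for a rectified pair, depth is focal length times baseline over horizontal disparity. A minimal sketch with illustrative camera parameters (the platform's actual calibration is not given in the abstract):

```python
def stereo_depth(xl, xr, f, baseline):
    """Depth from a rectified binocular pair.
    xl, xr: horizontal pixel coordinates of the same point in the left
    and right images; f: focal length in pixels; baseline in metres.
    Returns (X, Z) in the left camera frame: Z = f*B/d, X = xl*Z/f."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or mismatch")
    z = f * baseline / d
    x = xl * z / f  # back-project the left-image coordinate
    return x, z
```

Triangulating each detected vertex of the parcel this way yields the 3D coordinates from which position and posture are computed.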
Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor
Delbruck, Tobi; Lang, Manuel
2013-01-01
Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bounded by the frame period, e.g., 20 ms for a 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per-pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most “threatening” ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even for the fastest shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided. PMID:24311999
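The per-event (rather than per-frame) tracking idea can be sketched as a gated exponential moving average over the event stream: each event inside a gating radius nudges the tracked position, so latency is bounded by event arrival rather than a frame period. The gain and gate values are illustrative; the paper's multi-ball tracker is more elaborate:

```python
def track_events(events, alpha=0.2, gate=50.0):
    """Follow one moving object from a stream of (t, x, y) DVS events.
    The estimate moves toward each event that falls within a gating
    radius of the current estimate (exponential moving average)."""
    cx = cy = None
    for t, x, y in events:
        if cx is None:
            cx, cy = float(x), float(y)          # seed on the first event
        elif (x - cx) ** 2 + (y - cy) ** 2 <= gate ** 2:
            cx += alpha * (x - cx)               # gated EMA update
            cy += alpha * (y - cy)
    return cx, cy
```

Because an update costs a few arithmetic operations per event, thousands of updates per second are cheap, which is consistent with the sub-4% CPU load reported above.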
Robotic vision techniques for space operations
NASA Technical Reports Server (NTRS)
Krishen, Kumar
1994-01-01
Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.
Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network
2015-01-01
For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using 3 ultrasonic distance sensors, based on a backpropagation neural network, and a camera for face recognition. A 2.4 GHz video transmitter is used by the operator to direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system. PMID:26089863
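The abstract does not give the network architecture, so the following is only a plausible sketch of the backpropagation idea: a tiny 3-4-1 network mapping the three sonar readings to a steering command in (-1, 1), trained by gradient descent on a hand-made "turn away from the closest obstacle" rule. All sizes, learning rate, and training targets are assumptions:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class SonarNet:
    """3 sonar inputs -> 4 sigmoid hidden units -> 1 tanh steering output."""
    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(4)]
        self.b1 = [0.0] * 4
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(4)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        self.o = math.tanh(sum(w, * ()) if False else
                           sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.o

    def train_step(self, x, target, lr=0.2):
        """One backpropagation step on squared error; returns the sample loss."""
        o = self.forward(x)
        err = o - target
        do = err * (1 - o * o)                       # through tanh
        for j in range(4):
            dh = do * self.w2[j] * self.h[j] * (1 - self.h[j])  # through sigmoid
            self.w2[j] -= lr * do * self.h[j]
            for i in range(3):
                self.w1[j][i] -= lr * dh * x[i]
            self.b1[j] -= lr * dh
        self.b2 -= lr * do
        return err * err
```

A deployed controller would feed normalized left/front/right sonar distances in and use the output sign and magnitude as the steering command.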
The role of robotics in computer controlled polishing of large and small optics
NASA Astrophysics Data System (ADS)
Walker, David; Dunn, Christina; Yu, Guoyu; Bibby, Matt; Zheng, Xiao; Wu, Hsing Yu; Li, Hongyu; Lu, Chunlian
2015-08-01
Following formal acceptance by ESO of three 1.4m hexagonal off-axis prototype mirror segments, one circular segment, and certification of our optical test facility, we turn our attention to the challenge of segment mass-production. In this paper, we focus on the role of industrial robots, highlighting complementarity with Zeeko CNC polishing machines, and presenting results using robots to provide intermediate processing between CNC grinding and polishing. We also describe the marriage of robots and Zeeko machines to automate currently manual operations; steps towards our ultimate vision of fully autonomous manufacturing cells, with impact throughout the optical manufacturing community and beyond.
Improving Robotic Assembly of Planar High Energy Density Targets
NASA Astrophysics Data System (ADS)
Dudt, D.; Carlson, L.; Alexander, N.; Boehm, K.
2016-10-01
Increased quantities of planar assemblies for high energy density targets are needed with higher shot rates being implemented at facilities such as the National Ignition Facility and the Matter in Extreme Conditions station of the Linac Coherent Light Source. To meet this growing demand, robotics are used to reduce assembly time. This project studies how machine vision and force feedback systems can be used to improve the quantity and quality of planar target assemblies. Vision-guided robotics can identify and locate parts, reducing laborious manual loading of parts into precision pallets and associated teaching of locations. On-board automated inspection can measure part pickup offsets to correct part drop-off placement into target assemblies. Force feedback systems can detect pickup locations and apply consistent force to produce more uniform glue bond thickness, thus improving the performance of the targets. System designs and performance evaluations will be presented. Work supported in part by the US DOE under the Science Undergraduate Laboratory Internships Program (SULI) and ICF Target Fabrication DE-NA0001808.
The 3D model control of image processing
NASA Technical Reports Server (NTRS)
Nguyen, An H.; Stark, Lawrence
1989-01-01
Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well-understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.
Experiences with a Barista Robot, FusionBot
NASA Astrophysics Data System (ADS)
Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou
In this paper, we describe the implemented service robot, called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation and a smart device gateway. In the experiments conducted during the TechFest 2008 event, the FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on their request. Preliminary survey results indicate that the robot has the potential not only to aid in general robotics but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.
ERIC Educational Resources Information Center
Edelson, Edward
1980-01-01
Described are the historical uses and research involving the discipline of artificial intelligence. Topics discussed include: symbol manipulation; knowledge engineering; cognitive modeling; and language, vision and robotics. (Author/DS)
Sabanović, Selma
2014-06-01
Using interviews, participant observation, and published documents, this article analyzes the co-construction of robotics and culture in Japan through the technical discourse and practices of robotics researchers. Three cases from current robotics research--the seal-like robot PARO, the Humanoid Robotics Project HRP-2 humanoid, and 'kansei robotics' - show the different ways in which scientists invoke culture to provide epistemological grounding and possibilities for social acceptance of their work. These examples show how the production and consumption of social robotic technologies are associated with traditional crafts and values, how roboticists negotiate among social, technical, and cultural constraints while designing robots, and how humans and robots are constructed as cultural subjects in social robotics discourse. The conceptual focus is on the repeated assembly of cultural models of social behavior, organization, cognition, and technology through roboticists' narratives about the development of advanced robotic technologies. This article provides a picture of robotics as the dynamic construction of technology and culture and concludes with a discussion of the limits and possibilities of this vision in promoting a culturally situated understanding of technology and a multicultural view of science.
The use of multisensor data for robotic applications
NASA Technical Reports Server (NTRS)
Abidi, M. A.; Gonzalez, R. C.
1990-01-01
The feasibility of realistic autonomous space manipulation tasks using multisensory information is shown through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of the guiding light target, mating, and demating of the system were performed. Specifically, vision-driven techniques were implemented to determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system was also equipped with a force/torque sensor that continuously monitored the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, the two experiments were completed successfully and autonomously.
A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Mcfadyen, Aaron; Mejias, Luis
2016-01-01
This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.
Real-time tracking using stereo and motion: Visual perception for space robotics
NASA Technical Reports Server (NTRS)
Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann
1994-01-01
The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.
Real time AI expert system for robotic applications
NASA Technical Reports Server (NTRS)
Follin, John F.
1987-01-01
A computer controlled multi-robot process cell to demonstrate advanced technologies for the demilitarization of obsolete chemical munitions was developed. The methods through which the vision system and other sensory inputs were used by the artificial intelligence to provide the information required to direct the robots to complete the desired task are discussed. The mechanisms that the expert system uses to solve problems (goals), the different rule data base, and the methods for adapting this control system to any device that can be controlled or programmed through a high level computer interface are discussed.
NASA Astrophysics Data System (ADS)
Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki
2011-12-01
This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.
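The ego-motion compliance test described above can be sketched as a residual check between the measured optical flow and the flow predicted from the robot's own motion; the grid size, vectors, and threshold below are illustrative assumptions, not the paper's GPU-based dense-flow implementation.

```python
def roi_mask(measured_flow, predicted_flow, threshold=1.5):
    """Flag flow cells that deviate from the robot's predicted ego-motion.

    measured_flow / predicted_flow: lists of (dx, dy) vectors, one per
    grid cell of the omnidirectional image.
    Returns a list of booleans: True marks a candidate region of interest
    (a possible independently moving object, e.g. a person).
    """
    mask = []
    for (mx, my), (px, py) in zip(measured_flow, predicted_flow):
        residual = ((mx - px) ** 2 + (my - py) ** 2) ** 0.5
        mask.append(residual > threshold)
    return mask

# A moving person in cell 2 breaks compliance with the ego-motion field.
predicted = [(1.0, 0.0)] * 4          # flow induced by the robot's own motion
measured = [(1.1, 0.1), (0.9, 0.0), (4.0, 2.0), (1.0, -0.1)]
rois = roi_mask(measured, predicted)  # [False, False, True, False]
```

In the paper the flagged regions would then be passed to the shape-based human/nonhuman classifier.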
2013-10-18
SUBJECT TERMS: bio-inspired trajectory generation, in-situ obstacle avoidance, low-cost LEGO robots, vision-based control on a low-cost robot testbed.
Insect-Based Vision for Autonomous Vehicles: A Feasibility Study
NASA Technical Reports Server (NTRS)
Srinivasan, Mandyam V.
1999-01-01
The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) to explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) to study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.
Vision Algorithms Catch Defects in Screen Displays
NASA Technical Reports Server (NTRS)
2014-01-01
Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.
A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G
2015-02-01
Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. This system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.
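The effect of the dual control strategy on trajectory-following RMS error can be illustrated with a toy example: an open-loop pass with aiming offsets, followed by a vision-based correction that subtracts the camera-observed offset. All numbers are illustrative placeholders, not the paper's data.

```python
def rms_error(planned, observed):
    """Root-mean-square trajectory-following error, same units as input."""
    n = len(planned)
    return (sum((p - o) ** 2 for p, o in zip(planned, observed)) / n) ** 0.5

def apply_correction(commands, observed_offsets):
    """Vision-based correction step: subtract the aiming offset the camera
    measured at each point from the open-loop command (simplified sketch of
    the dual open-loop + visual-correction strategy)."""
    return [c - off for c, off in zip(commands, observed_offsets)]

planned = [0.0, 10.0, 20.0, 30.0]      # target scan positions (e.g. in um)
offsets = [0.15, -0.12, 0.14, -0.10]   # aiming errors seen by the camera

open_loop = [p + off for p, off in zip(planned, offsets)]  # uncorrected hits
closed_loop = apply_correction(open_loop, offsets)         # corrected hits

open_rms = rms_error(planned, open_loop)
closed_rms = rms_error(planned, closed_loop)
```

In this idealized sketch the correction cancels the offsets exactly; in practice residual tracking noise leaves the ~30 μm mean errors the paper reports.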
Motion and Emotional Behavior Design for Pet Robot Dog
NASA Astrophysics Data System (ADS)
Cheng, Chi-Tai; Yang, Yu-Ting; Miao, Shih-Heng; Wong, Ching-Chang
A pet robot dog with two ears, one mouth, one facial expression plane, and one vision system is designed and implemented so that it can perform some emotional behaviors. Three processors (an Intel® Pentium® M 1.0 GHz, an 8-bit 8051 processor, and an embedded NIOS soft-core processor) are used to control the robot. One camera, one power detector, four touch sensors, and one temperature detector are used to obtain information about the environment. The designed robot, with 20 DOF (degrees of freedom), is able to accomplish the walking motion. A behavior system is built on the implemented pet robot so that it is able to choose a suitable behavior for different environmental situations. From the practical test, we can see that the implemented pet robot dog can engage in some emotional interaction with humans.
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Robotic Exploration of Moon and Mars: Thematic Education Approach
NASA Technical Reports Server (NTRS)
Allen, J S.; Tobola, K. W.; Lowes, L. L.; Betrue, R.
2008-01-01
Safe, sustained, affordable human and robotic exploration of the Moon, Mars, and beyond is a major NASA goal. Robotic exploration of the Moon and Mars will help pave the way for an expanded human presence in our solar system. To help share the robotic exploration role in the Vision for Space Exploration with classrooms, informal education groups, and the public, our team researched and consolidated the thematic story components and associated education activities into a useful education materials set for educators. We developed the set of materials for a workshop combining NASA Science Mission Directorate and Exploration Systems Mission Directorate engineering, science, and technology to train informal educators on education activities that support the robotic exploration themes. A major focus is on the use of robotic spacecraft and instruments to explore and prepare for the human exploration of the Moon and Mars.
Contextualising and Analysing Planetary Rover Image Products through the Web-Based PRoGIS
NASA Astrophysics Data System (ADS)
Morley, Jeremy; Sprinks, James; Muller, Jan-Peter; Tao, Yu; Paar, Gerhard; Huber, Ben; Bauer, Arnold; Willner, Konrad; Traxler, Christoph; Garov, Andrey; Karachevtseva, Irina
2014-05-01
The international planetary science community has launched, landed and operated dozens of human and robotic missions to the planets and the Moon. They have collected various surface imagery that has only been partially utilized for further scientific purposes. The FP7 project PRoViDE (Planetary Robotics Vision Data Exploitation) is assembling a major portion of the imaging data gathered so far from planetary surface missions into a unique database, bringing them into a spatial context and providing access to a complete set of 3D vision products. Processing is complemented by a multi-resolution visualization engine that combines various levels of detail for a seamless and immersive real-time access to dynamically rendered 3D scenes. PRoViDE aims to (1) complete relevant 3D vision processing of planetary surface missions, such as Surveyor, Viking, Pathfinder, MER, MSL, Phoenix, Huygens, and Lunar ground-level imagery from Apollo, Russian Lunokhod and selected Luna missions, (2) provide highest resolution & accuracy remote sensing (orbital) vision data processing results for these sites to embed the robotic imagery and its products into spatial planetary context, (3) collect 3D vision processing and remote sensing products within a single coherent spatial data base, (4) realise seamless fusion between orbital and ground vision data, (5) demonstrate the potential of planetary surface vision data by maximising image quality visualisation in a 3D publishing platform, (6) collect and formulate use cases for novel scientific application scenarios exploiting the newly introduced spatial relationships and presentation, (7) demonstrate the concepts for MSL, (8) realise on-line dissemination of key data & its presentation by a web-based GIS and rendering tool named PRoGIS (Planetary Robotics GIS).
PRoGIS is designed to give access to rover image archives in geographical context, using projected image view cones, obtained from existing meta-data and updated according to processing results, as a means to interact with and explore the archive. However PRoGIS is more than a source data explorer. It is linked to the PRoVIP (Planetary Robotics Vision Image Processing) system which includes photogrammetric processing tools to extract terrain models, compose panoramas, and explore and exploit multi-view stereo (where features on the surface have been imaged from different rover stops). We have started with the Opportunity MER rover as our test mission but the system is being designed to be multi-mission, taking advantage in particular of UCL MSSL's PDS mirror, and we intend to at least deal with both MER rovers and MSL. For the period of PRoViDE until the end of 2015 the further intent is to handle lunar and other Martian rover & descent camera data. The presentation discusses the challenges of integrating rover and orbital derived data into a single geographical framework, especially reconstructing view cones; our human-computer interaction intentions in creating an interface to the rover data that is accessible to planetary scientists; how we handle multi-mission data in the database; and a demonstration of the resulting system & its processing capabilities. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.
Autonomous stair-climbing with miniature jumping robots.
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
2005-04-01
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.
NASA Technical Reports Server (NTRS)
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
Event-driven visual attention for the humanoid robot iCub
Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara
2013-01-01
Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend. PMID:24379753
An automated miniature robotic vehicle inspection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobie, Gordon; Summan, Rahul; MacLeod, Charles
2014-02-18
A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure, overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.
Social Robotics in Therapy of Apraxia of Speech
Alonso-Martín, Fernando
2018-01-01
Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia of speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot performs the different steps of the therapy autonomously using multimodal interaction. PMID:29713440
Vision-guided micromanipulation system for biomedical application
NASA Astrophysics Data System (ADS)
Shim, Jae-Hong; Cho, Sung-Yong; Cha, Dong-Hyuk
2004-10-01
In recent years, various studies on biomedical applications of robots have been carried out. In particular, robotic manipulation of biological cells has been studied by many researchers. Most biological cells are spherical in shape. Commercial biological manipulation systems have utilized only the 2-dimensional images available through optical microscopes. Moreover, manipulation of biological cells depends mainly on the subjective viewpoint of the operator. For these reasons, problems arise such as slippage, destruction of the cell membrane, and damage to the pipette tip. In order to overcome these problems, we have proposed a vision-guided biological cell manipulation system. The newly proposed manipulation system makes use of vision and graphic techniques. Through the proposed procedures, an operator can inject the biological cell scientifically and objectively. The proposed manipulation system can also measure the contact force that occurs during injection of a biological cell, and the measured force can be transmitted to the operator by the proposed haptic device. Consequently, the proposed manipulation system can safely handle biological cells without any damage. This paper presents an introduction to our vision-guided manipulation techniques and the concept of contact force sensing. Through a series of experiments, the proposed vision-guided manipulation system shows the possibility of application to precision manipulation of biological cells, such as DNA injection.
Manning, Todd G; Perera, Marlon; Christidis, Daniel; Kinnear, Ned; McGrath, Shannon; O'Beirne, Richard; Zotov, Paul; Bolton, Damien; Lawrentschuk, Nathan
2017-04-01
Maintenance of optimal vision during minimally invasive surgery is crucial to maintaining operative awareness, efficiency, and safety. Hampered vision is commonly caused by laparoscopic lens fogging (LLF), which has prompted the development of various antifogging fluids and warming devices. However, limited comparative evidence exists in the contemporary literature. Despite technologic advancements, there remains no consensus as to superior methods to prevent LLF or restore visual acuity once LLF has occurred. We performed a review of the literature to present the current body of evidence supporting the use of numerous techniques. A standardized Preferred Reporting Items for Systematic Reviews and Meta-Analyses review was performed, and PubMed, Embase, Web of Science, and Google Scholar were searched. Articles pertaining to mechanisms and prevention of LLF were reviewed. We applied no limit to year of publication or publication type, and all articles encountered were included in the final review. Limited original research and heterogeneous outcome measures precluded meta-analytical assessment. Vision loss has a multitude of causes, and although scientific theory can be applied to in vivo environments, no authors have completely characterized this complex problem. No method to prevent or correct LLF was identified as superior to others, and comparative evidence is minimal. Robotic LLF was poorly investigated and, aside from a single analysis, has not been directly compared to standard laparoscopic fogging in any capacity. Obscured vision during surgery is hazardous and typically caused by LLF. The etiology of LLF, despite application of scientific theory, is yet to be definitively proven in the in vivo environment. Common methods of preventing LLF or restoring vision lost to LLF have little evidence-based data to support their use. A multiarm comparative in vivo analysis is required to formally assess these commonly used techniques in both standard and robotic laparoscopes.
Systems Analysis of Remote Piloting/Robotics Technology Applicable to Assault Rafts.
1982-01-01
Driver position: the driver is the only member of the crew seated under armor, in the front left of the hull, with 3 M17 periscopes and a single-piece hatch cover. Vision data summary: D - 3P; H - 82.5° to 165°; V - 110 to 220; SC - is not under armor and therefore has freedom of vision.
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
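The distance images mentioned above ultimately rest on standard stereo triangulation, Z = fB/d, once features have been matched between the two CCD images; a minimal sketch with illustrative (not the paper's) camera parameters:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo triangulation: Z = f * B / d.

    disparity_px: horizontal pixel offset of a matched feature between
    the left and right images; must be positive for a valid match.
    focal_px: focal length expressed in pixels.
    baseline_m: distance between the two camera centers, in meters.
    """
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Illustrative parameters: 800 px focal length, 0.12 m camera baseline.
z = depth_from_disparity(disparity_px=16.0, focal_px=800.0, baseline_m=0.12)
# z = 6.0 m; closer truss members produce larger disparities.
```

The structural matching scheme in the paper serves to make this final step cheap, by pruning candidate region correspondences before dense low-level matching.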
Humanlike robots: the upcoming revolution in robotics
NASA Astrophysics Data System (ADS)
Bar-Cohen, Yoseph
2009-08-01
Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing as well as artificial muscles, also known as electroactive polymers (EAP). Robots that don't have human shape, such as the vacuum cleaner Roomba and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns, and they need to be addressed as the technology advances. These include the need to prevent accidents, deliberate harm, or their use in crime. In this paper the state-of-the-art of the ultimate goal of biomimetics, the development of humanlike robots, and its potentials and challenges are reviewed.
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
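A toy illustration of the evolutionary approach summarized above, under a deliberately simplified setting: a one-dimensional "robot" steered toward a target by a single-neuron controller whose two weights are evolved with truncation selection and Gaussian mutation. None of this mirrors the authors' actual recurrent network or genetic-algorithm details; it only shows the evolve-evaluate loop.

```python
import random

# Fixed evaluation starts keep fitness deterministic.
STARTS = (-1.0, -0.5, 0.5, 1.0)

def fitness(weights):
    """Reward controllers that steer the robot from x toward the
    target at 0 via motor command v = w0 * sensor + w1, sensor = -x."""
    err = 0.0
    for x in STARTS:
        for _ in range(30):
            v = weights[0] * (-x) + weights[1]
            x += 0.1 * max(-1.0, min(1.0, v))  # clipped motor step
        err += abs(x)                          # distance left over
    return -err

def evolve(pop_size=30, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]           # truncation selection
        pop = elite + [[g + rng.gauss(0, 0.2) for g in rng.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
# The evolved controller should beat the do-nothing controller:
assert fitness(best) > fitness([0.0, 0.0])
```

The paper's point survives even in this caricature: nothing in the fitness function says "use the sensor", yet selection discovers controllers that exploit it.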
A Face Attention Technique for a Robot Able to Interpret Facial Expressions
NASA Astrophysics Data System (ADS)
Simplício, Carlos; Prado, José; Dias, Jorge
Automatic recognition of facial expressions using vision is an important step towards human-robot interaction. This paper proposes a face focus-of-attention technique and a facial-expression classifier (a Dynamic Bayesian Network) for incorporation in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique exploits the symmetry of human faces; using its output, the autonomous agent keeps the human face targeted frontally at all times. To accomplish this, the platform performs an arc centred on the human while the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, both positive and negative evidence is used to recognize facial expressions.
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor)
1990-01-01
Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.
Humanlike Robots - The Upcoming Revolution in Robotics
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph
2009-01-01
Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, long a staple of science fiction, are increasingly becoming an engineering reality thanks to the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots without human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. Unlike other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as it advances, including the need to prevent accidents, deliberate harm, or use in crime. This paper reviews the state of the art of the ultimate goal of biomimetics, the development of humanlike robots, together with its potentials and challenges.
Vision requirements for Space Station applications
NASA Technical Reports Server (NTRS)
Crouse, K. R.
1985-01-01
Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnosis of damage and repair requirements by autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities are television and IR sensors, advanced pattern recognition programs fed by data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with on-board electronic libraries of images.
Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots
2010-04-01
Issues include competition for the attention of the wearer (attentional tunneling), interference with night vision devices, and occlusion.
Center of Excellence in Aerospace Manufacturing Automation
1983-11-01
affiliated industrial companies, who will provide financial support and ongoing guidance to the Institute. SIMA will encompass the design and management ... tactile sensing, intelligent systems for robot task management, and computer vision for robot management. We are addressing the question of how to provide ... than anything today's control systems could stably manage. To do this we have begun to develop a sequential family of new manipulators.
Trauma Pod: a semi-automated telerobotic surgical system.
Garcia, Pablo; Rosen, Jacob; Kapoor, Chetan; Noakes, Mark; Elbert, Greg; Treat, Michael; Ganous, Tim; Hanson, Matt; Manak, Joe; Hasser, Chris; Rohler, David; Satava, Richard
2009-06-01
The Trauma Pod (TP) vision is to develop a rapidly deployable robotic system to perform critical acute stabilization and/or surgical procedures, autonomously or in a teleoperative mode, on wounded soldiers in the battlefield who might otherwise die before treatment in a combat hospital could be provided. In the first phase of a project pursuing this vision, a robotic TP system was developed and its capability demonstrated by performing selected surgical procedures on a patient phantom. The system demonstrates the feasibility of performing acute stabilization procedures with the patient being the only human in the surgical cell. The teleoperated surgical robot is supported by autonomous robotic arms and subsystems that carry out scrub-nurse and circulating-nurse functions. Tool change and supply delivery are performed automatically and at least as fast as performed manually by nurses. Tracking and counting of the supplies is performed automatically. The TP system also includes a tomographic X-ray facility for patient diagnosis and two-dimensional (2D) fluoroscopic data to support interventions. The vast amount of clinical protocols generated in the TP system is recorded automatically. Automation and teleoperation capabilities form the basis for a more comprehensive acute diagnostic and management platform that will provide life-saving care in environments where surgical personnel are not present.
Dealing with robot-assisted surgery for rectal cancer: Current status and perspectives
Biffi, Roberto; Luca, Fabrizio; Bianchi, Paolo Pietro; Cenciarelli, Sabina; Petz, Wanda; Monsellato, Igor; Valvo, Manuela; Cossu, Maria Laura; Ghezzi, Tiago Leal; Shmaissany, Kassem
2016-01-01
The laparoscopic approach for treatment of rectal cancer has been proven feasible and oncologically safe, and is able to offer better short-term outcomes than traditional open procedures, mainly in terms of reduced length of hospital stay and time to return to working activity. In spite of this, the laparoscopic technique is usually practised only in high-volume experienced centres, mainly because it requires a prolonged and demanding learning curve. It has been estimated that over 50 operations are required for an experienced colorectal surgeon to achieve proficiency with this technique. Robotic surgery enables the surgeon to perform minimally invasive operations with better vision and more intuitive and precise control of the operating instruments, thus promising to overcome some of the technical difficulties associated with standard laparoscopy. It has high-definition three-dimensional vision, it translates the surgeon’s hand movements into precise movements of the instruments inside the patient, the camera is held and moved by the first surgeon, and a fourth robotic arm is available as a fixed retractor. The aim of this review is to summarise the current data on clinical and oncologic outcomes of robot-assisted surgery in rectal cancer, focusing on short- and long-term results, and providing original data from the authors’ centre. PMID:26811606
Improving Cognitive Skills of the Industrial Robot
NASA Astrophysics Data System (ADS)
Bezák, Pavol
2015-08-01
At present, plenty of industrial robots are programmed to perform the same repetitive task all the time. Industrial robots doing this kind of job cannot judge whether an action is correct, effective or good. Object detection, manipulation and grasping are challenging due to hand and object modeling uncertainties, unknown contact types and object stiffness properties. In this paper, a proposal for an intelligent humanoid-hand object detection and grasping model is presented, assuming that the object properties are known. The control is simulated in MATLAB Simulink/SimMechanics, the Neural Network Toolbox and the Computer Vision System Toolbox.
Intelligent robot trends for factory automation
NASA Astrophysics Data System (ADS)
Hall, Ernest L.
1997-09-01
An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent economic and technical trends. The robotics industry now has a billion-dollar market in the U.S. and is growing. Feasibility studies are presented which also show unaudited healthy rates of return for a variety of robotic applications. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. However, the road from inspiration to successful application is still long and difficult, often taking decades to achieve a new product. More cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit both industry and society.
Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.; Hamel, W.R.; Weisbin, C.R.
1988-01-01
The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) in the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 Vax 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.
Smart mobile robot system for rubbish collection
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; Sien Siang, Tan
2018-03-01
This paper records the research and procedures involved in developing a smart mobile robot with a detection system for collecting rubbish. The objective is to design a mobile robot that can detect and recognize medium-size rubbish such as drink cans, estimate the position of the rubbish relative to the robot, and approach the rubbish based on that estimated position. The paper explains the types of image processing, detection and recognition methods, and image filters used. The project implements the RGB subtraction method as the primary detection step, together with an algorithm for distance measurement based on the image plane. The project is limited to a computer webcam as the sensor; furthermore, the robot can only approach the nearest rubbish within the camera's field of view, and only rubbish whose body contains distinct RGB colour components.
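A rough sketch of the kind of colour-subtraction detection and image-plane distance estimation the abstract describes. The channel threshold, focal length and object size below are invented for illustration; the paper's actual parameters are not given.

```python
import numpy as np

def detect_red_object(rgb, thresh=60):
    """Crude 'RGB subtraction': flag pixels whose red channel
    dominates both green and blue by more than `thresh`, and
    return the centroid of the flagged region (col, row)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r - g > thresh) & (r - b > thresh)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (int(xs.mean()), int(ys.mean()))

def distance_from_height(real_h_m, focal_px, pixel_h):
    """Pinhole projection: distance D = H * f / h, where H is the
    object's real height and h its height on the image plane."""
    return real_h_m * focal_px / pixel_h

# Synthetic frame: grey background with one red 'can'
frame = np.full((120, 160, 3), 90, dtype=np.uint8)
frame[40:80, 70:90] = (200, 30, 30)
cx, cy = detect_red_object(frame)
d = distance_from_height(0.12, 600, 40.0)  # 1.8 m
```

A real system would add the median-style filtering and multiple-object handling the paper mentions; the sketch shows only the core colour cue and geometry.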
A Demonstrator Intelligent Scheduler For Sensor-Based Robots
NASA Astrophysics Data System (ADS)
Perrotta, Gabriella; Allen, Charles R.; Shepherd, Andrew J.
1987-10-01
The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile-sensing gripper system is described. The on-line module is supported by two off-line software modules which provide a procedural assembly-constraints language in which the assembly task is defined. This input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, imposing a low programming overhead on the designer, who must describe the assembly sequence. Components are selected for pick-and-place robot movement based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is multi-path scheduling as described by Fox. The system permits robot assembly in a less constrained parts-presentation environment, making full use of the sensory detail available on the robot.
System-level challenges in pressure-operated soft robotics
NASA Astrophysics Data System (ADS)
Onal, Cagdas D.
2016-05-01
The last decade witnessed the revival of fluidic soft actuation. As pressure-operated soft robotics becomes more popular, with promising recent results, system integration remains an outstanding challenge. Inspired greatly by biology, we envision future robotic systems that embrace mechanical compliance, with bodies composed of soft and hard components as well as electronic and sensing sub-systems, such that robot maintenance starts to resemble surgery. In this vision, portable energy sources and driving infrastructure play a key role in offering autonomous many-DoF soft actuation. On the other hand, while offering many advantages in safety and adaptability when interacting with unstructured environments, objects, and human bodies, mechanical compliance also violates many inherent assumptions of traditional rigid-body robotics. Thus, a complete soft robotic system requires new approaches to proprioception that provide rich sensory information while remaining flexible, and to motion control under significant time delay. This paper discusses our proposed solutions for each of these system-level challenges in soft robotics research.
Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits
NASA Astrophysics Data System (ADS)
Snider, Ross K.; Arathorn, David W.
2006-05-01
A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.
Higher-order neural network software for distortion invariant object recognition
NASA Technical Reports Server (NTRS)
Reid, Max B.; Spirkovska, Lilly
1991-01-01
The state of the art in pattern recognition for applications such as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
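The distortion invariance of third-order networks comes from wiring each weight to a *triple* of inputs, so that the extracted features depend only on triangle geometry. A small sketch of that idea, assuming a simplified feature (a histogram of interior angles over all point triples, which is unchanged by translation, rotation and scaling); this is an illustration of the principle, not the paper's network.

```python
import math
from itertools import combinations

def triangle_signature(points, bins=12):
    """Histogram of interior angles over all point triples.
    Interior angles are invariant to translation, rotation and
    scale -- the property a third-order network exploits by tying
    each weight to a triple of input pixels."""
    hist = [0] * bins
    for a, b, c in combinations(points, 3):
        for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
            v1 = (q[0] - p[0], q[1] - p[1])
            v2 = (r[0] - p[0], r[1] - p[1])
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            n = math.hypot(*v1) * math.hypot(*v2)
            ang = math.acos(max(-1.0, min(1.0, dot / n)))
            hist[min(bins - 1, int(bins * ang / math.pi))] += 1
    return hist

pts = [(0, 0), (4, 0), (1, 3), (2, 5)]
rotated = [(-y, x) for x, y in pts]        # 90-degree rotation
scaled = [(2 * x, 2 * y) for x, y in pts]  # 2x scale
assert triangle_signature(pts) == triangle_signature(rotated)
assert triangle_signature(pts) == triangle_signature(scaled)
```

Because the invariance is structural, no rotated or scaled views need to appear in the training set, which is exactly the training-speed argument the abstract makes.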
ODIS the under-vehicle inspection robot: development status update
NASA Astrophysics Data System (ADS)
Freiburger, Lonnie A.; Smuda, William; Karlsen, Robert E.; Lakshmanan, Sridhar; Ma, Bing
2003-09-01
Unmanned ground vehicle (UGV) technology can be used in a number of ways to assist in counter-terrorism activities. Robots can be employed for a host of terrorism deterrence and detection applications. As reported in last year's Aerosense conference, the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) and Utah State University (USU) have developed a tele-operated robot called ODIS (Omnidirectional Inspection System) that is particularly effective in performing under-vehicle inspections at security checkpoints. ODIS' continuing development for this task is heavily influenced by feedback received from soldiers and civilian law enforcement personnel using ODIS-prototypes in an operational environment. Our goal is to convince civilian law enforcement and military police to replace the traditional "mirror on a stick" system of looking under cars for bombs and contraband with ODIS. This paper reports our efforts in the past one year in terms of optimizing ODIS for the visual inspection task. Of particular concern is the design of the vision system. This paper documents details on the various issues relating to ODIS' vision system - sensor, lighting, image processing, and display.
Robotics in Cardiac Surgery: Past, Present, and Future
Bush, Bryan; Nifong, L. Wiley; Chitwood, W. Randolph
2013-01-01
Robotic cardiac operations evolved from minimally invasive operations and offer similar theoretical benefits, including less pain, shorter length of stay, improved cosmesis, and quicker return to preoperative level of functional activity. The additional benefits offered by robotic surgical systems include improved dexterity and degrees of freedom, tremor-free movements, ambidexterity, and the avoidance of the fulcrum effect that is intrinsic when using long-shaft endoscopic instruments. Also, optics and operative visualization are vastly improved compared with direct vision and traditional videoscopes. Robotic systems have been utilized successfully to perform complex mitral valve repairs, coronary revascularization, atrial fibrillation ablation, intracardiac tumor resections, atrial septal defect closures, and left ventricular lead implantation. The history and evolution of these procedures, as well as the present status and future directions of robotic cardiac surgery, are presented in this review. PMID:23908867
Parallel Algorithms for Computer Vision
1990-04-01
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus
NASA Astrophysics Data System (ADS)
Baylou, P.; Amor, B. El Hadj; Bousseau, G.
1983-10-01
After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the images of mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon was performed in order to determine the modifications to object shapes, thresholding levels and decision parameters as a function of the robot speed.
Effects of Imperfect Automation on Operator’s Supervisory Control of Multiple Robots
2011-08-01
Effects of realistic force feedback in a robotic assisted minimally invasive surgery system.
Moradi Dalvand, Mohsen; Shirinzadeh, Bijan; Nahavandi, Saeid; Smith, Julian
2014-06-01
Robotic assisted minimally invasive surgery systems not only have the advantages of traditional laparoscopic procedures but also restore the surgeon's hand-eye coordination and improve the surgeon's precision by filtering hand tremors. Unfortunately, these benefits have come at the expense of the surgeon's ability to feel. Several research efforts have already attempted to restore this feature and study the effects of force feedback in robotic systems. The proposed methods and studies have some shortcomings. The main focus of this research is to overcome some of these limitations and to study the effects of force feedback in palpation in a more realistic fashion. A parallel robot assisted minimally invasive surgery system (PRAMiSS) with force feedback capabilities was employed to study the effects of realistic force feedback in palpation of artificial tissue samples. PRAMiSS is capable of actually measuring the tip/tissue interaction forces directly from the surgery site. Four sets of experiments using only vision feedback, only force feedback, simultaneous force and vision feedback and direct manipulation were conducted to evaluate the role of sensory feedback from sideways tip/tissue interaction forces with a scale factor of 100% in characterising tissues of varying stiffness. Twenty human subjects were involved in the experiments for at least 1440 trials. Friedman and Wilcoxon signed-rank tests were employed to statistically analyse the experimental results. Providing realistic force feedback in robotic assisted surgery systems improves the quality of tissue characterization procedures. Force feedback capability also increases the certainty of characterizing soft tissues compared with direct palpation using the lateral sides of index fingers. The force feedback capability can improve the quality of palpation and characterization of soft tissues of varying stiffness by restoring sense of touch in robotic assisted minimally invasive surgery operations.
Intelligent robot trends for 1998
NASA Astrophysics Data System (ADS)
Hall, Ernest L.
1998-10-01
An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent technical and economic trends. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has a 1.1 billion-dollar market in the U.S. and is growing. Feasibility study results are presented which also show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society.
Self-organization via active exploration in robotic applications. Phase 2: Hybrid hardware prototype
NASA Technical Reports Server (NTRS)
Oegmen, Haluk
1993-01-01
In many environments human-like intelligent behavior is required from robots to assist and/or replace human operators. The purpose of these robots is to reduce human time and effort in various tasks. Thus the robot should be robust and as autonomous as possible in order to eliminate or to keep to a strict minimum its maintenance and external control. Such requirements lead to the following properties: fault tolerance, self organization, and intelligence. A good insight into implementing these properties in a robot can be gained by considering human behavior. In the first phase of this project, a neural network architecture was developed that captures some fundamental aspects of human categorization, habit, novelty, and reinforcement behavior. The model, called FRONTAL, is a 'cognitive unit' regulating the exploratory behavior of the robot. In the second phase of the project, FRONTAL was interfaced with an off-the-shelf robotic arm and a real-time vision system. The components of this robotic system, a review of FRONTAL, and simulation studies are presented in this report.
Current state of the art of vision based SLAM
NASA Astrophysics Data System (ADS)
Muhammad, Naveed; Fofi, David; Ainouz, Samia
2009-02-01
The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM and many different approaches exist in order to solve these issues. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM which include point features and line/edge features, (iii) initialisation of landmarks which can either be delayed or undelayed, (iv) SLAM techniques used which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM for synthetic data. Results show the technique works successfully in the presence of considerable sensor noise. We believe that the state of the art presented in the paper can serve as a basis for future research in the area of vision based SLAM, permitting further research in the area to be carried out in an efficient and application specific way.
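As a minimal illustration of the Kalman-filter machinery behind such SLAM systems, the following linear-Gaussian sketch jointly estimates one robot coordinate and one landmark position; all noise levels and the trajectory are invented, and a real EKF SLAM system would of course handle nonlinear models and many landmarks.

```python
import numpy as np

# State: [robot position, landmark position]. The robot dead-reckons
# forward and fuses an absolute fix plus a range to the landmark.
H = np.array([[1.0, 0.0],      # z1 = x        (absolute fix)
              [-1.0, 1.0]])    # z2 = m - x    (range to landmark)
Q = np.diag([0.05, 0.0])       # process noise: only the robot moves
R = np.diag([0.01, 0.01])      # measurement noise (std 0.1 each)

def kf_step(mu, P, u, z):
    mu = mu + np.array([u, 0.0])      # predict: apply odometry u
    P = P + Q                         # (state transition is identity)
    y = z - H @ mu                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    mu = mu + K @ y
    P = (np.eye(2) - K @ H) @ P
    return mu, P

mu = np.array([0.0, 5.0])             # landmark guess starts 1 m off
P = np.diag([0.01, 4.0])              # ... and very uncertain
true_x, true_m = 0.0, 6.0
rng = np.random.default_rng(0)
for _ in range(50):
    true_x += 0.2
    z = np.array([true_x, true_m - true_x]) + rng.normal(0, 0.1, 2)
    mu, P = kf_step(mu, P, 0.2, z)
```

After the run, `mu[1]` has converged close to the true landmark at 6.0 and `P[1, 1]` has shrunk from 4.0 to a small value; the cross terms of `P` carry the robot-landmark correlation that makes SLAM work.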
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Cherry recognition in natural environment based on the vision of picking robot
NASA Astrophysics Data System (ADS)
Zhang, Qirong; Chen, Shanxiong; Yu, Tingzhong; Wang, Yan
2017-04-01
To realize automatic recognition of cherries in the natural environment, this paper designs a recognition method for a picking-robot vision system. The first step pre-processes the cherry image with median filtering. The second step identifies the colour of the cherry through the 0.9R-G colour difference formula and then applies the Otsu algorithm for threshold segmentation. The third step removes noise using an area threshold. The fourth step removes holes in the cherry image by morphological closing and opening operations. The fifth step obtains the centroid and contour of each cherry using the minimum enclosing rectangle and the Hough transform. Through this recognition process, we can successfully identify 96% of cherries that are free of occlusion and adhesion.
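The colour-difference and Otsu steps of this pipeline can be sketched in NumPy. This is a simplified illustration, not the authors' code; `segment_cherries` and the synthetic test image are assumptions for demonstration, and the Otsu threshold is computed from a histogram of the 0.9R-G values.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def segment_cherries(rgb):
    """Segment red fruit from foliage via the 0.9R - G colour difference,
    then binarize with an Otsu threshold on the difference image."""
    diff = 0.9 * rgb[..., 0].astype(float) - rgb[..., 1].astype(float)
    t = otsu_threshold(diff.ravel())
    return diff > t
```

The 0.9R-G difference is large for red cherry pixels and negative for green foliage, so the two classes separate cleanly and Otsu's criterion places the threshold between them.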
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-03-01
Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimating the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need for image correlations. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity, but also surprisingly high performance.
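The TTI principle behind these cases can be stated compactly: for a constant closing speed, the time to impact equals the apparent size of the object divided by its rate of growth, TTI ≈ s / (ds/dt). A minimal sketch of that relation follows; it is illustrative only and is not the NSIP hardware algorithm, which avoids explicit size measurements.

```python
def time_to_impact(size_prev: float, size_now: float, dt: float) -> float:
    """Estimate time-to-impact from apparent-size growth: TTI ~ s / (ds/dt).
    Valid for a constant closing speed; sizes may be in any consistent unit."""
    growth = (size_now - size_prev) / dt
    if growth <= 0.0:
        return float("inf")   # object receding or static: no impact predicted
    return size_now / growth
```

Because the apparent size scales as 1/depth, the estimate needs no knowledge of the object's true size or the camera's focal length; the backward finite difference biases the estimate slightly upward.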
State-Estimation Algorithm Based on Computer Vision
NASA Technical Reports Server (NTRS)
Bayard, David; Brugarolas, Paul
2007-01-01
An algorithm, and software to implement it, are being developed as a means to estimate the state (that is, the position and velocity) of an autonomous vehicle relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.
NASA Strategic Roadmap Summary Report
NASA Technical Reports Server (NTRS)
Wilson, Scott; Bauer, Frank; Stetson, Doug; Robey, Judee; Smith, Eric P.; Capps, Rich; Gould, Dana; Tanner, Mike; Guerra, Lisa; Johnston, Gordon
2005-01-01
In response to the Vision, NASA commissioned strategic and capability roadmap teams to develop the pathways for turning the Vision into a reality. The strategic roadmaps were derived from the Vision for Space Exploration and the Aldrich Commission Report dated June 2004. NASA identified 12 strategic areas for roadmapping. The Agency added a thirteenth area on nuclear systems because the topic affects the entire program portfolio. To ensure long-term public visibility and engagement, NASA established a committee for each of the 13 areas. These committees - made up of prominent members of the scientific and aerospace industry communities and senior government personnel - worked under the Federal Advisory Committee Act. A committee was formed for each of the following program areas: 1) Robotic and Human Lunar Exploration; 2) Robotic and Human Exploration of Mars; 3) Solar System Exploration; 4) Search for Earth-Like Planets; 5) Exploration Transportation System; 6) International Space Station; 7) Space Shuttle; 8) Universe Exploration; 9) Earth Science and Applications from Space; 10) Sun-Solar System Connection; 11) Aeronautical Technologies; 12) Education; 13) Nuclear Systems. This document contains roadmap summaries for 10 of these 13 program areas; the International Space Station, Space Shuttle, and Education areas are excluded. The completed roadmaps from six committees (Robotic and Human Exploration of Mars; Solar System Exploration; Search for Earth-Like Planets; Universe Exploration; Earth Science and Applications from Space; Sun-Solar System Connection) are collected in a separate Strategic Roadmaps volume. This document also contains membership rosters and charters for all 13 committees.
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple-maneuvering-target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.
Hand-Eye Calibration of Robonaut
NASA Technical Reports Server (NTRS)
Nickels, Kevin; Huber, Eric
2004-01-01
NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high-risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve the hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and the 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle.
The scheme has been implemented on Robonaut Unit A and has been shown to reduce the mismatch between kinematically derived and visually derived positions from a mean of 13.75 cm, using the previous calibration, to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to reach for and grasp objects that it sees within its workspace more accurately. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench-position estimates.
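At the core of such hand-eye calibration is fitting a rigid transform between corresponding 3-D points, for example fixture positions predicted by the arm kinematics versus those measured by stereo vision. One standard least-squares solution is the SVD-based (Kabsch) method, sketched below as an illustration; this is not the Robonaut calibration code, and the function name is an assumption.

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    via the SVD (Kabsch) method. P, Q: (3, N) arrays of corresponding points."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Given at least three non-collinear fixture positions in both frames, this recovers the frame-to-frame transform exactly in the noise-free case and in the least-squares sense otherwise.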
A Detailed Evaluation of a Laser Triangulation Ranging System for Mobile Robots
1983-08-01
[Fragmentary text recovered from the report: the accuracy of the data obtained by the triangulation system depends on essentially three independent factors; topics covered include system accuracy factors, the detector "cone of vision" problem, and justification of laser triangulation. Since 1968, when the effort began under a NASA grant, the project has undergone many changes both in its design goals and in its implementation.]
Evaluation of novel technologies for the miniaturization of flash imaging lidar
NASA Astrophysics Data System (ADS)
Mitev, V.; Pollini, A.; Haesler, J.; Perenzoni, D.; Stoppa, D.; Kolleck, Christian; Chapuy, M.; Kervendal, E.; Pereira do Carmo, João.
2017-11-01
Planetary exploration constitutes one of the main components of European space activities. Missions to Mars, the Moon and asteroids are foreseen, where it is assumed that human missions will be preceded by robotic exploration flights. 3D vision is recognised as a key enabling technology for relative proximity navigation of spacecraft, and imaging lidar is one of the best candidates for such a 3D vision sensor.
Visual control of robots using range images.
Pomares, Jorge; Gil, Pablo; Torres, Fernando
2010-01-01
In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information about the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time for the range camera so that depth information is determined precisely.
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
Robotic thoracic surgery: technical considerations and learning curve for pulmonary resection.
Veronesi, Giulia
2014-05-01
Retrospective series indicate that robot-assisted approaches to lung cancer resection offer comparable radicality and safety to video-assisted thoracic surgery or open surgery. More intuitive movements, greater flexibility, and high-definition three-dimensional vision overcome limitations of video-assisted thoracic surgery and may encourage wider adoption of robotic surgery for lung cancer, particularly as more early stage cases are diagnosed by screening. High capital and running costs, limited instrument availability, and long operating times are important disadvantages. Entry of competitor companies should drive down costs. Studies are required to assess quality of life, morbidity, oncologic radicality, and cost effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.
Humans and robots: hand in grip.
Hubbard, G Scott
2005-01-01
As we move boldly forward into the 21st century, there has rarely been a more exciting time in which to contemplate the future of space exploration. The President of the United States has made a new and ambitious commitment to exploration of the solar system and beyond. Robotic partners will play a vital role in ensuring that the Vision is truly "sustainable and affordable". Relevant science and technology will be discussed with particular emphasis on expertise from NASA Ames Research Center, of which the author is Director. The likely evolution of the balance between human explorers and robotic explorers will be addressed. ©2005 Published by Elsevier Ltd.
A Haptic Guided Robotic System for Endoscope Positioning and Holding.
Cabuk, Burak; Ceylan, Savas; Anik, Ihsan; Tugasaygi, Mehtap; Kizir, Selcuk
2015-01-01
To determine the feasibility, advantages, and disadvantages of using a robot to hold and maneuver the endoscope in transnasal transsphenoidal surgery. The system used in this study was a Stewart-platform-based robotic system developed by the Kocaeli University Department of Mechatronics Engineering for positioning and holding an endoscope. After first being used on an artificial head model, the system was used on six fresh postmortem bodies provided by the Morgue Specialization Department of the Forensic Medicine Institute (Istanbul, Turkey). The setup required for the robotic system was easy; the registration procedure and setup of the robot take 15 minutes. Resistance was felt on the haptic arm in case of contact or friction with adjacent tissues. The adaptation process was shorter when the mouse was used to manipulate the endoscope. The endoscopic transsphenoidal approach was achieved with the robotic system, and the endoscope was guided to the sphenoid ostium with the help of the robotic arm. This robotic system can be used in endoscopic transsphenoidal surgery as an endoscope positioner and holder. The robot can change position easily with the help of an assistant, prevents tremor, and provides a better field of vision for work.
NASA Astrophysics Data System (ADS)
Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois
2010-01-01
While robots have been utilized for more than 30 years in classic industrial automation applications, service robots form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform, extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products located in the San Francisco Bay Area. The goal of the mobile manipulator is to support the off-shift staff in carrying out completely autonomous or guided, remote-controlled lab walkthroughs, which we implement utilizing a recent development of our computer vision group: OpenTL, an integrated framework for model-based visual tracking.
Digital-Electronic/Optical Apparatus Would Recognize Targets
NASA Technical Reports Server (NTRS)
Scholl, Marija S.
1994-01-01
Proposed automatic target-recognition apparatus consists mostly of digital-electronic/optical cross-correlator that processes infrared images of targets. Infrared images of unknown targets correlated quickly with images of known targets. Apparatus incorporates some features of correlator described in "Prototype Optical Correlator for Robotic Vision System" (NPO-18451), and some of correlator described in "Compact Optical Correlator" (NPO-18473). Useful in robotic system; to recognize and track infrared-emitting, moving objects as variously shaped hot workpieces on conveyor belt.
Center for Neural Engineering: applications of pulse-coupled neural networks
NASA Astrophysics Data System (ADS)
Malkani, Mohan; Bodruzzaman, Mohammad; Johnson, John L.; Davis, Joel
1999-03-01
The Pulse-Coupled Neural Network (PCNN) is an oscillatory neural network model in which cells form groups, and groups in turn form larger groups, based on the synchronicity of their oscillations; the output time series (the number of cells that fire at each input presentation) is called an 'icon'. Recent work by Johnson and others demonstrated the functional capabilities of networks containing such elements for invariant feature extraction using intensity maps. The PCNN thus presents itself as a biologically plausible model with solid functional potential. This paper presents a summary of several projects, and their results, in which we successfully applied the PCNN. In the first project, the PCNN was applied to object recognition and classification through a robotic vision system. The features (icons) generated by the PCNN were fed into a feedforward neural network for classification. In the second project, we developed techniques for sensory data fusion. The PCNN algorithm was implemented and tested on a B14 mobile robot. PCNN-based features were extracted from images taken by the robot vision system and used in conjunction with a map generated by fusing sonar and wheel-encoder data for navigation of the mobile robot. In the third project, we applied the PCNN to speaker recognition. Spectrogram images of speech signals are fed into the PCNN to produce invariant feature icons, which are then fed into a feedforward neural network for speaker identification.
Intelligence for Human-Assistant Planetary Surface Robots
NASA Technical Reports Server (NTRS)
Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.
2006-01-01
The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.
Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V
2014-09-01
Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Chen, Alexander Y.
1990-01-01
The Scientific Research Associates Advanced Robotic System (SRAARS) is an intelligent robotic system with autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It controls sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras, and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. By consecutive verifying and planning procedures, hypothesis-and-test routines, and a learning-by-analogy paradigm, the system autonomously builds up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis and object manipulation. The intelligence of SRAARS presents a valuable technical advantage for implementing robotic systems for space exploration and space station operations.
Robotic Lunar Rover Technologies and SEI Supporting Technologies at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Klarer, Paul R.
1992-01-01
Existing robotic rover technologies at Sandia National Laboratories (SNL) can be applied toward the realization of a robotic lunar rover mission in the near term. Recent activities at the SNL-RVR have demonstrated the utility of existing rover technologies for performing remote field geology tasks similar to those envisioned on a robotic lunar rover mission. Specific technologies demonstrated include low-data-rate teleoperation, multivehicle control, remote site and sample inspection, standard bandwidth stereo vision, and autonomous path following based on both internal dead reckoning and an external position location update system. These activities serve to support the use of robotic rovers for an early return to the lunar surface by demonstrating capabilities that are attainable with off-the-shelf technology and existing control techniques. The breadth of technical activities at SNL provides many supporting technology areas for robotic rover development. These range from core competency areas and microsensor fabrication facilities, to actual space qualification of flight components that are designed and fabricated in-house.
Application of ultrasonic sensor for measuring distances in robotics
NASA Astrophysics Data System (ADS)
Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.
2018-05-01
Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects that is an alternative to technical vision. Humanoid robots, like robots of other types, are first of all equipped with sensory systems similar to the human senses. However, this approach is not sufficient: all possible types and kinds of sensors should be used, including those similar to the senses of other animals (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in the wild. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder driven by the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison, along with a subroutine for working with the sensor.
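The range computation behind such a sensor is simple: the HC-SR04 reports the round-trip echo time as a pulse width, and distance follows from the speed of sound, which varies with air temperature. A generic sketch of the conversion follows; it is an illustration of the physics, not the STM32 subroutine presented in the paper.

```python
def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air (m/s) at temperature temp_c (deg C)."""
    return 331.3 + 0.606 * temp_c

def echo_to_distance(echo_us: float, temp_c: float = 20.0) -> float:
    """Convert an echo pulse width (microseconds) to distance (meters).
    The pulse spans the round trip to the obstacle, so divide by two."""
    return echo_us * 1e-6 * speed_of_sound(temp_c) / 2.0
```

At 20 degrees C an echo of about 5830 microseconds corresponds to one meter; ignoring the temperature term introduces roughly 0.18% range error per degree.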
[Usefulness of the Da Vinci robot in urologic surgery].
Iselin, C; Fateri, F; Caviezel, A; Schwartz, J; Hauser, J
2007-12-05
A telemanipulator for laparoscopic instruments is now available in the world of surgical robotics. This device has three distinct advantages over traditional laparoscopic surgery: it improves precision because of the many degrees of freedom of its instruments, it offers 3-D vision, and it provides better ergonomics for the surgeon. These characteristics are most useful for procedures that require delicate suturing in a confined operative field that may be difficult to reach. The Da Vinci robot has found its place in two domains of laparoscopic urologic surgery: radical prostatectomy and ureteral surgery. The cost of the robot, as well as the price of its maintenance and instruments, is high. This increases healthcare costs in comparison to open surgery, though not dramatically, since patients stay in hospital for less time and go back to work earlier.
Li, Luyang; Liu, Yun-Hui; Jiang, Tianjiao; Wang, Kai; Fang, Mu
2018-02-01
Despite tremendous efforts over the years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem. The major reason is the difficulty of localizing the robot using its onboard sensors only. In this paper, a newly designed adaptive trajectory TC method is proposed for an NMR without measurements of its position, orientation, or velocity. The controller is designed on the basis of a novel algorithm that estimates the position and velocity of the robot online from the visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm causes the TC errors to converge asymptotically to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.
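The nonholonomic kinematics that make this tracking problem hard can be written as x' = v cos(theta), y' = v sin(theta), theta' = omega: the robot cannot move sideways, so position errors must be corrected through heading. A minimal Euler-integration sketch of that model follows; it is illustrative only and is not the paper's adaptive controller.

```python
import numpy as np

def step_unicycle(state, v, omega, dt):
    """One Euler step of the nonholonomic unicycle model.
    state = (x, y, theta); v is forward speed, omega is turn rate."""
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + omega * dt])
```

Simulating this model under a candidate control law is the usual first validation step before hardware experiments such as those reported in the paper.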
NASA Technical Reports Server (NTRS)
Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.
2012-01-01
A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package "Argon" is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
Machine vision 1992-1996: technology program to promote research and its utilization in industry
NASA Astrophysics Data System (ADS)
Soini, Antti J.
1994-10-01
Machine vision technology has attracted strong interest in Finnish research organizations, resulting in many innovative products for industry. Despite this, end users were very skeptical towards machine vision and its robustness in harsh industrial environments. Therefore the Technology Development Centre (TEKES), which funds technology-related research and development projects in universities and individual companies, decided to start a national technology program, Machine Vision 1992-1996. Led by industry, the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is on nationally important, demanding applications. The program will create new industry and business for machine vision producers and encourage the process and manufacturing industries to take advantage of this new technology. So far 60 companies and all major universities and research centers are working on forty different projects. The key themes are process control, robot vision and quality control.
Forward kinematic analysis of in-vivo robot for stomach biopsy.
Sutar, Mihir Kumar; Pathak, P M; Sharma, A K; Mehta, N K; Gupta, V K
2013-09-01
The introduction of robotic medical assistance in biopsy and stomach cavity exploration is one of the most important milestones in the field of medical science. The research is still in its infancy and many issues like limitations in dexterity, control, and abdominal cavity vision are the main concerns of many researchers around the globe. This paper presents the design aspects and the kinematic analysis of a 4 degrees of freedom (DOF) hyper-redundant in-vivo robot for stomach biopsy. The proposed robot will be inserted through the tool channel of a conventional 4-DOF endoscope and this will increase the dexterity and ease in reaching the furthest parts of the stomach beyond the duodenum. Unlike the traditional biopsy tool, the present design will enhance dexterity due to its 4 DOF in addition to the endoscope's DOF. The endoscope will be positioned at the entrance to the stomach in the esophagus and the robot will move to the desired position inside the stomach for biopsy and exploration. The current robot is wire-actuated and possesses better maneuverability. The forward kinematic analysis of the proposed robot is presented in this paper.
New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976
Report on First International Workshop on Robotic Surgery in Thoracic Oncology.
Veronesi, Giulia; Cerfolio, Robert; Cingolani, Roberto; Rueckert, Jens C; Soler, Luc; Toker, Alper; Cariboni, Umberto; Bottoni, Edoardo; Fumagalli, Uberto; Melfi, Franca; Milli, Carlo; Novellis, Pierluigi; Voulaz, Emanuele; Alloisio, Marco
2016-01-01
A workshop of experts from France, Germany, Italy, and the United States took place at Humanitas Research Hospital Milan, Italy, on February 10 and 11, 2016, to examine techniques for and applications of robotic surgery to thoracic oncology. The main topics of presentation and discussion were robotic surgery for lung resection; robot-assisted thymectomy; minimally invasive surgery for esophageal cancer; new developments in computer-assisted surgery and medical applications of robots; the challenge of costs; and future clinical research in robotic thoracic surgery. The following article summarizes the main contributions to the workshop. The Workshop consensus was that since video-assisted thoracoscopic surgery (VATS) is becoming the mainstream approach to resectable lung cancer in North America and Europe, robotic surgery for thoracic oncology is likely to be embraced by an increasing number of thoracic surgeons, since it has technical advantages over VATS, including intuitive movements, tremor filtration, more degrees of manipulative freedom, motion scaling, and high-definition stereoscopic vision. These advantages may make robotic surgery more accessible than VATS to trainees and experienced surgeons and also lead to expanded indications. However, the high costs of robotic surgery and absence of tactile feedback remain obstacles to widespread dissemination. A prospective multicentric randomized trial (NCT02804893) to compare robotic and VATS approaches to stages I and II lung cancer will start shortly.
Development of a machine vision system for automated structural assembly
NASA Technical Reports Server (NTRS)
Sydow, P. Daniel; Cooper, Eric G.
1992-01-01
Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide pose estimates of sufficient accuracy to define the target position.
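The range component of such a target pose estimate can be sketched with the pinhole camera model (a generic sketch, not the paper's algorithm; the focal length and target size below are assumed illustrative values):

```python
def target_range(focal_px, target_width_m, width_px):
    """Pinhole-model range recovery: a target of known physical size
    appears smaller in the image in proportion to its distance."""
    return focal_px * target_width_m / width_px

def target_offset(focal_px, range_m, du_px, dv_px):
    """Back-project pixel offsets from the image center into lateral
    and vertical offsets at the estimated range."""
    return (du_px * range_m / focal_px, dv_px * range_m / focal_px)

# A 5 cm target imaged 50 px wide by a camera with a 1000 px focal length.
z = target_range(1000, 0.05, 50)
x, y = target_offset(1000, z, 20, -10)
```

A full six-degree-of-freedom pose estimate would additionally recover orientation from several target feature points, but range-plus-offset already localizes the receptacle for guidance.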
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
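The inverse perspective mapping step relies on a flat-floor assumption: each pixel's viewing ray is intersected with the ground plane. A minimal sketch (the function name, camera parameters, and axis conventions below are assumptions for illustration, not the paper's implementation):

```python
import math

def ipm_ground_point(u, v, f, cu, cv, cam_height, pitch=0.0):
    """Inverse perspective mapping under a flat-floor assumption:
    intersect the viewing ray of pixel (u, v) with the ground plane.
    Camera frame: z forward, y down; (cu, cv) is the principal point."""
    ray = (u - cu, v - cv, f)
    # Rotate the ray by the camera pitch about the x-axis.
    c, s = math.cos(pitch), math.sin(pitch)
    ry = c * ray[1] + s * ray[2]
    rz = -s * ray[1] + c * ray[2]
    if ry <= 0:
        return None  # ray at or above the horizon: no floor intersection
    t = cam_height / ry           # scale so the ray drops cam_height
    return (ray[0] * t, rz * t)   # (lateral, forward) on the floor

# Pixel well below the image center maps to a point 1 m ahead.
gp = ipm_ground_point(320, 490, f=500, cu=320, cv=240, cam_height=0.5)
```

Pixels whose mapped floor position disagrees with the tracked appearance are candidates for the obstacle class; the paper then refines the labels with a Markov random field.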
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
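The edge detection architecture used to validate the camera is not specified beyond its name; a common low-level kernel of this kind, well suited to FPGA pipelines, is the 3x3 Sobel operator, sketched here in software:

```python
def sobel_magnitude(img):
    """3x3 Sobel gradient magnitude over a grayscale image given as a
    list of rows; borders are left at zero. Uses |gx| + |gy| as the
    cheap magnitude approximation typical of hardware pipelines."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge produces a strong response along the boundary.
step = [[0, 0, 9, 9]] * 4
edges = sobel_magnitude(step)
```

In an FPGA implementation the two convolutions run in parallel on a streaming line buffer, which is what makes this class of operator attractive for the reconfigurable camera described above.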
Modeling the convergence accommodation of stereo vision for binocular endoscopy.
Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin
2018-02-01
The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.
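Two standard relations underlie such a convergence model: rectified-stereo triangulation and the symmetric vergence angle that makes the optical axes intersect at the fixation target. A hedged sketch (the baseline, focal length, and disparity values are illustrative, not taken from the paper):

```python
import math

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: Z = f * b / d."""
    return focal_px * baseline_m / disparity_px

def symmetric_vergence(baseline_m, fixation_depth_m):
    """Angle each camera must rotate inward so that the two optical
    axes intersect at the fixation target (symmetric convergence)."""
    return math.atan2(baseline_m / 2.0, fixation_depth_m)

# A 4 mm baseline endoscope with a 500 px focal length and 10 px disparity.
z = depth_from_disparity(500, 0.004, 10)
angle = symmetric_vergence(0.004, z)
```

Dynamically servoing the (virtual) vergence angle to the fixation depth is what keeps the disparity distribution centered as the scope rolls and pitches.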
Robot vision system programmed in Prolog
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Hack, Ralf
1995-10-01
This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
Discovery regarding visual neuron adaptation applicable to robot use
NASA Astrophysics Data System (ADS)
Korepanov, S.
1985-06-01
Scientists of the USSR Academy of Sciences' Institute of Higher Nervous Activity and Neurophysiology discovered a mechanism of light adaptation by organs of vision to changes in the brightness of light. Studies of the reaction of the visual center of the cerebral cortex showed that neurons in it are arranged in different ways: some, which are called classic neurons, have a fairly stable spatial orientation, while that of others is variable. It was found that vision operates chiefly on the basis of classic neurons in all conditions of illumination. Neurons of the second type are activated during sharp fluctuations of illumination. These neurons momentarily assume the orientation of the classic ones, thus serving as a kind of back-up for the primary system of the brain's visual center. Results of these studies will aid medical specialists in their practical work, as well as developers of image-recognition systems for new-generation robots.
Becker, Brian C.; Yang, Sungwook; MacLachlan, Robert A.; Riviere, Cameron N.
2012-01-01
Injecting clot-busting drugs such as t-PA into tiny vessels thinner than a human hair in the eye is a challenging procedure, especially since the vessels lie directly on top of the delicate and easily damaged retina. Various robotic aids have been proposed with the goal of increasing safety by removing tremor and increasing precision with motion scaling. We have developed a fully handheld micromanipulator, Micron, that has demonstrated reduced tremor when cannulating porcine retinal veins in an “open sky” scenario. In this paper, we present work towards handheld robotic cannulation with the goal of vision-based virtual fixtures guiding the tip of the cannula to the vessel. Using a realistic eyeball phantom, we address sclerotomy constraints, eye movement, and non-planar retina. Preliminary results indicate a handheld micromanipulator aided by visual control is a promising solution to retinal vessel occlusion. PMID:24649479
Reconfigurable assembly work station
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yhu-Tin; Abell, Jeffrey A.; Spicer, John Patrick
A reconfigurable autonomous workstation includes a multi-faced superstructure including a horizontally-arranged frame section supported on a plurality of posts. The posts form a plurality of vertical faces arranged between adjacent pairs of the posts, the faces including first and second faces and a power distribution and position reference face. A controllable robotic arm suspends from the rectangular frame section, and a work table fixedly couples to the power distribution and position reference face. A plurality of conveyor tables are fixedly coupled to the work table, including a first conveyor table through the first face and a second conveyor table through the second face. A vision system monitors the work table and each of the conveyor tables. A programmable controller monitors signal inputs from the vision system to identify and determine orientation of the component on the first conveyor table and control the robotic arm to execute an assembly task.
Experimental validation of docking and capture using space robotics testbeds
NASA Technical Reports Server (NTRS)
Spofford, John; Schmitz, Eric; Hoff, William
1991-01-01
This presentation describes the application of robotic and computer vision systems to validate docking and capture operations for space cargo transfer vehicles. Three applications are discussed: (1) air bearing systems in two dimensions that yield high quality free-flying, flexible, and contact dynamics; (2) validation of docking mechanisms with misalignment and target dynamics; and (3) computer vision technology for target location and real-time tracking. All the testbeds are supported by a network of engineering workstations for dynamic and controls analyses. Dynamic simulation of multibody rigid and elastic systems are performed with the TREETOPS code. MATRIXx/System-Build and PRO-MATLAB/Simulab are the tools for control design and analysis using classical and modern techniques such as H-infinity and LQG/LTR. SANDY is a general design tool to optimize numerically a multivariable robust compensator with a user-defined structure. Mathematica and Macsyma are used to derive symbolically dynamic and kinematic equations.
Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System
2016-01-01
This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data captured from the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour. PMID:27379165
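The core mean shift iteration that the adaptive kernel plugs into can be sketched with a flat circular kernel (a simplification; the paper's contribution is precisely replacing this fixed kernel with a learned, shape-adaptive one):

```python
def mean_shift(points, start, bandwidth, iters=50):
    """Flat-kernel mean shift: repeatedly move the window center to
    the mean of the sample points falling inside the bandwidth, which
    climbs toward a local density mode."""
    cx, cy = start
    for _ in range(iters):
        inside = [(x, y) for x, y in points
                  if (x - cx) ** 2 + (y - cy) ** 2 <= bandwidth ** 2]
        if not inside:
            break
        nx = sum(x for x, _ in inside) / len(inside)
        ny = sum(y for _, y in inside) / len(inside)
        if (nx, ny) == (cx, cy):
            break  # converged: the window stopped moving
        cx, cy = nx, ny
    return cx, cy

# A tight cluster near (5, 5) pulls the window from (4, 4) onto the mode;
# the outlier at (9, 0) lies outside the bandwidth and is ignored.
cluster = [(5.0, 5.0), (5.2, 4.8), (4.8, 5.2), (9.0, 0.0)]
mode = mean_shift(cluster, start=(4.0, 4.0), bandwidth=2.0)
```

In a tracker, the "points" are pixels weighted by how well they match the target appearance model, and the converged window gives the new object position.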
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.
1971-01-01
Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.
Adjustable Bracket For Entry Of Welding Wire
NASA Technical Reports Server (NTRS)
Gilbert, Jeffrey L.; Gutow, David A.
1993-01-01
Wire-entry bracket on welding torch in robotic welding system provides for adjustment of angle of entry of welding wire over range of plus or minus 30 degrees from nominal entry angle. Wire positioned so it does not hide weld joint in view of through-the-torch computer-vision system part of robot-controlling and -monitoring system. Swiveling bracket also used on nonvision torch on which wire-feed-through tube interferes with workpiece. Angle simply changed to one giving sufficient clearance.
Biomorphic Explorers Leading Towards a Robotic Ecology
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Miralles, Carlos; Chao, Tien-Hsin
1999-01-01
This paper presents viewgraphs of biomorphic explorers as they provide extended survival and useful life of robots in ecology. The topics include: 1) Biomorphic Explorers; 2) Advanced Mobility for Biomorphic Explorers; 3) Biomorphic Explorers: Size Based Classification; 4) Biomorphic Explorers: Classification (Based on Mobility and Ambient Environment); 5) Biomorphic Flight Systems: Vision; 6) Biomorphic Glider Deployment Concept: Larger Glider Deploy/Local Relay; 7) Biomorphic Glider Deployment Concept: Balloon Deploy/Dual Relay; 8) Biomorphic Explorer: Conceptual Design; 9) Biomorphic Gliders; and 10) Applications.
Supervising Remote Humanoids Across Intermediate Time Delay
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Rabe, Kenneth; Allan, Mark
2006-01-01
The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will be required to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading the plan for a day seems excessive. An approach for controlling humanoids under intermediate time delay is presented. This approach uses software running within a ground control cockpit to predict an immersed robot supervisor's motions which the remote humanoid autonomously executes. Initial results are presented.
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to evaluate the performance objectively. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair.
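For two independent estimates of the same quantity, the minimum-variance combination is the inverse-variance-weighted average. A minimal sketch (the sensor values and variances below are illustrative, not from the paper):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance (inverse-variance-weighted) fusion of two
    independent estimates of the same quantity, e.g. head orientation
    from an inertial sensor and from vision. Each estimate is weighted
    by the other sensor's variance, and the fused variance is always
    smaller than either input variance."""
    w_a = var_b / (var_a + var_b)
    fused = w_a * est_a + (1.0 - w_a) * est_b
    fused_var = var_a * var_b / (var_a + var_b)
    return fused, fused_var

# Equally confident sensors: the fusion is the mean and the variance halves.
yaw, yaw_var = fuse(10.0, 1.0, 12.0, 1.0)
```

When one sensor degrades (e.g. vision loses the face), its variance grows and its weight in the fusion automatically shrinks.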
Finding intrinsic rewards by embodied evolution and constrained reinforcement learning.
Uchibe, Eiji; Doya, Kenji
2008-12-01
Understanding the design principle of reward functions is a substantial challenge both in artificial intelligence and neuroscience. Successful acquisition of a task usually requires not only rewards for goals, but also for intermediate states to promote effective exploration. This paper proposes a method for designing 'intrinsic' rewards of autonomous agents by combining constrained policy gradient reinforcement learning and embodied evolution. To validate the method, we use Cyber Rodent robots, in which collision avoidance, recharging from battery packs, and 'mating' by software reproduction are three major 'extrinsic' rewards. We show in hardware experiments that the robots can find appropriate 'intrinsic' rewards for the vision of battery packs and other robots to promote approach behaviors.
Robotics supporting autonomy. 5th French Japanese Conference on Bio-ethics.
Gelin, Rodolphe
2013-12-01
The aim of this paper is to propose a new vision of robots. Although generally seen as a threat to humanity, or at least to employment, we will demonstrate that this new kind of machine can be a support not only for people losing their autonomy but for everyone. Robots will not replace people; they will assist them. The mass production of these companion robots will create a new industry that could take over from the automotive and computer industries in this century. This access to the mass market will require solving technological and acceptability problems through the common work of researchers, engineers, users and the major stakeholders of our society.
Dickstein-Fischer, Laurie; Fischer, Gregory S
2014-01-01
It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot taking on an expressive cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, enabling significantly enhanced and more readily available diagnosis and continuity of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.
Localization from Visual Landmarks on a Free-Flying Robot
NASA Technical Reports Server (NTRS)
Coltin, Brian; Fusco, Jesse; Moratto, Zack; Alexandrov, Oleg; Nakamura, Robert
2016-01-01
We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on board the International Space Station (ISS). Astrobee will conduct experiments in microgravity, as well as assist astronauts and ground controllers. Astrobee replaces the SPHERES robots which currently operate on the ISS, which were limited to operating in a small cube since their localization system relied on triangulation from ultrasonic transmitters. Astrobee localizes with only monocular vision and an IMU, enabling it to traverse the entire US segment of the station. Features detected on a previously-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise. Finally, we extensively evaluate the behavior of the filter on a two-dimensional testing surface.
Localization from Visual Landmarks on a Free-Flying Robot
NASA Technical Reports Server (NTRS)
Coltin, Brian; Fusco, Jesse; Moratto, Zack; Alexandrov, Oleg; Nakamura, Robert
2016-01-01
We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.
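In a simplified linear, one-dimensional setting, the EKF fusion described above reduces to the classic predict/update cycle: dead-reckon with the IMU, then correct with a map-based visual fix. A textbook sketch, not Astrobee's filter (the process and measurement noise values are assumed):

```python
def kalman_step(x, p, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D Kalman filter.
    x, p: state estimate and its variance;
    u: IMU-integrated motion; z: visual position fix;
    q, r: process and measurement noise variances."""
    # Predict: apply the motion and grow the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the fix, weighted by relative confidence.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Stationary robot, repeated fixes at the same landmark: the estimate
# converges toward the fix and the variance shrinks.
x, p = 0.0, 1.0
for z in [1.0, 1.0, 1.0]:
    x, p = kalman_step(x, p, u=0.0, z=z)
```

The real filter carries a full pose state (position, velocity, orientation, IMU biases) and linearizes the vision measurement model at each step, but the gain-weighted blend is the same mechanism.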
Job-shop scheduling applied to computer vision
NASA Astrophysics Data System (ADS)
Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David
1997-09-01
This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and waiting time of in-process tasks. This condition is very important in some applications of computer vision in which the time to finish the total process is particularly critical -- quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on two matrices derived from the precedence relationships between tasks and on the data obtained from them. The developed scheduling algorithm has been tested in an application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.
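The paper's two-matrix algorithm is not reproduced here; a generic greedy list-scheduling sketch shows the kind of precedence-respecting parallel dispatch that such methods refine (task durations and precedence pairs below are illustrative):

```python
def list_schedule(durations, precedences, m):
    """Greedy list scheduling: dispatch every task whose predecessors
    have all finished to the earliest-free processor. This respects
    precedence and reduces idle time but, unlike an optimizing
    scheduler, does not guarantee a minimal makespan."""
    n = len(durations)
    indeg = [0] * n
    succ = [[] for _ in range(n)]
    for a, b in precedences:      # a must finish before b starts
        succ[a].append(b)
        indeg[b] += 1
    ready = [i for i in range(n) if indeg[i] == 0]
    procs = [0.0] * m             # time at which each processor frees up
    finish = [0.0] * n
    while ready:
        t = ready.pop(0)
        p = min(range(m), key=lambda i: procs[i])
        preds_done = max((finish[a] for a, b in precedences if b == t),
                         default=0.0)
        start = max(procs[p], preds_done)
        finish[t] = start + durations[t]
        procs[p] = finish[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish)            # makespan

# Two processors, four tasks; task 3 must wait for tasks 0 and 1.
makespan = list_schedule([2, 3, 2, 1], [(0, 3), (1, 3)], m=2)
```

In the vision-pipeline setting, each "task" is an image processing stage and the precedence pairs encode which intermediate results each stage consumes.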
An Experimental Study of an Ultra-Mobile Vehicle for Off-Road Transportation.
1983-07-01
The ultimate goal of a vision system is to understand the content of a scene and to extract useful information from it. Four existing robot-vision systems, including the General Motors CONSIGHT system, the UNIVISIUN system, and the Westinghouse system, are reviewed. (Remaining text and equations (5.48)-(5.49) are illegible in the source.)
Intelligent robot trends and predictions for the first year of the new millennium
NASA Astrophysics Data System (ADS)
Hall, Ernest L.
2000-10-01
An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The current use of these machines in outer space, medicine, hazardous materials, defense applications and industry is being pursued with vigor. In factory automation, industrial robots can improve productivity, increase product quality and improve competitiveness. The computer and the robot have both been developed during recent times. The intelligent robot combines both technologies and requires a thorough understanding and knowledge of mechatronics. Today's robotic machines are faster, cheaper, more repeatable, more reliable and safer than ever. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has more than a billion-dollar market in the U.S. and is growing. Feasibility studies show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society. The fearful robot stories may help us prevent future disaster. The inspirational robot ideas may inspire the scientists of tomorrow. However, the intelligent robot ideas, which can be reduced to practice, will change the world.
An embedded vision system for an unmanned four-rotor helicopter
NASA Astrophysics Data System (ADS)
Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James
2006-10-01
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.
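As a concrete illustration of the learn-by-interaction loop the abstract describes, here is a minimal tabular Q-learning sketch on a toy chain environment; the environment, hyperparameters, and reward structure are illustrative assumptions, not taken from the paper.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy chain: action 1 moves right toward a
    rewarded goal state, action 0 moves left. Illustrative only."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps:
                a = random.randrange(n_actions)  # explore
            else:
                # greedy; ties broken toward the higher action index
                a = max(range(n_actions), key=lambda x: (Q[s][x], x))
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the goal
            # Temporal-difference update toward reward plus discounted lookahead
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, action values near the goal favor moving toward it; in a vision setting the state would instead be derived from image features rather than an integer index.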
McNulty, Jason D; Klann, Tyler; Sha, Jin; Salick, Max; Knight, Gavin T; Turng, Lih-Sheng; Ashton, Randolph S
2014-06-07
Increased realization of the spatial heterogeneity found within in vivo tissue microenvironments has prompted the desire to engineer similar complexities into in vitro culture substrates. Microcontact printing (μCP) is a versatile technique for engineering such complexities onto cell culture substrates because it permits microscale control of the relative positioning of molecules and cells over large surface areas. However, challenges associated with precisely aligning and superimposing multiple μCP steps severely limit the extent of substrate modification that can be achieved using this method. Thus, we investigated the feasibility of using a vision-guided selectively compliant articulated robotic arm (SCARA) for μCP applications. SCARAs are routinely used to perform high-precision, repetitive tasks in manufacturing, and even low-end models are capable of achieving microscale precision. Here, we present customization of a SCARA to execute robotic μCP (R-μCP) onto gold-coated microscope coverslips. The system not only possesses the ability to align multiple polydimethylsiloxane (PDMS) stamps but also has the capability to do so even after the substrates have been removed, reacted to graft polymer brushes, and replaced back into the system. Moreover, unbiased computerized analysis shows that the system performs such sequential patterning with <10 μm precision and accuracy, which is equivalent to the repeatability specifications of the employed SCARA model. R-μCP should facilitate the engineering of in vivo-like complexities onto culture substrates and their integration with microfluidic devices.
NASA Astrophysics Data System (ADS)
Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.
2012-10-01
We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform a 3D-reconstruction-from-monocular-vision technique that updates a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented in which each detected person is assigned to a set of action classes picked to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as familiarize itself with regular faces and actions to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
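The core mean-shift step underlying such trackers can be sketched as a kernel-weighted mean update. This is the generic Gaussian-kernel version, not the authors' ML-based multimodel variant with detection associations.

```python
import math

def mean_shift(points, start, bandwidth=1.0, iters=50):
    """Move a 2D estimate toward the local density mode by repeatedly
    taking the Gaussian-kernel weighted mean of the sample points."""
    x, y = start
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            w = math.exp(-((px - x) ** 2 + (py - y) ** 2)
                         / (2.0 * bandwidth ** 2))
            wsum += w
            wx += w * px
            wy += w * py
        # New estimate: weighted centroid of the neighborhood
        x, y = wx / wsum, wy / wsum
    return x, y
```

In a tracker, the points would be pixel locations weighted by appearance similarity to the target model, and the converged position becomes the new object location.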
Hyperspectral imaging for nondestructive evaluation of tomatoes
USDA-ARS?s Scientific Manuscript database
Machine vision methods for quality and defect evaluation of tomatoes have been studied for online sorting and robotic harvesting applications. We investigated the use of a hyperspectral imaging system for quality evaluation and defect detection for tomatoes. Hyperspectral reflectance images were a...
NASA Astrophysics Data System (ADS)
Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.
2017-05-01
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and the cosmonaut's poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools for tracking the cosmonaut's position and movements, and on the basis of these data builds its itinerary. The interaction in the "cosmonaut-robot" system on the lunar surface is significantly different from that on the Earth's surface. For example, a man dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human performs movements less accurately and makes mistakes more often. All this leads to new requirements for the usability of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication, it is necessary to provide options for duplicating commands at the task stages and for gesture recognition. New tools and techniques for space missions must first be examined in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes the methods of detection and tracking of movements and gesture recognition of the cosmonaut during EVA, which can be used for the design of a human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. Simulation involves environment visualization and modeling of the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.
Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan
2013-01-01
Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831
The Design of Artificial Intelligence Robot Based on Fuzzy Logic Controller Algorithm
NASA Astrophysics Data System (ADS)
Zuhrie, M. S.; Munoto; Hariadi, E.; Muslim, S.
2018-04-01
Artificial Intelligence Robot is a wheeled robot driven by a DC motor that moves along a wall using an ultrasonic sensor to detect obstacles. This study uses HC-SR04 ultrasonic sensors to measure the distance between the robot and the wall based on ultrasonic waves. The robot uses a Fuzzy Logic Controller to adjust the speed of the DC motor. When the ultrasonic sensor detects a certain distance, the sensor data is processed on an ATmega8 and then passed to an ATmega16, where it is evaluated against the fuzzy rules to drive the DC motor speed. The program used to adjust the speed of the DC motor was written in CodeVisionAVR (CVAVR). The readable distance of the ultrasonic sensor is 3 cm to 250 cm, with a response time of 0.5 s. Testing the robot on walls with a setpoint of 9 cm to 10 cm produced average error values of -12% on the L-shaped wall, -8% on the T-shaped wall, -8% on the U-shaped wall, and -1% on the square wall.
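The distance-to-speed mapping such a controller performs can be sketched with triangular membership functions and centroid defuzzification. The set boundaries and rule outputs below are illustrative assumptions, not the paper's tuned values.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_cm):
    """Map wall distance (cm) to motor speed (percent): fuzzify the
    distance into near/ok/far sets, fire one rule per set, and take the
    weighted-average (centroid) defuzzification."""
    near = tri(distance_cm, 0, 5, 10)
    ok = tri(distance_cm, 5, 10, 15)
    far = tri(distance_cm, 10, 15, 250)
    # Rules: near -> slow (20%), ok -> cruise (60%), far -> fast (90%)
    num = near * 20.0 + ok * 60.0 + far * 90.0
    den = near + ok + far
    return num / den if den else 0.0
```

A distance between two set peaks blends the adjacent rules, which is what gives a fuzzy controller its smooth response compared with a hard threshold.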
Fast instantaneous center of rotation estimation algorithm for a skid-steered robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2015-05-01
Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of algorithm quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
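For reference, the idealized no-slip kinematics that the visual-odometry correction compensates for look like this; these are the standard differential-drive relations, and the paper's point is precisely that wheel slip makes this model unreliable on a skid-steered platform.

```python
def icr_from_wheel_speeds(v_left, v_right, track_width):
    """Idealized differential-drive kinematics (no slip): returns
    (linear speed, angular speed, signed ICR turning radius).
    The radius is None for straight-line motion (ICR at infinity)."""
    v = (v_left + v_right) / 2.0          # longitudinal speed
    omega = (v_right - v_left) / track_width  # yaw rate
    radius = v / omega if omega else None     # distance to the ICR
    return v, omega, radius
```

The optical-flow approach in the paper estimates the same quantities directly from the camera, so the estimate reflects actual ground motion rather than commanded wheel speeds.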
Control of a free-flying robot manipulator system
NASA Technical Reports Server (NTRS)
Alexander, H.; Cannon, R. H., Jr.
1985-01-01
The goal of the research is to develop and test control strategies for a self-contained, free-flying space robot. Such a robot would perform operations in space similar to those currently handled by astronauts during extravehicular activity (EVA). The focus of the work is to develop and carry out a program of research with a series of physical Satellite Robot Simulator Vehicles (SRSV's), two-dimensionally freely mobile laboratory models of autonomous free-flying space robots such as might perform extravehicular functions associated with operation of a space station or repair of orbiting satellites. The development of the SRSV and of some of the controller subsystems is described. The two-link arm was fitted to the SRSV base, and researchers explored the open-loop characteristics of the arm and thruster actuators. Work began on building the software foundation necessary for use of the on-board computer, as well as hardware and software for a local vision system for target identification and tracking.
A developmental roadmap for learning by imitation in robots.
Lopes, Manuel; Santos-Victor, José
2007-04-01
In this paper, we present a strategy whereby a robot acquires the capability to learn by imitation following a developmental pathway consisting of three levels: 1) sensory-motor coordination; 2) world interaction; and 3) imitation. With these stages, the system is able to learn tasks by imitating human demonstrators. We describe results of the different developmental stages, involving perceptual and motor skills, implemented in our humanoid robot, Baltazar. At each stage, the system's attention is drawn toward different entities: its own body and, later on, objects and people. Our main contributions are the general architecture and the implementation of all the necessary modules until imitation capabilities are eventually acquired by the robot. Also, several other contributions are made at each level: learning of sensory-motor maps for redundant robots, a novel method for learning how to grasp objects, and a framework for learning task descriptions from observation for program-level imitation. Finally, vision is used extensively as the sole sensing modality (sometimes in a simplified setting), avoiding the need for special data-acquisition hardware.
Proactive learning for artificial cognitive systems
NASA Astrophysics Data System (ADS)
Lee, Soo-Young
2010-04-01
The Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between the robot and an unknown environment (people, other robots, cyberspace). Situation awareness in an unknown environment requires receiving audiovisual signals and accumulating knowledge. If the knowledge is not sufficient, the PL should improve it by itself through the internet and other sources. Human-oriented decision making also requires the robot to have self-identity and emotion. Finally, the developed models and system will be mounted on a robot for a human-robot co-existing society. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans at several levels of cognitive ability.
An integrated dexterous robotic testbed for space applications
NASA Technical Reports Server (NTRS)
Li, Larry C.; Nguyen, Hai; Sauer, Edward
1992-01-01
An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.
Bioinspired decision architectures containing host and microbiome processing units.
Heyde, K C; Gallagher, P W; Ruder, W C
2016-09-27
Biomimetic robots have been used to explore and explain natural phenomena ranging from the coordination of ants to the locomotion of lizards. Here, we developed a series of decision architectures inspired by the information exchange between a host organism and its microbiome. We first modeled the biochemical exchanges of a population of synthetically engineered E. coli. We then built a physical, differential-drive robot that contained an integrated, onboard computer vision system. A relay was established between the simulated population of cells and the robot's microcontroller. By placing the robot within a target-containing two-dimensional arena, we explored how different aspects of the simulated cells and the robot's microcontroller could be integrated to form hybrid decision architectures. We found that distinct decision architectures allow us to develop models of computation with specific strengths such as runtime efficiency or minimal memory allocation. Taken together, our hybrid decision architectures provide a new strategy for developing bioinspired control systems that integrate both living and nonliving components.
Homography-based visual servo regulation of mobile robots.
Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash
2005-10-01
A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
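The transformation relating two views of the same planar target points is a homography; it can be estimated from point correspondences with the standard direct linear transform (DLT), sketched below with NumPy. The paper goes on to decompose such a transformation into rotation and translation error signals, which this sketch does not cover.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous
    coordinates) via the direct linear transform.
    src, dst: (N, 2) arrays of corresponding points, N >= 4,
    no three points collinear."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale
```

With noisy correspondences one would normalize the points first and use more than four matches; the least-squares solution then still falls out of the same SVD.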
Video. Natural Orifice Translumenal Endoscopic Surgery with a miniature in vivo surgical robot.
Lehman, Amy C; Dumpert, Jason; Wood, Nathan A; Visty, Abigail Q; Farritor, Shane M; Varnell, Brandon; Oleynikov, Dmitry
2009-07-01
The application of flexible endoscopy tools for Natural Orifice Translumenal Endoscopic Surgery (NOTES) is constrained due to limitations in dexterity, instrument insertion, navigation, visualization, and retraction. Miniature endolumenal robots can mitigate these constraints by providing a stable platform for visualization and dexterous manipulation. This video demonstrates the feasibility of using an endolumenal miniature robot to improve vision and to apply off-axis forces for task assistance in NOTES procedures. A two-armed miniature in vivo robot has been developed for NOTES. The robot is remotely controlled, has on-board cameras for guidance, and grasper and cautery end effectors for manipulation. Two basic configurations of the robot allow for flexibility during insertion and rigidity for visualization and tissue manipulation. Embedded magnets in the body of the robot and in an exterior surgical console are used for attaching the robot to the interior abdominal wall. This enables the surgeon to arbitrarily position the robot throughout a procedure. The visualization and task assistance capabilities of the miniature robot were demonstrated in a nonsurvivable NOTES procedure in a porcine model. An endoscope was used to create a transgastric incision and advance an overtube into the peritoneal cavity. The robot was then inserted through the overtube and into the peritoneal cavity using an endoscope. The surgeon successfully used the robot to explore the peritoneum and perform small-bowel dissection. This study has demonstrated the feasibility of inserting an endolumenal robot per os. Once deployed, the robot provided visualization and dexterous capabilities from multiple orientations. Further miniaturization and increased dexterity will enhance future capabilities.
Leal Ghezzi, Tiago; Campos Corleta, Oly
2016-10-01
The idea of reproducing himself by means of a mechanical robot structure has been in man's imagination for the last 3000 years. However, the use of robots in medicine has only 30 years of history. The application of robots in surgery originates from the need of modern man to achieve two goals: telepresence and the performance of repetitive, accurate tasks. The first "robot surgeon" used on a human patient was the PUMA 200 in 1985. In the 1990s, scientists developed the concept of the "master-slave" robot, which consisted of a robot with remote manipulators controlled by a surgeon at a surgical workstation. Despite the lack of force and tactile feedback, technical advantages of robotic surgery, such as 3D vision, a stable and magnified image, EndoWrist instruments, physiologic tremor filtering, and motion scaling, have been considered fundamental to overcoming many of the limitations of laparoscopic surgery. Since the approval of the da Vinci(®) robot by international agencies, American, European, and Asian surgeons have proved its feasibility and safety for the performance of many different robot-assisted surgeries. Comparative studies of robotic and laparoscopic surgical procedures in general surgery have shown similar results with regard to perioperative, oncological, and functional outcomes. However, higher costs and the lack of haptic feedback represent the major limitations of current robotic technology to becoming the standard technique of minimally invasive surgery worldwide. Therefore, the future of robotic surgery involves cost reduction, development of new platforms and technologies, creation and validation of curricula and virtual simulators, and the conduction of randomized clinical trials to determine the best applications of robotics.
Development of embedded real-time and high-speed vision platform
NASA Astrophysics Data System (ADS)
Ouyang, Zhenxing; Dong, Yimin; Yang, Hua
2015-12-01
Currently, high-speed vision platforms are widely used in many applications, such as robotics and industrial automation. However, in traditional high-speed vision platforms a personal computer (PC), whose large size makes it unsuitable for compact systems, is an indispensable component for human-computer interaction. Therefore, this paper develops an embedded real-time, high-speed vision platform, ER-HVP Vision, which works entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. Hence, ER-HVP Vision delivers this capability in a compact 320 mm x 250 mm x 87 mm enclosure. Experimental results are also given, indicating that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed vision platform.
Robotic ICSI (intracytoplasmic sperm injection).
Lu, Zhe; Zhang, Xuping; Leung, Clement; Esfandiari, Navid; Casper, Robert F; Sun, Yu
2011-07-01
This paper is the first report of robotic intracytoplasmic sperm injection (ICSI). ICSI is a clinical procedure performed worldwide in fertility clinics, requiring pick-up of a single sperm and insertion of it into an oocyte (i.e., egg cell). Since its invention 20 years ago, ICSI has been conducted manually by a handful of highly skilled embryologists; however, success rates vary significantly among clinics due to poor reproducibility and inconsistency across operators. We leverage our work in robotic cell injection to realize robotic ICSI and aim, ultimately, to standardize how clinical ICSI is performed. This paper presents some of the technical aspects of our robotic ICSI system, including a cell holding device, motion control, and computer vision algorithms. The system performs visual tracking of a single sperm, robotic immobilization of the sperm, aspiration of the sperm with picoliter volume, and insertion of the sperm into an oocyte with a high degree of reproducibility. The system requires minimal human involvement (only a few computer mouse clicks) and is independent of human operator skill. Using the hamster oocyte-human sperm model in preliminary trials, the robotic system demonstrated a high success rate of 90.0% and survival rate of 90.7% (n=120). © 2011 IEEE
Applied estimation for hybrid dynamical systems using perceptional information
NASA Astrophysics Data System (ADS)
Plotnik, Aaron M.
This dissertation uses the motivating example of robotic tracking of mobile deep ocean animals to present innovations in robotic perception and estimation for hybrid dynamical systems. An approach to estimation for hybrid systems is presented that utilizes uncertain perceptional information about the system's mode to improve tracking of its mode and continuous states. This results in significant improvements in situations where previously reported methods of estimation for hybrid systems perform poorly due to poor distinguishability of the modes. The specific application that motivates this research is an automatic underwater robotic observation system that follows and films individual deep ocean animals. A first version of such a system has been developed jointly by the Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute (MBARI). This robotic observation system is successfully fielded on MBARI's ROVs, but agile specimens often evade the system. When a human ROV pilot performs this task, one advantage that he has over the robotic observation system in these situations is the ability to use visual perceptional information about the target, immediately recognizing any changes in the specimen's behavior mode. With the approach of the human pilot in mind, a new version of the robotic observation system is proposed which is extended to (a) derive perceptional information (visual cues) about the behavior mode of the tracked specimen, and (b) merge this dissimilar, discrete and uncertain information with more traditional continuous noisy sensor data by extending existing algorithms for hybrid estimation. These performance enhancements are enabled by integrating techniques in hybrid estimation, computer vision and machine learning. First, real-time computer vision and classification algorithms extract a visual observation of the target's behavior mode. 
Existing hybrid estimation algorithms are extended to admit this uncertain but discrete observation, complementing the information available from more traditional sensors. State tracking is achieved using a new form of Rao-Blackwellized particle filter called the mode-observed Gaussian Particle Filter. Performance is demonstrated using data from simulation and data collected on actual specimens in the ocean. The framework for estimation using both traditional and perceptional information is easily extensible to other stochastic hybrid systems with mode-related perceptional observations available.
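A generic bootstrap particle filter cycle, of which the mode-observed Gaussian particle filter is a specialized (Rao-Blackwellized) variant, can be sketched for a scalar state as follows; all models and noise levels here are illustrative assumptions, not the dissertation's.

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    """One predict-weight-resample cycle of a bootstrap particle filter
    for a scalar state with an additive control input and a Gaussian
    sensor model."""
    # Predict: propagate each particle through the motion model.
    particles = [p + control + random.gauss(0.0, motion_noise)
                 for p in particles]
    # Weight: likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))
```

Admitting a discrete perceptional mode observation, as the dissertation does, amounts to multiplying each particle's weight by an additional likelihood term for the observed mode.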
People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments
NASA Astrophysics Data System (ADS)
Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.
People detection and tracking is a key issue for social robot design and effective human robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real world scenarios it is common to find: unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people location. After segmentation, an adaptive contour people model based on people distance to the robot is used to calculate a probability of detecting people. Finally, people are detected merging the probabilities of the contour people model and by evaluating evidence over time by applying a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view with a mobile robot in real world scenarios.
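The distance information the segmentation relies on comes from standard rectified-stereo geometry; below is a minimal sketch of the depth computation plus a crude depth-threshold foreground mask, which is a simplification of (not a substitute for) the paper's body-proportion-based segmentation.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified pinhole stereo: depth Z = f * B / d.
    Returns None for zero disparity (point at infinity)."""
    if disparity_px == 0:
        return None
    return focal_px * baseline_m / disparity_px

def foreground_mask(depth_map, max_depth_m):
    """Crude segmentation: keep pixels closer than max_depth_m,
    treating unknown depths (None) as background."""
    return [[d is not None and d < max_depth_m for d in row]
            for row in depth_map]
```

In practice the cutoff would be adapted per candidate region, since a person's expected image height shrinks predictably with distance to the robot.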
Robotic general surgery: current practice, evidence, and perspective.
Jung, M; Morel, P; Buehler, L; Buchs, N C; Hagen, M E
2015-04-01
Robotic technology began to be adopted in the field of general surgery in the 1990s. Since then, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale, CA, USA) has remained by far the most commonly used system in this domain. The da Vinci surgical system is a master-slave machine that offers three-dimensional vision, articulated instruments with seven degrees of freedom, and additional software features such as motion scaling and tremor filtration. The specific design allows hand-eye alignment with intuitive control of the minimally invasive instruments. As such, robotic surgery appears technologically superior to laparoscopy, overcoming some of the technical limitations that the conventional approach imposes on the surgeon. This article reviews the current literature and the perspective of robotic general surgery. While robotics has been applied to a wide range of general surgery procedures, its precise role in this field remains a subject of further research. To date, only limited clinical evidence has been generated that could establish robotics as the gold standard for general surgery procedures. While surgical robotics is still in its infancy, with multiple novel systems under development and clinical trials in progress, the opportunities for this technology appear endless, and robotics should have a lasting impact on the field of general surgery.
Development of machine-vision system for gap inspection of muskmelon grafted seedlings.
Liu, Siyao; Xing, Zuochang; Wang, Zifan; Tian, Subo; Jahun, Falalu Rabiu
2017-01-01
Grafting robots have been developed worldwide, but auxiliary tasks such as gap inspection of grafted seedlings must still be performed by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquisition system consists of a CCD camera, a lens, and a front white lighting source. Images of the inspected gap were processed and analyzed with HALCON 12.0 software. The recognition algorithm is based on the principle of deformable template matching. First, a template is created from an image of a qualified grafted seedling gap. Then the gap image of a grafted seedling is compared with the template to determine their matching degree, a similarity score ranging from 0 to 1; the less similar the gap is to the template, the smaller the matching degree. Finally, the gap is classified as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged unqualified; otherwise it is judged qualified. To test the system, 100 muskmelon seedlings were grafted and inspected. Results showed that the machine-vision system agreed with human visual inspection of gap qualification for 98% of the seedlings, at an inspection speed of up to 15 seedlings·min-1. With this system the gap inspection process in grafting can be fully automated, making it a key step toward fully automatic grafting robots.
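The 0.58 pass/fail rule above can be sketched in a few lines. HALCON's deformable template matching is proprietary, so a plain normalized cross-correlation score stands in for the matching degree here; the patch data and the way "no match" is handled are illustrative assumptions, not the paper's implementation.

```python
def matching_degree(template, patch):
    """Stand-in matching degree: normalized cross-correlation of two
    equal-length grayscale patches, clamped to [0, 1]."""
    n = len(template)
    mt = sum(template) / n
    mp = sum(patch) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    if dt == 0.0 or dp == 0.0:
        return 0.0                    # flat patch: treat as "no match found"
    return max(0.0, num / (dt * dp))  # anti-correlation counts as no match

def gap_is_qualified(template, patch, threshold=0.58):
    """Paper's rule: matching degree below 0.58 (or no match) -> unqualified."""
    return matching_degree(template, patch) >= threshold

template = [10, 20, 30, 40, 50, 60]   # gap profile of a qualified graft (toy data)
good_gap = [12, 22, 29, 41, 52, 61]   # similar profile -> qualified
bad_gap = [60, 10, 55, 12, 58, 11]    # unrelated profile -> unqualified
```

With these toy patches, `good_gap` scores close to 1 and passes the threshold, while `bad_gap` anti-correlates with the template and is rejected.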
Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness
NASA Technical Reports Server (NTRS)
Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.
2009-01-01
Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head poses and gaze directions would enable the robot to deduce which people are aware of its presence and to predict their future motion for better path planning. Making such inferences requires estimating head pose from facial images that combine multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or the image conditions. Furthermore, we can automatically handle uncertainty in the size and location of the face. We demonstrate a pipeline of on-the-move pedestrian detection with a robot stereo vision system, head segmentation, and head pose estimation in cluttered urban street scenes.
Interaction Challenges in Human-Robot Space Exploration
NASA Technical Reports Server (NTRS)
Fong, Terrence; Nourbakhsh, Illah
2005-01-01
In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.
Gaze-contingent soft tissue deformation tracking for minimally invasive robotic surgery.
Mylonas, George P; Stoyanov, Danail; Deligianni, Fani; Darzi, Ara; Yang, Guang-Zhong
2005-01-01
The introduction of surgical robots in Minimally Invasive Surgery (MIS) has allowed enhanced manual dexterity through the use of microprocessor controlled mechanical wrists. Although fully autonomous robots are attractive, both ethical and legal barriers can prohibit their practical use in surgery. The purpose of this paper is to demonstrate that it is possible to use real-time binocular eye tracking for empowering robots with human vision by using knowledge acquired in situ. By utilizing the close relationship between the horizontal disparity and the depth perception varying with the viewing distance, it is possible to use ocular vergence for recovering 3D motion and deformation of the soft tissue during MIS procedures. Both phantom and in vivo experiments were carried out to assess the potential frequency limit of the system and its intrinsic depth recovery accuracy. The potential applications of the technique include motion stabilization and intra-operative planning in the presence of large tissue deformation.
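The vergence-depth relationship exploited above reduces, in the stereo-camera analogy, to the standard triangulation formula Z = f·B/d: horizontal disparity shrinks as viewing distance grows. A minimal sketch with hypothetical rig parameters (the paper's eye-tracker calibration is not reproduced here):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth from horizontal disparity: Z = f * B / d. Larger disparity
    (stronger vergence) corresponds to a nearer point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 800 px focal length, 65 mm interocular baseline.
z_near = depth_from_disparity(800, 65.0, 40.0)  # 1300 mm: large disparity
z_far = depth_from_disparity(800, 65.0, 4.0)    # 13000 mm: small disparity
```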
A salient region detection model combining background distribution measure for indoor robots.
Li, Na; Xu, Hui; Wang, Zhenhua; Sun, Lining; Chen, Guodong
2017-01-01
The vision system plays an important role in indoor robotics. Saliency detection methods, which capture regions perceived as important, are used to improve the performance of the visual perception system. Most state-of-the-art saliency detection methods, although performing outstandingly on natural images, fail in complicated indoor environments. We therefore propose a new method comprising graph-based RGB-D segmentation, a primary saliency measure, a background distribution measure, and their combination. In addition, region roundness is proposed to describe the compactness of a region, making the background distribution measure more robust. To validate the proposed approach, eleven influential methods are compared on the DSD and ECSSD datasets. Moreover, we build a mobile robot platform for evaluation in a real environment, under three experimental conditions: different viewpoints, illumination variations, and partial occlusions. Experimental results demonstrate that our model outperforms existing methods and is useful for indoor mobile robots.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Robonaut: A Robotic Astronaut Assistant
NASA Technical Reports Server (NTRS)
Ambrose, Robert O.; Diftler, Myron A.
2001-01-01
NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three degree of freedom (DOF) articulated waist, and two, seven DOF arms, giving it an impressive work space for interacting with its environment. Its two, five fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.
Robotic follower experimentation results: ready for FCS increment I
NASA Astrophysics Data System (ADS)
Jaczkowski, Jeffrey J.
2003-09-01
Robotics is a fundamental enabling technology required to meet the U.S. Army's vision to be a strategically responsive force capable of domination across the entire spectrum of conflict. The U. S. Army Research, Development and Engineering Command (RDECOM) Tank Automotive Research, Development & Engineering Center (TARDEC), in partnership with the U.S. Army Research Laboratory, is developing a leader-follower capability for Future Combat Systems. The Robotic Follower Advanced Technology Demonstration (ATD) utilizes a manned leader to provide a high-level proofing of the follower's path, which operates with minimal user intervention. This paper will give a programmatic overview and discuss both the technical approach and operational experimentation results obtained during testing conducted at Ft. Bliss, New Mexico in February-March 2003.
A Petri-net coordination model for an intelligent mobile robot
NASA Technical Reports Server (NTRS)
Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.
1990-01-01
The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.
The Da Vinci Xi and robotic radical prostatectomy-an evolution in learning and technique.
Goonewardene, S S; Cahill, D
2017-06-01
The da Vinci Xi robot has been introduced as the successor to the Si platform. The promise of the Xi is to open the door to new surgical procedures; for robotic-assisted radical prostatectomy (RARP) and pelvic surgery, the potential gains are better vision and longer instruments. How has the Xi affected operative and pathological parameters as indicators of surgical performance? This is a comparison of an initial series of 42 RARPs with the Xi system in 2015 against a series using the Si system immediately before Xi uptake in the same calendar year, and an Si series performed by the same surgeon synchronously with the Xi series, using operative time, blood loss, and positive margins as surrogates of surgical performance. Subjectively and objectively, there is a learning curve to Xi uptake, seen in longer operative times, but no impact on T2 positive margins, the single measure most reflective of RARP outcomes. Subjectively, the vision of the Xi is inferior to the Si system, and the integrated diathermy system and automated setup are quirky; all require experience to overcome. There is a learning curve in progressing from the Si to the Xi da Vinci surgical platform, but it does not negatively impact outcomes.
Development of a vision non-contact sensing system for telerobotic applications
NASA Astrophysics Data System (ADS)
Karkoub, M.; Her, M.-G.; Ho, M.-I.; Huang, C.-C.
2013-08-01
The study presented here describes a novel vision-based motion detection system for telerobotic operations such as distant surgical procedures. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Colour tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm. The motion of the colour tags is used to actuate a slave robot or a remote system. The tags' motion is determined through image processing using eigenvectors and colour-space morphology, and the relative head, shoulder, and wrist rotation angles are obtained through inverse dynamics and coordinate transformation. A program transforms this motion data into motor control commands and transmits them to a slave robot or remote system over wireless internet. The system performed well even in complex environments, with errors not exceeding 2 pixels and a response time of about 0.1 s. The results of the experiments are available at: http://www.youtube.com/watch?v=yFxLaVWE3f8 and http://www.youtube.com/watch?v=_nvRcOzlWHw
Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting
NASA Astrophysics Data System (ADS)
Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing
2016-03-01
Cell cutting is a significant task in biology study, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed-adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the high-precision nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting under the cell's natural conditions, which is expected to have a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction, and low-invasive cell surgery.
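The distance-regulated speed-adapting strategy can be sketched as a simple piecewise rule: full speed while the nanoknife is far from the cell, a linear ramp-down inside a slow-down radius, and a floor on the approach speed. All numeric parameters below are hypothetical, not the authors' values.

```python
def regulated_speed(distance_um, v_max=5.0, v_min=0.1, slow_radius_um=20.0):
    """Distance-regulated speed adaptation (sketch): full speed while the
    nanoknife is far from the cell, a linear ramp-down inside the
    slow-down radius, and a floor so the approach never stalls.
    All numeric parameters are hypothetical."""
    if distance_um >= slow_radius_um:
        return v_max
    return max(v_min, v_max * distance_um / slow_radius_um)

# Far away: full speed; at 10 um: half speed; almost touching: creep speed.
speeds = [regulated_speed(d) for d in (100.0, 10.0, 0.1)]
```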
Fast vision-based catheter 3D reconstruction
NASA Astrophysics Data System (ADS)
Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.
2016-07-01
Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots from the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute error of 1.74 mm, 3.64 deg for the added noise) of the proposed high-speed algorithms.
Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method
Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter
2015-01-01
Autonomous aerial refueling (AAR) is an essential capability for extending the airborne duration of an unmanned aerial vehicle (UAV) without increasing the size of the aircraft. This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks, combining sensitivity adjustments of a 3D Flash LIDAR camera with computer-vision-based image-processing techniques. The method overcomes the inherent ambiguity of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These include curve-fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space and to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from the 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between itself and the target autonomously. PMID:25970254
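The RANSAC step for estimating a circular drogue rim can be sketched as follows: repeatedly fit a circle to three random points from the point cloud and keep the model with the most inliers, so gross outliers never skew the estimate. The point data and tolerances here are illustrative, and the sketch works in 2D rather than on the paper's full 3D point clouds.

```python
import math
import random

def circle_from_3pts(p1, p2, p3):
    """Circumscribed circle of three 2D points; None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def ransac_circle(points, iters=200, tol=0.1, seed=0):
    """RANSAC: fit circles to random 3-point samples and keep the model
    with the most inliers (points within `tol` of the circle)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        model = circle_from_3pts(*rng.sample(points, 3))
        if model is None:
            continue
        ux, uy, r = model
        inliers = sum(1 for x, y in points
                      if abs(math.hypot(x - ux, y - uy) - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best

# Synthetic "drogue rim": points on a circle of radius 5 centred at (2, 3),
# plus gross outliers that would skew a plain least-squares fit.
pts = [(2 + 5 * math.cos(t), 3 + 5 * math.sin(t))
       for t in (i * 0.3 for i in range(20))]
pts += [(20.0, 20.0), (-15.0, 7.0), (9.0, -11.0)]
cx, cy, r = ransac_circle(pts)
```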
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”
2016-01-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
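Once an obstacle's bottom edge is segmented, its ground distance follows from flat-floor camera geometry: a floor pixel at image row v maps to distance d = h·f/(v − c_y). This is a standard sketch of that geometry with hypothetical camera parameters, not the paper's full IPM-plus-MRF pipeline.

```python
def ground_distance(v_px, cam_height_m, focal_px, cy_px):
    """Flat-floor model: a floor pixel at image row v (below the
    principal point row cy) lies at ground distance d = h * f / (v - cy)."""
    dv = v_px - cy_px
    if dv <= 0:
        raise ValueError("row must lie below the horizon (v > cy)")
    return cam_height_m * focal_px / dv

# Hypothetical camera: 0.3 m above the floor, f = 600 px, cy = 240 px.
d_near = ground_distance(440, 0.3, 600, 240)  # low in the image -> near
d_far = ground_distance(260, 0.3, 600, 240)   # near the horizon -> far
```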
Robotic Surgical Training in an Academic Institution
Chitwood, W. Randolph; Nifong, L. Wiley; Chapman, William H. H.; Felger, Jason E.; Bailey, B. Marcus; Ballint, Tara; Mendleson, Kim G.; Kim, Victor B.; Young, James A.; Albrecht, Robert A.
2001-01-01
Objective To detail robotic procedure development and clinical applications for mitral valve, biliary, and gastric reflux operations, and to implement a multispecialty robotic surgery training curriculum for both surgeons and surgical teams. Summary Background Data Remote, accurate telemanipulation of intracavitary instruments by general and cardiac surgeons is now possible. Complex technologic advancements in surgical robotics require well-designed training programs. Moreover, efficient robotic surgical procedures must be developed methodically and safely implemented clinically. Methods Advanced training on robotic systems provides surgeon confidence when operating in tiny intracavitary spaces. Three-dimensional vision and articulated instrument control are essential. The authors’ two da Vinci robotic systems have been dedicated to procedure development, clinical surgery, and training of surgical specialists. Their center has been the first United States site to train surgeons formally in clinical robotics. Results Established surgeons and residents have been trained using a defined robotic surgical educational curriculum. Also, 30 multispecialty teams have been trained in robotic mechanics and electronics. Initially, robotic procedures were developed experimentally and are described. In the past year the authors have performed 52 robotic-assisted clinical operations: 18 mitral valve repairs, 20 cholecystectomies, and 14 Nissen fundoplications. These respective operations required 108, 28, and 73 minutes of robotic telemanipulation to complete. Procedure times for the last half of the abdominal operations decreased significantly, as did the knot-tying time in mitral operations. There have been no deaths and few complications. One mitral patient had postoperative bleeding. Conclusion Robotic surgery can be performed safely with excellent results. The authors have developed an effective curriculum for training teams in robotic surgery. 
After training, surgeons have applied these methods effectively and safely. PMID:11573041
Three laws of robotics and surgery.
Moran, Michael
2008-08-01
In 1939, Isaac Asimov solidified the modern science fiction genre of robotics in his short story "Strange Playfellow" but altered our thinking about robots in Runaround in 1942 by formulating the Three Laws. He took an engineer's perspective on advanced robotic technologies. Surgical robots by definition violate the first law, yet his discussions are poignant for our understanding of future potential of robotic urologic surgery. We sought to better understand Asimov's visions by reading his fiction and autobiography. We then sought to place his perceptions of science fact next to the Three Laws (he later added a fourth law, the zeroth). Asimov's Three Laws are often quoted in medical journals during discussions about robotic surgery. His First Law states: "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." This philosophy would directly conflict with the application in surgery. In fact, most of his robotic stories deal with robots that come into conflicts with the laws. Robots in his cleverly orchestrated works evolve unique solutions to complex hierarchical conflicts with these laws. Asimov anticipated the coming maelstrom of intelligent robotic technologies with prescient unease. Despite his scholarly intuitions, he was able to fathom medical/surgical applications in many of his works. These fictional robotic physicians were able to overcome the first law and aid in the care and management of the sick/injured. Isaac Asimov published over 500 books on topics ranging from Shakespeare to science. Despite his widespread influence, he refused to visit the MIT robotics laboratory to see current, state-of-the-art systems. He managed to lay the foundation of modern robotic control systems with a human-oriented safety mechanism in his laws. "If knowledge can create problems, it is not through ignorance that we can solve them" (I Asimov).
A simple approach to a vision-guided unmanned vehicle
NASA Astrophysics Data System (ADS)
Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye
2005-10-01
This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets, and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
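The 10x10 grid steering described above can be sketched as: downsample each color mask to a coarse occupancy grid, then let every filled cell push the robot away from its side of the image, weighted by how close it is to the bottom (and hence to the robot). The weighting scheme and sign convention below are assumptions; the team's actual turn magnitudes are not given in the text.

```python
def segment_grid(mask, gw=10, gh=10):
    """Downsample a boolean pixel mask to a gw x gh occupancy grid:
    a cell is 'filled' when any pixel inside it is set."""
    h, w = len(mask), len(mask[0])
    grid = [[False] * gw for _ in range(gh)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                grid[y * gh // h][x * gw // w] = True
    return grid

def turn_command(grid):
    """Each filled cell pushes the robot away from its side of the image,
    weighted by row (lower rows, nearer the robot, count more)."""
    gh, gw = len(grid), len(grid[0])
    push = 0.0
    for row in range(gh):
        for col in range(gw):
            if grid[row][col]:
                lateral = (col - (gw - 1) / 2) / (gw / 2)  # -1 (left) .. +1 (right)
                weight = (row + 1) / gh                     # lower rows count more
                push += lateral * weight
    return -push  # > 0: steer right (obstacles on the left); < 0: steer left

# Obstacle mass in the lower-left of the image should push a right turn.
grid = [[False] * 10 for _ in range(10)]
grid[9][0] = grid[9][1] = True
steer = turn_command(grid)
```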
A Starter's Guide to Artificial Intelligence.
ERIC Educational Resources Information Center
McConnell, Barry A.; McConnell, Nancy J.
1988-01-01
Discussion of the history and development of artificial intelligence (AI) highlights a bibliography of introductory books on various aspects of AI, including AI programing; problem solving; automated reasoning; game playing; natural language; expert systems; machine learning; robotics and vision; critics of AI; and representative software. (LRW)
NASA Astrophysics Data System (ADS)
Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.
2006-10-01
The Virtex-II Pro FPGA is applied to the vision-sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is built on the FPGA's high-speed image processing. The lower-level image-processing algorithm is realized by combining the FPGA fabric with the embedded CPU, which accelerates image processing; the embedded CPU also simplifies the logic design of the interface. Key techniques such as the read-write process, template matching, and convolution are presented, and several modules are simulated. Finally, the modules of this design are compared against implementations on a PC and on a DSP. Because the core of the high-speed image-processing system is an FPGA chip, whose functionality can be conveniently updated, the measurement system is, to a degree, reconfigurable and intelligent.
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion extracts information from multiple images that is more accurate and reliable than information from any single image: because the various images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a main branch of data fusion technology and is widely used in computer vision, remote sensing, robot vision, medical image processing, and military applications. This paper presents the contents and research methods of image fusion, surveys its current status domestically and abroad, and analyzes its development trends.
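The simplest member of the fusion family such surveys cover is pixel-level weighted averaging, sketched below on toy intensity arrays (the weights and data are illustrative):

```python
def fuse_weighted(img_a, img_b, w_a=0.5):
    """Pixel-level weighted-average fusion of two equal-sized 2D
    intensity arrays (lists of rows); w_a weights the first image."""
    w_b = 1.0 - w_a
    return [[w_a * a + w_b * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

# Toy 1x2 "images": the fused result averages corresponding pixels.
fused = fuse_weighted([[0.0, 10.0]], [[10.0, 30.0]], w_a=0.5)
```

More sophisticated schemes (multi-resolution, feature-level, decision-level) replace the fixed weights with content-dependent rules, but they share this per-pixel combination structure.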
Multiple Optical Filter Design Simulation Results
NASA Astrophysics Data System (ADS)
Mendelsohn, J.; Englund, D. C.
1986-10-01
In this paper we continue our investigation of the application of matched filters to robotic vision problems, specifically the tray-picking problem. Our principal interest is the examination of summation effects that arise when the matched-filter memory size is reduced by averaging matched filters. While matched-filtering theory is ideally implemented for pattern recognition or machine vision through optics and optical correlators, the results in this paper were obtained through a digital simulation of the optical process.
Design issues for stereo vision systems used on tele-operated robotic platforms
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, Jim; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-02-01
The use of tele-operated Unmanned Ground Vehicles (UGVs) for military purposes has grown significantly in recent years, with operations in both Iraq and Afghanistan. In both cases the safety of the Soldier or technician performing the mission is improved by the large standoff distances the UGV affords, but the full performance capability of the robotic system is not utilized because of the insufficient depth perception provided by the standard two-dimensional video system, which causes the operator to slow the mission to ensure the safety of the UGV given the uncertainty of the scene perceived in 2D. To address this, Polaris Sensor Technologies has developed, in a series of efforts funded by the Leonard Wood Institute at Ft. Leonard Wood, MO, a prototype Stereo Vision Upgrade (SVU) Kit for the Foster-Miller TALON IV robot which provides the operator with improved depth perception and situational awareness, allowing for shorter mission times and higher success rates. Because multiple 2D cameras are replaced by stereo camera systems in the SVU Kit, and because the needs of the camera systems vary with each phase of a mission, a number of tradeoffs and design choices must be made in developing such a system for robotic tele-operation. Additionally, human factors design criteria drive optical parameters of the camera systems, which must be matched to the display system in use. The problem space for such an upgrade kit is defined, and the choices made in the development of this particular SVU Kit are discussed.
Novel Door-opening Method for Six-legged Robots Based on Only Force Sensing
NASA Astrophysics Data System (ADS)
Chen, Zhi-Jun; Gao, Feng; Pan, Yang
2017-09-01
Current door-opening methods are mainly developed for tracked, wheeled, and biped robots using multi-DOF manipulators and vision systems. However, door-opening methods for six-legged robots are seldom studied, especially methods that operate with a 0-DOF tool and detect using only force sensing. A novel door-opening method for six-legged robots is developed and implemented on a six-parallel-legged robot. The kinematic model of the robot is established, together with a model for measuring the positional relationship between the robot and the door that is based entirely on force sensing. A real-time trajectory planning method and control strategy are designed; the trajectory planning allows the angle between the sagittal axis of the robot body and the normal of the door plane to reach 45°. A 0-DOF tool mounted on the robot body is used for the operation: integrated with the body, the tool gains 6 DOFs and sufficient workspace, and the loose grasp achieved by the tool helps release the internal force in it. Experiments are carried out to validate the method, and the results show that it is effective and robust in opening doors wider than 1 m. This paper thus proposes a door-opening method for six-legged robots that notably uses a 0-DOF tool and only force sensing to detect and open the door.
HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.
Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael
2017-07-01
Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head-mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the 3 factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters based on a 2 × 2 × 2 repeated measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent measures, whereas the effect of the screen output type was significant only for the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Ch'ien, Evelyn
2011-01-01
This paper describes how a linguistic form, rap, can evolve in tandem with technological advances and manifest human-machine creativity. Rather than assuming that the interplay between machines and technology makes humans robotic or machine-like, the paper explores how the pressure of executing artistic visions using technology can drive…
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
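The least-mean-squares motion-estimation step described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the pinhole image-motion model, the function names and all numbers are assumptions.

```python
import numpy as np

def motion_jacobian(x, y, Z, f=1.0):
    """Image-motion Jacobian for a pixel at normalized image coordinates
    (x, y) with stereo depth Z: flow = J @ [vx, vy, vz, wx, wy, wz]."""
    return np.array([
        [-f / Z, 0.0,    x / Z, x * y / f,     -(f + x * x / f),  y],
        [0.0,    -f / Z, y / Z, f + y * y / f, -x * y / f,       -x],
    ])

def estimate_motion(pts, depths, flows, f=1.0):
    """Stack the per-pixel flow equations (3D position from stereoscopy,
    2D flow from optical flow) and solve for the six camera motion
    parameters in the least-squares sense."""
    J = np.vstack([motion_jacobian(x, y, Z, f)
                   for (x, y), Z in zip(pts, depths)])
    b = np.asarray(flows).reshape(-1)
    motion, *_ = np.linalg.lstsq(J, b, rcond=None)
    return motion  # [vx, vy, vz, wx, wy, wz]
```

With noise-free synthetic flow the six parameters are recovered exactly; a robust variant would down-weight pixels belonging to independently moving external objects before solving.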
Robot Comedy Lab: experimenting with the social dynamics of live performance
Katevas, Kleomenis; Healey, Patrick G. T.; Harris, Matthew Tobias
2015-01-01
The success of live comedy depends on a performer's ability to “work” an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot, programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior. People respond more positively when the robot looks at them and negatively when it looks away, and performative gestures also contribute to different patterns of audience response. This demonstrates how the responses of individual audience members depend on the specific interaction they are having with the performer. This work provides insights into how to design more effective, more socially engaging forms of robot interaction that can be used in a variety of service contexts. PMID:26379585
Control of a Quadcopter Aerial Robot Using Optic Flow Sensing
NASA Astrophysics Data System (ADS)
Hurd, Michael Brandon
This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can give a robot the ability to fly in global positioning system (GPS) denied environments, such as indoors. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot: an optic flow algorithm provides odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm can gather and process images at 250 frames/sec; the sensor package weighs 2.5 g and has a footprint of 6 cm2. The odometry value from the optic flow sensor is then used as feedback in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to other types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
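The control loop described above, in which optic-flow odometry feeds a PID controller, can be sketched as follows. This is a minimal illustration with made-up gains and a toy integrator plant, not the thesis implementation.

```python
class PID:
    """Minimal PID controller; gains and timestep are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def hold_heading(target=1.0, steps=1000, dt=0.02):
    """Toy heading-hold loop: the optic-flow odometry value stands in for
    the measured heading, and the plant is a pure integrator."""
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    heading = 0.0
    for _ in range(steps):
        heading += pid.update(target, heading) * dt
    return heading
```

In the real system the measured value would come from the 250 frames/sec optic-flow pipeline rather than a simulated plant.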
Three main paradigms of simultaneous localization and mapping (SLAM) problem
NASA Astrophysics Data System (ADS)
Imani, Vandad; Haataja, Keijo; Toivanen, Pekka
2018-04-01
Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas within computer and machine vision for automated scene commentary and explanation, and it has been a growing research area in robotics in recent years. Using SLAM, a robot can estimate its position at distinct points in time, which yields the robot's trajectory, while also generating a map of the environment. SLAM's distinguishing trait is this joint estimation of the robot's location and the map, and it is effective in various types of environment: indoor, outdoor, aerial, underwater, underground and space. Several approaches have been investigated to apply the SLAM technique in these distinct environments. The purpose of this paper is to provide an accurate, perceptive review of the history of SLAM based on laser/ultrasonic sensors and cameras as perception input, focusing on the three main paradigms of the SLAM problem with their pros and cons. In future work, intelligent methods and new ideas will be applied to visual SLAM to estimate the motion of an intelligent underwater robot and build a feature map of the marine environment.
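The three paradigms usually identified in the SLAM literature are EKF-based, particle-filter-based, and graph-based SLAM (the abstract does not name them). A minimal sketch of the filtering idea underlying the first paradigm, reduced to one dimension with illustrative noise variances:

```python
def kalman_1d(controls, measurements, q=0.01, r=0.1):
    """Minimal 1-D Kalman filter: the robot moves by u each step and
    measures its distance from a landmark at the origin. q and r are
    assumed process and measurement noise variances."""
    x, p = 0.0, 1.0                 # position estimate and its variance
    for u, z in zip(controls, measurements):
        x, p = x + u, p + q         # predict through the motion model
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with the range measurement
        p *= 1.0 - k
    return x, p
```

Full EKF-SLAM jointly estimates the robot pose and every landmark in one covariance matrix; the particle-filter and graph paradigms trade that quadratic cost for sampling or sparse optimization.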
System of launchable mesoscale robots for distributed sensing
NASA Astrophysics Data System (ADS)
Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.
1999-08-01
A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter, and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on the total volume and power consumption of the payloads due to the small size of the robot, and emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single-chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW, about one-fifth the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, yielding higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial lens distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate calibration errors; a high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this step. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
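The distortion model the abstract refers to, radial plus decentering (tangential) terms as used in OpenCV-style calibration, can be sketched in a few lines. This is a plain re-implementation of the standard model for illustration, with arbitrarily chosen coefficients, not the paper's calibration code:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential/decentering (p1, p2)
    distortion to normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

Calibration fits these coefficients (together with the intrinsics) by minimizing reprojection error over the detected checkerboard corners.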
Learning gait of quadruped robot without prior knowledge of the environment
NASA Astrophysics Data System (ADS)
Xu, Tao; Chen, Qijun
2012-09-01
Walking is the basic skill of a legged robot, and one promising way to improve walking performance and adaptation to environment changes is to let the robot learn to walk by itself. Currently, most walking-learning methods rely on a robot vision system or external sensing equipment to evaluate the walking performance of a given set of walking parameters, and are therefore usually applicable only under laboratory conditions, where the environment can be pre-defined. Inspired by the rhythmic swing movement of legged animals during walking, and by how they adjust their gait on different walking surfaces, a concept of walking rhythmic pattern (WRP) is proposed to evaluate the walking character of a legged robot based purely on the robot's walking dynamics. A method to calculate the WRP from onboard acceleration sensor data, using the power spectrum in the frequency domain and various smoothing filters, is also presented. Since the evaluation of the WRP is based only on the walking dynamics of the robot's body, the proposed method requires no prior knowledge of the environment and can thus be applied in unknown environments. A gait learning approach for legged robots based on the WRP and an evolutionary algorithm (EA) is introduced. Using this approach, a quadruped robot can learn its locomotion through onboard sensing in an environment about which it has no prior knowledge. The experimental results prove that a proportional relationship exists between the WRP match score and the walking performance of the legged robot, so the score can be used to evaluate walking performance during gait optimization in unknown environments.
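The power-spectrum step at the heart of the WRP computation can be sketched as follows. This is a generic direct DFT over an acceleration trace, not the authors' implementation; the smoothing filters mentioned in the abstract are omitted.

```python
import math

def power_spectrum(signal):
    """Power spectrum of a real signal via a direct DFT (O(n^2), which
    is acceptable for short onboard acceleration traces)."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        spectrum.append((re * re + im * im) / n)
    return spectrum

def dominant_bin(signal):
    """Index of the strongest non-DC frequency bin, i.e. the dominant
    rhythm in the walking dynamics."""
    spec = power_spectrum(signal)
    return max(range(1, len(spec)), key=spec.__getitem__)
```

A WRP-style score could then compare the spectrum of the current gait against a reference spectrum, without any external sensing.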
Sensor control of robot arc welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1985-01-01
A basic problem in the application of robots to welding, namely how to guide a torch along a weld seam using sensory information, was studied. The aim was to improve the quality and consistency of certain Gas Tungsten Arc welds on the Space Shuttle Main Engine (SSME) that are too complex geometrically for conventional automation and are therefore done by hand. The particular problems associated with SSME manufacturing and weld-seam tracking were analyzed, with an emphasis on computer vision methods. Special interface software was developed for the MINC computer, allowing it to be used first as a test system to check out the robot interface software and later as a development tool for further investigation of sensory systems to be incorporated in welding procedures.
Development of robots and application to industrial processes
NASA Technical Reports Server (NTRS)
Palm, W. J.; Liscano, R.
1984-01-01
An algorithm is presented for using a robot system with a single camera to position in three-dimensional space a slender object for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.
Automated site characterization for robotic sample acquisition systems
NASA Astrophysics Data System (ADS)
Scholl, Marija S.; Eberlein, Susan J.
1993-04-01
A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
3D laptop for defense applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.
A Blueprint of an International Lunar Robotic Village
NASA Technical Reports Server (NTRS)
Alkalai, Leon
2012-01-01
Human civilization is destined to seek, find and develop a second habitable destination in our Solar System besides Earth: the Moon and Mars are the two most likely and credible places based on proximity, available local resources and economics. Recent international missions have brought back valuable information on both the Moon and Mars. The vision is that a permanent presence on the Moon, using advanced robotic systems as precursors to future human settlement, is possible in the near term. An international effort should be initiated to create a permanent robotic village to demonstrate and validate advanced technologies and systems across international boundaries, conduct broad science, explore new regions of the Moon and Mars, develop infrastructure, human habitats and shelters, facilitate the development of commerce, and stimulate public involvement and education.
Working and Learning with Knowledge in the Lobes of a Humanoid's Mind
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David
2003-01-01
Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a joint NASA and Defense Advanced Research Projects Agency (DARPA) project, and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task-level sequences, and sensorimotor association).
Certainty grids for mobile robots
NASA Technical Reports Server (NTRS)
Moravec, H. P.
1987-01-01
A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading while combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can also be extended in the time dimension and used to detect and track moving objects.
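The incremental update behind a certainty grid can be sketched cell by cell with a Bayesian log-odds rule. This is a generic illustration of the idea, not Moravec's original formulation, and the reading probabilities are invented:

```python
import math

def update_cell(prob, p_hit):
    """Bayesian log-odds update of one cell's occupancy probability,
    given a reading whose inverse sensor model says 'occupied with
    probability p_hit'."""
    logodds = math.log(prob / (1.0 - prob)) + math.log(p_hit / (1.0 - p_hit))
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

def fuse(readings, prior=0.5):
    """Fold a sequence of uncertain readings into one cell's belief."""
    prob = prior
    for p in readings:
        prob = update_cell(prob, p)
    return prob
```

Five weak "occupied" readings (0.6 each) sharpen the cell well past any single reading, which is exactly the map-sharpening effect of combining multiple fuzzy measurements described above.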
A modular wireless in vivo surgical robot with multiple surgical applications.
Hawks, Jeff A; Rentschler, Mark E; Farritor, Shane; Oleynikov, Dmitry; Platt, Stephen R
2009-01-01
The use of miniature in vivo robots that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Previous work demonstrates that both mobile and fixed-base robots can successfully operate inside the abdominal cavity. A modular wireless mobile platform has also been developed to provide surgical vision and task assistance. This paper presents an overview of recent test results of several possible surgical applications that can be accommodated by this modular platform. Applications such as a biopsy grasper, stapler and clamp, video camera, and physiological sensors have been integrated into the wireless platform and tested in vivo in a porcine model. The modular platform facilitates rapid development and conversion from one type of surgical task assistance to another. These self-contained surgical devices are much more transportable and much lower in cost than current robotic surgical assistants. These devices could ultimately be carried and deployed by non-medical personnel at the site of an injury. A remotely located surgeon could use these robots to provide critical first response medical intervention.
An experiment in vision based autonomous grasping within a reduced gravity environment
NASA Technical Reports Server (NTRS)
Grimm, K. A.; Erickson, J. D.; Anderson, G.; Chien, C. H.; Hewgill, L.; Littlefield, M.; Norsworthy, R.
1992-01-01
The National Aeronautics and Space Administration's Reduced Gravity Program (RGP) offers opportunities for experimentation in gravities of less than one-g. The Extravehicular Activity Helper/Retriever (EVAHR) robot project of the Automation and Robotics Division at the Lyndon B. Johnson Space Center in Houston, Texas, is undertaking a task that will culminate in a series of tests in simulated zero-g using this facility. A subset of the final robot hardware consisting of a three-dimensional laser mapper, a Robotics Research 807 arm, a Jameson JH-5 hand, and the appropriate interconnect hardware/software will be used. This equipment will be flown on the RGP's KC-135 aircraft. This aircraft will fly a series of parabolas creating the effect of zero-g. During the periods of zero-g, a number of objects will be released in front of the fixed-base robot hardware in both static and dynamic configurations. The system will then inspect the object, determine the object's pose, plan a grasp strategy, and execute the grasp. This must all be accomplished in the approximately 27 seconds of zero-g.
Biomimetic vibrissal sensing for robots
Pearson, Martin J.; Mitchinson, Ben; Sullivan, J. Charles; Pipe, Anthony G.; Prescott, Tony J.
2011-01-01
Active vibrissal touch can be used to replace or to supplement sensory systems such as computer vision and, therefore, improve the sensory capacity of mobile robots. This paper describes how arrays of whisker-like touch sensors have been incorporated onto mobile robot platforms taking inspiration from biology for their morphology and control. There were two motivations for this work: first, to build a physical platform on which to model, and therefore test, recent neuroethological hypotheses about vibrissal touch; second, to exploit the control strategies and morphology observed in the biological analogue to maximize the quality and quantity of tactile sensory information derived from the artificial whisker array. We describe the design of a new whiskered robot, Shrewbot, endowed with a biomimetic array of individually controlled whiskers and a neuroethologically inspired whisking pattern generation mechanism. We then present results showing how the morphology of the whisker array shapes the sensory surface surrounding the robot's head, and demonstrate the impact of active touch control on the sensory information that can be acquired by the robot. We show that adopting bio-inspired, low latency motor control of the rhythmic motion of the whiskers in response to contact-induced stimuli usefully constrains the sensory range, while also maximizing the number of whisker contacts. The robot experiments also demonstrate that the sensory consequences of active touch control can be usefully investigated in biomimetic robots. PMID:21969690
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.
1987-03-01
This document reviews research accomplishments achieved by the staff of the Center for Engineering Systems Advanced Research (CESAR) during the fiscal years 1984 through 1987. The manuscript also describes future CESAR objectives for the 1988-1991 planning horizon, and beyond. As much as possible, the basic research goals are derived from perceived Department of Energy (DOE) needs for increased safety, productivity, and competitiveness in the United States energy producing and consuming facilities. Research areas covered include the HERMIES-II Robot, autonomous robot navigation, hypercube computers, machine vision, and manipulators.
HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.
Lin, Huei-Yung; Wang, Min-Liang
2014-09-04
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.
Plutonium immobilization can loading FY99 component test report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.
2000-06-01
This report summarizes FY99 Can Loading work completed for the Plutonium Immobilization Project and it includes details about the Helium hood, cold pour cans, Can Loading robot, vision system, magnetically coupled ray cart and lifts, system integration, Can Loading glovebox layout, and an FY99 cost table.
Artificial Intelligence and the High School Computer Curriculum.
ERIC Educational Resources Information Center
Dillon, Richard W.
1993-01-01
Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…
A Segway RMP-based robotic transport system
NASA Astrophysics Data System (ADS)
Nguyen, Hoa G.; Kogut, Greg; Barua, Ripan; Burmeister, Aaron; Pezeshkian, Narek; Powell, Darren; Farrington, Nathan; Wimmer, Matt; Cicchetto, Brett; Heng, Chana; Ramirez, Velia
2004-12-01
In the area of logistics, there currently is a capability gap between the one-ton Army robotic Multifunction Utility/Logistics and Equipment (MULE) vehicle and a soldier's backpack. The Unmanned Systems Branch at Space and Naval Warfare Systems Center (SPAWAR Systems Center, or SSC), San Diego, with the assistance of a group of interns from nearby High Tech High School, has demonstrated enabling technologies for a solution that fills this gap. A small robotic transport system has been developed based on the Segway Robotic Mobility Platform (RMP). We have demonstrated teleoperated control of this robotic transport system, and conducted two demonstrations of autonomous behaviors. Both demonstrations involved a robotic transporter following a human leader. In the first demonstration, the transporter used a vision system running a continuously adaptive mean-shift filter to track and follow a human. In the second demonstration, the separation between leader and follower was significantly increased using Global Positioning System (GPS) information. The track of the human leader, with a GPS unit in his backpack, was sent wirelessly to the transporter, also equipped with a GPS unit. The robotic transporter traced the path of the human leader by following these GPS breadcrumbs. We have additionally demonstrated a robotic medical patient transport capability by using the Segway RMP to power a mock-up of the Life Support for Trauma and Transport (LSTAT) patient care platform, on a standard NATO litter carrier. This paper describes the development of our demonstration robotic transport system and the various experiments conducted.
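The GPS-breadcrumb following behavior in the second demonstration can be sketched as a simple waypoint chaser. This is an illustrative toy in planar coordinates, not SSC's code; the step size and arrival radius are assumptions.

```python
import math

def follow_breadcrumbs(start, crumbs, step=0.5, reach=0.3):
    """Chase each GPS breadcrumb in turn, moving at most `step` per
    tick; a crumb counts as reached inside radius `reach`. Returns the
    path the follower traced."""
    x, y = start
    path = [(x, y)]
    for cx, cy in crumbs:
        while math.hypot(cx - x, cy - y) > reach:
            d = math.hypot(cx - x, cy - y)
            x += min(step, d) * (cx - x) / d   # unit vector toward crumb
            y += min(step, d) * (cy - y) / d
            path.append((x, y))
    return path
```

A real implementation would convert GPS fixes to a local planar frame and feed the heading to the RMP's velocity controller rather than teleporting in steps.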
Robotic technology in surgery: current status in 2008.
Murphy, Declan G; Hall, Rohan; Tong, Raymond; Goel, Rajiv; Costello, Anthony J
2008-12-01
There is increasing patient and surgeon interest in robotic-assisted surgery, particularly with the proliferation of da Vinci surgical systems (Intuitive Surgical, Sunnyvale, CA, USA) throughout the world. There is much debate over the usefulness and cost-effectiveness of these systems. The currently available robotic surgical technology is described. Published data relating to the da Vinci system are reviewed and the current status of surgical robotics within Australia and New Zealand is assessed. The first da Vinci system in Australia and New Zealand was installed in 2003. Four systems had been installed by 2006 and seven systems are currently in use. Most of these are based in private hospitals. Technical advantages of this system include 3-D vision, enhanced dexterity and improved ergonomics when compared with standard laparoscopic surgery. Most procedures currently carried out are urological, with cardiac, gynaecological and general surgeons also using this system. The number of patients undergoing robotic-assisted surgery in Australia and New Zealand has increased fivefold in the past 4 years. The most common procedure carried out is robotic-assisted laparoscopic radical prostatectomy. Published data suggest that robotic-assisted surgery is feasible and safe although the installation and recurring costs remain high. There is increasing acceptance of robotic-assisted surgery, especially for urological procedures. The da Vinci surgical system is becoming more widely available in Australia and New Zealand. Other surgical specialties will probably use this technology. Significant costs are associated with robotic technology and it is not yet widely available to public patients.
A robotic wheelchair trainer: design overview and a feasibility study
2010-01-01
Background Experiencing independent mobility is important for children with a severe movement disability, but learning to drive a powered wheelchair can be labor intensive, requiring hand-over-hand assistance from a skilled therapist. Methods To improve accessibility to training, we developed a robotic wheelchair trainer that steers itself along a course marked by a line on the floor using computer vision, haptically guiding the driver's hand in appropriate steering motions using a force feedback joystick, as the driver tries to catch a mobile robot in a game of "robot tag". This paper provides a detailed design description of the computer vision and control system. In addition, we present data from a pilot study in which we used the chair to teach children without motor impairment aged 4-9 (n = 22) to drive the wheelchair in a single training session, in order to verify that the wheelchair could enable learning by the non-impaired motor system, and to establish normative values of learning rates. Results and Discussion Training with haptic guidance from the robotic wheelchair trainer improved the steering ability of children without motor impairment significantly more than training without guidance. We also report the results of a case study with one 8-year-old child with a severe motor impairment due to cerebral palsy, who replicated the single-session training protocol that the non-disabled children participated in. This child also improved steering ability after training with guidance from the joystick by an amount even greater than the children without motor impairment. Conclusions The system not only provided a safe, fun context for automating driver's training, but also enhanced motor learning by the non-impaired motor system, presumably by demonstrating through intuitive movement and force of the joystick itself exemplary control to follow the course. 
The case study indicates that a child with a motor system impaired by CP can also gain a short-term benefit from driver's training with haptic guidance. PMID:20707886
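The abstract does not reproduce the guidance law itself; as a hedged illustration, a force-feedback joystick of this kind might apply a saturated spring-like pull toward the steering command that follows the line (names, gain, and saturation limit are invented for this sketch):

```python
def guidance_force(desired, actual, k=2.0, f_max=1.5):
    """Haptic guidance sketch: a spring-like force pulling the driver's
    joystick deflection toward the desired steering command, clamped
    so the guidance can be overridden and stays safe."""
    f = k * (desired - actual)
    return max(-f_max, min(f_max, f))
```

With a small gain the driver feels a gentle nudge; raising the gain approaches hand-over-hand assistance, which is the trade-off such trainers tune.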
A robotic wheelchair trainer: design overview and a feasibility study.
Marchal-Crespo, Laura; Furumasu, Jan; Reinkensmeyer, David J
2010-08-13
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E
2014-07-01
Self-adaptive robot training of stroke survivors for continuous tracking movements.
Vergaro, Elena; Casadio, Maura; Squeri, Valentina; Giannoni, Psiche; Morasso, Pietro; Sanguineti, Vittorio
2010-03-15
Although robot therapy is progressively becoming an accepted method of treatment for stroke survivors, few studies have investigated how to adapt the robot/subject interaction forces in an automatic way. The paper is a feasibility study of a novel self-adaptive robot controller applied to continuous tracking movements. The haptic robot Braccio di Ferro is used for a tracking task. The proposed control architecture is based on three main modules: 1) a force field generator that combines a nonlinear attractive field and a viscous field; 2) a performance evaluation module; 3) an adaptive controller. The first module operates in a continuous-time fashion; the other two modules operate intermittently and are triggered at the end of the current block of trials. The controller progressively decreases the gain of the force field within a session, but operates in a non-monotonic way between sessions: it remembers the minimum gain achieved in a session and propagates it to the next one, which starts with a block whose gain is greater than the previous one. The initial assistance gains are chosen according to a minimal assistance strategy. The scheme can also be applied with closed eyes in order to enhance the role of proprioception in learning and control. The preliminary results with a small group of patients (10 chronic hemiplegic subjects) show that the scheme is robust and promotes a statistically significant improvement in performance indicators as well as a recalibration of the visual and proprioceptive channels. The results confirm that the minimally assistive, self-adaptive strategy is well tolerated by severely impaired subjects and is beneficial also for less severe patients. The experiments provide detailed information about the stability and robustness of the adaptive controller of robot assistance that could be quite relevant for the design of future large-scale controlled clinical trials. Moreover, the study suggests that including continuous movement in the repertoire of training is acceptable even to rather severely impaired subjects, and confirms the stabilizing effect of alternating vision/no-vision trials already found in previous studies.
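The gain schedule described above (decay within a session, remember the minimum, restart the next session slightly above it) can be sketched as follows; class name, decay and restart factors are illustrative, and the real controller's block-wise update rules are more elaborate:

```python
class AdaptiveAssistance:
    """Sketch of a minimally assistive gain schedule: the force-field
    gain decays across blocks within a session when performance allows,
    and each new session starts above the previous session's minimum."""
    def __init__(self, g0=1.0, decay=0.8, restart=1.25, g_min=0.05):
        self.gain = g0
        self.best = g0          # minimum gain achieved so far
        self.decay = decay      # within-session reduction per block
        self.restart = restart  # next session starts above the minimum
        self.g_min = g_min      # never remove assistance entirely

    def end_block(self, performance_ok):
        if performance_ok:
            self.gain = max(self.g_min, self.gain * self.decay)
        self.best = min(self.best, self.gain)

    def new_session(self):
        self.gain = min(1.0, self.best * self.restart)
```

Two successful blocks take the gain from 1.0 to 0.64; the next session then restarts at 0.8, i.e. above the remembered minimum, matching the non-monotonic between-session behavior described above.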
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, N.S.V.; Kareti, S.; Shi, Weimin
A formal framework for navigating a robot in a geometric terrain cluttered by an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms which can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider algorithms that are shown to navigate correctly without much consideration given to performance parameters such as distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed, or on the ratio of the distance traversed to the shortest path length (as computed if the terrain model were known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.
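The second class of algorithms bounds the ratio of distance traversed to shortest-path length. A classic one-dimensional example of such a competitive-ratio guarantee (the "lost cow" doubling strategy, offered here as an illustration of the concept rather than an algorithm from this report) travels at most 9 times the distance to an unknown target:

```python
def doubling_search(target):
    """Search an infinite line for a target at unknown position/side by
    walking to turning points 1, -2, 4, -8, ...  Returns total distance
    traveled, which is provably at most 9 * |target|."""
    pos, traveled, step = 0, 0, 1
    while True:
        turn = step  # next turning point: 1, -2, 4, -8, ...
        lo, hi = min(pos, turn), max(pos, turn)
        if lo <= target <= hi:
            return traveled + abs(target - pos)  # target on this leg
        traveled += abs(turn - pos)
        pos, step = turn, -2 * step
```

For a target at +3, the walk 0→1→-2→3 covers 9 units, exactly the worst-case factor; a robot that knew the terrain would walk 3.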
NASA Project Constellation Systems Engineering Approach
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel L.
2005-01-01
NASA's Office of Exploration Systems (OExS) is organized to empower the Vision for Space Exploration with transportation systems that result in achievable, affordable, and sustainable human and robotic journeys to the Moon, Mars, and beyond. In the process of delivering these capabilities, the systems engineering function is key to implementing policies, managing mission requirements, and ensuring technical integration and verification of hardware and support systems in a timely, cost-effective manner. The OExS Development Programs Division includes three main areas: (1) human and robotic technology, (2) Project Prometheus for nuclear propulsion development, and (3) Constellation Systems for space transportation systems development, including a Crew Exploration Vehicle (CEV). Constellation Systems include Earth-to-orbit, in-space, and surface transportation systems; maintenance and science instrumentation; and robotic investigators and assistants. In parallel with development of the CEV, robotic explorers will serve as trailblazers to reduce the risk and costs of future human operations on the Moon, as well as missions to other destinations, including Mars. Additional information is included in the original extended abstract.
Speed control for a mobile robot
NASA Astrophysics Data System (ADS)
Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. The speed control of the traction motor is essential for safe operation of a mobile robot; autonomous operation of a vehicle demands behavior that is safe and free of runaway and collisions. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller. The traction motor is controlled via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outside course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.
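The abstract does not give the control law; a common choice for such a supervised traction-motor speed loop is a clamped PI controller, sketched here with purely illustrative gains and a normalized throttle output:

```python
class PISpeedController:
    """Illustrative PI speed loop for a traction motor.  Gains and the
    0..1 throttle range are hypothetical, not from the paper."""
    def __init__(self, kp=0.5, ki=0.1, out_limits=(0.0, 1.0)):
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.lo, self.hi = out_limits

    def update(self, setpoint, measured, dt):
        err = setpoint - measured          # speed error (m/s)
        self.integral += err * dt          # accumulate for steady-state
        u = self.kp * err + self.ki * self.integral
        return max(self.lo, min(self.hi, u))  # clamp throttle command
```

The integral term removes steady-state error on grades; in practice an anti-windup scheme would also freeze the integral while the output is saturated.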
Bhatia, Parisha; Mohamed, Hossam Eldin; Kadi, Abida; Walvekar, Rohan R.
2015-01-01
Robot-assisted thyroid surgery has been the latest advance in the evolution of thyroid surgery after endoscopy-assisted procedures. The advantages of a superior field of vision and the technical advancements of robotic technology have permitted novel remote-access (trans-axillary and retro-auricular) surgical approaches. Interestingly, several remote-access surgical ports using the robot surgical system and endoscopic technique have been customized to avoid the social stigma of a visible scar. Current literature has displayed their various advantages in terms of post-operative outcomes; however, the associated financial burden, together with the additional training and expertise necessary, hinders their widespread adoption into endocrine surgery practices. These approaches offer excellent cosmesis, with a shorter learning curve and reduced discomfort for surgeons operating ergonomically through a robotic console. This review aims to provide details of the various remote-access techniques that are being offered for thyroid resection. Though these have been reported to be safe and feasible approaches for thyroid surgery, their efficacy still requires further evaluation. PMID:26425450
Adaptive Feedback in Local Coordinates for Real-time Vision-Based Motion Control Over Long Distances
NASA Astrophysics Data System (ADS)
Aref, M. M.; Astola, P.; Vihonen, J.; Tabus, I.; Ghabcheloo, R.; Mattila, J.
2018-03-01
We studied the differences in noise effects, the depth-correlated behavior of sensors, and the errors caused by mapping between coordinate systems in robotic applications of machine vision. In particular, the highly range-dependent noise densities for semi-unknown object detection were considered. An equation is proposed to adapt estimation rules to dramatic changes of noise over longer distances. The algorithm also benefits from the smooth feedback of the wheels to overcome the variable latencies of visual perception feedback. An experimental evaluation of the integrated system, with and without the algorithm, is presented to highlight its effectiveness.
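One way to realize this kind of adaptation to range-dependent noise, shown here as a hedged sketch rather than the paper's actual equation, is to let the measurement variance in a scalar Kalman update grow with detection range, so distant detections are trusted less:

```python
def kalman_update(x, p, z, rng, r0=0.01, k_range=0.005):
    """Scalar Kalman measurement update with range-dependent noise.
    x, p: prior estimate and variance; z: measurement; rng: target range.
    r0 and k_range are illustrative constants, not from the paper."""
    r = r0 + k_range * rng ** 2   # measurement variance grows with range
    k = p / (p + r)               # Kalman gain: small when r is large
    x_new = x + k * (z - x)       # far detections barely move the estimate
    p_new = (1 - k) * p
    return x_new, p_new
```

The same measurement residual therefore produces a large correction at close range and a small one at long range, which is the qualitative behavior the proposed adaptation targets.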
Active vision in satellite scene analysis
NASA Technical Reports Server (NTRS)
Naillon, Martine
1994-01-01
In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic, high-level processing (the decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.
Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2012-01-01
Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
A Sustained Proximity Network for Multi-Mission Lunar Exploration
NASA Technical Reports Server (NTRS)
Soloff, Jason A.; Noreen, Gary; Deutsch, Leslie; Israel, David
2005-01-01
The Vision for Space Exploration calls for an aggressive sequence of robotic missions beginning in 2008 to prepare for a human return to the Moon by 2020, with the goal of establishing a sustained human presence beyond low Earth orbit. A key enabler of exploration is reliable, available communication and navigation capabilities to support both human and robotic missions. An adaptable, sustainable communication and navigation architecture has been developed by Goddard Space Flight Center and the Jet Propulsion Laboratory to support human and robotic lunar exploration through the next two decades. A key component of the architecture is scalable deployment, with the infrastructure evolving as needs emerge, allowing NASA and its partner agencies to deploy an interoperable communication and navigation system in an evolutionary way, enabling cost-effective, highly adaptable systems throughout the lunar exploration program.
Development of a table tennis robot for ball interception using visual feedback
NASA Astrophysics Data System (ADS)
Parnichkun, Manukid; Thalagoda, Janitha A.
2016-07-01
This paper presents a concept of intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within the bounded limit. It employs computer vision to localize the ball. For ball identification, Colour-Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept the table tennis ball using image processing alone; therefore, a projectile motion model is employed to predict the final destination of the ball.
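The prediction step of such a projectile motion model can be sketched as follows: solve the vertical quadratic for the time at which the ball descends to the interception plane, then extrapolate the horizontal coordinates. This is a gravity-only model that ignores drag and spin; the function and argument names are illustrative:

```python
import math

def predict_intercept(p0, v0, z_plane, g=9.81):
    """Given ball position p0 = (x, y, z) in m and velocity v0 in m/s,
    return (x, y, t) where the ball crosses height z_plane on the way
    down, or None if it never reaches that plane."""
    x0, y0, z0 = p0
    vx, vy, vz = v0
    # z(t) = z0 + vz*t - 0.5*g*t^2 = z_plane  ->  a*t^2 + b*t + c = 0
    a, b, c = -0.5 * g, vz, z0 - z_plane
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ball never reaches the plane
    t = (-b - math.sqrt(disc)) / (2 * a)  # descending (later) root
    if t < 0:
        return None
    return (x0 + vx * t, y0 + vy * t, t)
```

With only two low-rate camera fixes for position and velocity, this closed-form extrapolation gives the robot a target point well before the ball arrives, which is exactly what the slow vision pipeline requires.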
Self-development of visual space perception by learning from the hand
NASA Astrophysics Data System (ADS)
Chung, Jae-Moon; Ohnishi, Noboru
1998-10-01
Animals are thought to develop the ability to interpret the images captured on their retina by themselves, gradually from birth and without an external supervisor. We think that this visual function is acquired together with the development of hand reaching and grasping operations, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception develops in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. The experimental results may help validate the method as a description of how the visual space perception of biological systems develops. In addition, the description gives a way to self-calibrate the vision of an intelligent robot in a learn-by-doing manner, without external supervision.
2003-03-22
KENNEDY SPACE CENTER, FLA. - Members of the Merritt Island and Edgewood Middle School students/Lockheed Martin team maneuver their robot during competition. They are participating in the 2003 Southeastern Regional FIRST Robotic Competition being held at the University of Central Florida (UCF) in Orlando, March 20-23. Forty teams from around the country are participating in the event that pits team-built gladiator robots against each other in an athletic-style competition. The teams are sponsored by NASA-Kennedy Space Center, The Boeing Company/Brevard Community College, and Lockheed Martin Space Operations/Mission Systems for the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The vision of FIRST is to inspire in the youth of our nation an appreciation of science and technology and an understanding that mastering these disciplines can enrich the lives of all mankind.
2003-03-22
KENNEDY SPACE CENTER, FLA. - Members of the Merritt Island and Edgewood Middle School students/Lockheed Martin team look over their robot. They are participating in the 2003 Southeastern Regional FIRST Robotic Competition being held at the University of Central Florida (UCF) in Orlando, March 20-23.
2003-03-22
KENNEDY SPACE CENTER, FLA. -- The Merritt Island and Edgewood Middle School students/Lockheed Martin team, participating in the 2003 Southeastern Regional FIRST Robotic Competition, work on their team-built robot. The competition is being held at the University of Central Florida (UCF) in Orlando, March 20-23.
NASA Technical Reports Server (NTRS)
2003-01-01
KENNEDY SPACE CENTER, FLA. - The NASA/Kennedy Space Center-sponsored student team (in pink wigs, right) demonstrates their robot's abilities during the 2003 Southeastern Regional FIRST Robotic Competition. The competition is being held at the University of Central Florida (UCF) in Orlando, March 20-23.
Adaptive multisensor fusion for planetary exploration rovers
NASA Technical Reports Server (NTRS)
Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri
1992-01-01
The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices, ranging from visible to microwave wavelengths, to fulfill the needs of perception for space robotics. Based on the illumination conditions and knowledge of the sensors' capabilities, the designed perception system should automatically select the best subset of sensors, and their sensing modalities, that will allow the perception and interpretation of the environment. Then, based on theoretical reflectance and emittance models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature and roughness. The theoretical concepts, the design and first results of the multisensor perception system are presented.
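When several sensing modalities report the same surface property, a standard fusion building block, consistent with (though not quoted from) this work, is the inverse-variance weighted average, which trusts each sensor in proportion to its precision:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent sensor estimates.
    estimates: list of (value, variance) pairs for one surface property.
    Returns the fused value and its (smaller) fused variance."""
    w_sum = sum(1.0 / var for _, var in estimates)
    value = sum(v / var for v, var in estimates) / w_sum
    return value, 1.0 / w_sum
```

Because the fused variance 1/sum(1/var_i) is always below the smallest input variance, adding a noisy microwave estimate to a precise visible-band one can only refine the result, never degrade it, provided the errors are independent.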
Ortiz Oshiro, Elena; Ramos Carrasco, Angel; Moreno Sierra, Jesús; Pardo Martínez, Cristina; Galante Romo, Isabel; Bullón Sopelana, Fernando; Coronado Martín, Pluvio; Mansilla García, Iván; Escudero Mate, María; Vidart Aragón, José A; Silmi Moyano, Angel; Alvarez Fernández-Represa, Jesús
2010-02-01
The da Vinci system (Intuitive Surgical) is a surgical telemanipulator providing many technical advantages over the conventional laparoscopic approach (3-D vision, ergonomics, highly precise movements, EndoWrist instrumentation...) and it has been applied to several specialties throughout the world since 2000. The first Spanish public hospital incorporating this robotic technology was Hospital Clinico San Carlos (HCSC) in Madrid, in July 2006. We present the multidisciplinary organization and the clinical, research and training outcomes of the Robotic Surgery Plan developed at the HCSC. Starting with joint management and a shared scrub-nurse team, the General and Digestive Surgery, Urology and Gynaecology Departments were progressively incorporated into the Robotic Surgery Plan, with procedures increasing in complexity. A number of intra- and extra-hospital teaching and information activities were planned to report on the Robotic Surgery Plan. Between July 2006 and July 2008, 306 patients were operated on: 169 by General Surgery, 107 by Urology and 30 by Gynaecology teams. The outcomes showed feasibility and a short learning curve. The educational plan included residents and staff interested in the application of robotic technology. The structured and gradual incorporation of robotic surgery throughout the PCR-HCSC has made it easier to learn, to share the designed infrastructure, and to coordinate information activities and multidisciplinary collaboration. This preliminary experience has shown the efficiency of an adequate organization and a motivated team. Copyright 2009 AEC. Published by Elsevier Espana. All rights reserved.
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) helps achieve this goal: by combining auditory, visual, and tactile modalities, it offers redundancy and a level of communication superior to single-mode interaction. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized in the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of gesture interpretation in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are here used to deliver the equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers and to measure the classification accuracy of visual signal interfaces, and provides an integration example including two robotic platforms.
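The IMU-based gesture classification described above can be sketched, under simplifying assumptions, as a nearest-centroid classifier over summary features of accelerometer/gyroscope traces. The feature layout, gesture names, and function names below are illustrative, not taken from the paper:

```python
import numpy as np

def train_centroids(features, labels):
    """Compute one mean feature vector per gesture class.
    `features` is a list of fixed-length IMU feature vectors
    (e.g. per-axis means/variances of an arm-motion trace)."""
    centroids = {}
    for c in sorted(set(labels)):
        centroids[c] = np.mean(
            [f for f, l in zip(features, labels) if l == c], axis=0)
    return centroids

def classify_gesture(feature, centroids):
    """Assign a new trace to the gesture with the nearest centroid."""
    return min(centroids,
               key=lambda c: np.linalg.norm(feature - centroids[c]))
```

A practical system would use richer features and a stronger classifier, but the training/classification split is the same.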
The Summer Robotic Autonomy Course
NASA Technical Reports Server (NTRS)
Nourbakhsh, Illah R.
2002-01-01
We offered a first Robotic Autonomy course this summer, held at NASA/Ames' new NASA Research Park, for approximately 30 high school students. In this 7-week course, students worked in ten teams to build and then program advanced autonomous robots capable of visual processing and high-speed wireless communication. The course made use of challenge-based curricula, culminating each week with a Wednesday Challenge Day and a Friday Exhibition and Contest Day. Robotic Autonomy provided a comprehensive grounding in elementary robotics, including basic electronics, electronics evaluation, microprocessor programming, real-time control, and robot mechanics and kinematics. The course then continued the educational process by introducing higher-level perception, action and autonomy topics, including teleoperation, visual servoing, intelligent scheduling and planning, and cooperative problem-solving. We were able to deliver such a comprehensive, high-level education in robotic autonomy for two reasons. First, the content resulted from close collaboration between the CMU Robotics Institute, researchers in the Information Sciences and Technology Directorate, and various education program/project managers at NASA/Ames. This collaboration produced not only educational content, but will also be central to the formative and summative evaluations of the course for further refinement. Second, CMU rapid-prototyping skills, as well as the PI's low-overhead perception and locomotion research projects, enabled the design and delivery of affordable robot kits with unprecedented sensory-locomotory capability. Each Trikebot robot was capable of both indoor locomotion and high-speed outdoor motion and was equipped with a high-speed vision system coupled to a low-cost pan/tilt head. As planned, following the completion of Robotic Autonomy, each student took home an autonomous, competent robot.
This robot is the student's to keep, as she explores robotics with an extremely capable tool in the midst of a new community for roboticists. CMU provided undergraduate course credit for this official course, 16-162U, for 13 students, with all other students receiving course credit from National Hispanic University.
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena in our environment, yet they share some common mathematical functions. These mathematical functions are cast into the neural layers distributed throughout our sensory regions, sensory information transmission channels and the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual-cortical, for the development of a robust computer vision system. This field of research is not only intriguing, but also offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies depend heavily on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications: vision prostheses for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. Two images are acquired from different vantage points and then compared, using geometric transformations, to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and the use in power-limited applications. Evaluated here is a technique in which a single monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is used to generate odometry measurements. The visual odometry measurements are intended to serve as control inputs, or as measurements in a sensor fusion algorithm with low-cost MEMS-based inertial sensors, to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization in GPS-denied environments.
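For a downward-pointing camera over a roughly planar surface, the frame-to-frame motion reduces to a 2-D rigid transform of the tracked ground features. A minimal sketch of that estimation step, assuming feature matching (e.g. by optical flow) has already been done, is a least-squares Kabsch/Procrustes fit; the function name and interface are ours:

```python
import numpy as np

def estimate_planar_motion(pts_prev, pts_curr):
    """Least-squares 2-D rigid motion (rotation R, translation t) from
    matched feature points, so that pts_curr ~= pts_prev @ R.T + t.
    Kabsch/Procrustes: center both point sets, SVD the cross-covariance,
    and correct for a possible reflection."""
    c_prev = pts_prev.mean(axis=0)
    c_curr = pts_curr.mean(axis=0)
    P = pts_prev - c_prev
    Q = pts_curr - c_curr
    H = P.T @ Q                       # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T  # proper rotation (det = +1)
    t = c_curr - R @ c_prev
    return R, t
```

With known camera height and intrinsics, the pixel-space translation converts to metric displacement, which is then accumulated into the odometry track.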
Information-Driven Autonomous Exploration for a Vision-Based Mav
NASA Astrophysics Data System (ADS)
Palazzolo, E.; Stachniss, C.
2017-08-01
Most micro aerial vehicles (MAVs) are flown manually by a pilot. When it comes to autonomous exploration with camera-equipped MAVs, a good exploration strategy is needed to cover an unknown 3D environment and build an accurate map of the scene. In particular, the robot must select appropriate viewpoints to acquire informative measurements. In this paper, we present an approach that computes, in real time, a smooth flight path for the exploration of a 3D environment using a vision-based MAV. We assume a known bounding box of the object or building to explore, and our approach iteratively computes the next best viewpoint using a utility function that considers the expected information gain of new measurements, the distance between viewpoints, and the smoothness of the flight trajectories. In addition, the algorithm takes into account the elapsed time of the exploration run so as to safely land the MAV at its starting point after a user-specified time. We implemented our algorithm, and our experiments suggest that it allows for a precise reconstruction of the 3D environment while guiding the robot smoothly through the scene.
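The kind of utility function described, trading expected information gain against travel distance and trajectory smoothness, can be sketched as below. The linear combination, the weights, and the names are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def view_utility(candidate, pose, prev_direction, expected_gain,
                 w_dist=0.5, w_smooth=0.3):
    """Score one candidate viewpoint: reward expected information gain,
    penalize travel distance, and reward small turns (smooth paths)."""
    offset = candidate - pose
    dist = np.linalg.norm(offset)
    direction = offset / max(dist, 1e-9)
    smoothness = float(direction @ prev_direction)  # cosine of turn angle
    return expected_gain - w_dist * dist + w_smooth * smoothness

def next_best_view(candidates, gains, pose, prev_direction):
    """Pick the index of the candidate viewpoint with highest utility."""
    scores = [view_utility(c, pose, prev_direction, g)
              for c, g in zip(candidates, gains)]
    return int(np.argmax(scores))
```

In a full system the expected gain would come from ray-casting the current map, and the time budget would add a further term steering the MAV back toward its landing point.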
A Scalable Distributed Approach to Mobile Robot Vision
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.
1997-01-01
This paper documents our progress during the first year of work on our original proposal, 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system, perceptual schemas represent objects as graphs of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing make simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently of the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in the recognition of the object those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments demonstrating the usefulness of this formulation are described, followed by a brief overview of our more recent progress (after the first year).
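The variable-geometry subwindows mentioned above amount to cropping a bounded region around each feature's predicted location, clipped to the image; a tracker then only searches inside that small region instead of the full frame. A minimal sketch (names and interface are ours):

```python
import numpy as np

def feature_subwindow(image, center, half_size):
    """Return the subwindow of `image` around `center` = (row, col),
    clipped to the image bounds, together with its top-left corner so
    that matches can be mapped back to full-image coordinates."""
    rows, cols = image.shape[:2]
    r0 = max(center[0] - half_size, 0)
    r1 = min(center[0] + half_size + 1, rows)
    c0 = max(center[1] - half_size, 0)
    c1 = min(center[1] + half_size + 1, cols)
    return image[r0:r1, c0:c1], (r0, c0)
```

Keeping the subwindow small is what bounds the per-feature cost and lets many feature agents run in real time on general-purpose processors.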
Control of Synchronization Regimes in Networks of Mobile Interacting Agents
NASA Astrophysics Data System (ADS)
Perez-Diaz, Fernando; Zillmer, Ruediger; Groß, Roderich
2017-05-01
We investigate synchronization in a population of mobile pulse-coupled agents, with a view towards implementations in swarm-robotics systems and mobile sensor networks. Previous theoretical approaches dealt with range-based and nearest-neighbor interactions. In the latter case, a synchronization-hindering regime is found at intermediate agent mobility. We investigate the robustness of this intermediate regime under practical scenarios. We show that synchronization in the intermediate regime can be predicted by means of a suitable metric of the phase response curve. Furthermore, we study more realistic K-nearest-neighbor and cone-of-vision interactions, showing that it is possible to control the extent of the synchronization-hindering region by appropriately tuning the size of the neighborhood. To assess the effect of noise, we analyze the propagation of perturbations over the network and draw an analogy between the response in the hindering regime and stable chaos. Our findings reveal the conditions for the control of clock or activity synchronization of agents with intermediate mobility. In addition, the emergence of the intermediate regime is validated experimentally using a swarm of physical robots with cone-of-vision interactions.
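The cone-of-vision interaction studied above can be sketched as a neighbor rule: agent j is a neighbor of agent i if it lies within i's sensing radius and within a half-angle of i's heading. The following is a plain geometric sketch with our own names and defaults, not the authors' implementation:

```python
import numpy as np

def cone_of_vision_neighbors(positions, headings, half_angle,
                             radius=np.inf):
    """For each 2-D agent, list indices of agents inside its vision
    cone: closer than `radius` and within `half_angle` (radians) of
    the agent's heading direction."""
    n = len(positions)
    neighbors = []
    for i in range(n):
        fwd = np.array([np.cos(headings[i]), np.sin(headings[i])])
        idx = []
        for j in range(n):
            if j == i:
                continue
            d = positions[j] - positions[i]
            r = np.linalg.norm(d)
            if r == 0 or r > radius:
                continue
            if (d @ fwd) / r >= np.cos(half_angle):
                idx.append(j)
        neighbors.append(idx)
    return neighbors
```

Shrinking `half_angle` or `radius` reduces the neighborhood size, which is the tuning knob the paper uses to control the extent of the synchronization-hindering region.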
Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.
Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos
2018-03-25
New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, several problems related to image processing make the use of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computation, which decreases the performance of control algorithms. In this paper, a new approach is proposed that corrects several sources of visual distortion in a single computing step. The goal of this system is to compute the tilt angle of an object transported by a robot, minimizing the inherent image errors and increasing computing speed. After capturing the image, the system extracts the angle using a fuzzy filter that corrects all the distortions at once, obtaining the real angle in a single processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been validated experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) at University Carlos III of Madrid.
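A fuzzy filter of the general kind described, mapping a raw measured angle directly to a corrected one, can be sketched as zero-order Takagi-Sugeno inference: Gaussian memberships weight constant rule consequents. The rule centers, widths, and consequents below are placeholders; in the paper they are learned from experimental data by neuro-fuzzy training:

```python
import numpy as np

def ts_fuzzy_correct(x, centers, sigmas, consequents):
    """Zero-order Takagi-Sugeno inference: each rule fires with a
    Gaussian membership of the raw measurement x; the corrected value
    is the firing-strength-weighted average of the rule consequents."""
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)
    return float((w * consequents).sum() / w.sum())
```

Because the rules are trained on real measurements, the learned consequents fold lens distortion and the other error sources into one lookup, which is what removes the separate undistortion step from the control loop.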